IET Systems Biology
. 2014 Dec 1;8(6):251–267. doi: 10.1049/iet-syb.2014.0025

A technique for determining the signs of sensitivities of steady states in chemical reaction networks

Eduardo D Sontag 1,
PMCID: PMC5653976  NIHMSID: NIHMS913244  PMID: 25478700

Abstract

This paper studies the direction of change of steady states to parameter perturbations in chemical reaction networks, and, in particular, to changes in conserved quantities. Theoretical considerations lead to the formulation of a computational procedure that provides a set of possible signs of such sensitivities. The procedure is purely algebraic and combinatorial, only using information on stoichiometry, and is independent of the values of kinetic constants. Three examples of important intracellular signal transduction models are worked out as an illustration. In these examples, the set of signs found is minimal, but there is no general guarantee that the set found will always be minimal in other examples. The paper also briefly discusses the relationship of the sign problem to the question of uniqueness of steady states in stoichiometry classes.

Inspec keywords: chemical reactions, biochemistry, cellular biophysics, algebra, combinatorial mathematics, stoichiometry

Other keywords: steady states, chemical reaction networks, parameter perturbations, algebraic method, combinatorial method, stoichiometry, intracellular signal transduction models

1 Introduction

A key question in the mathematical analysis of chemical reaction networks is the characterisation of sensitivities of steady states to parameter perturbations [1–7]. In the time scale of cellular signalling, assuming no turn‐over due to expression and degradation or dilution, one such parameter could be, for example, the total concentration of a certain enzyme in its various activity states. The value of this parameter might be manipulated experimentally in various forms in order to achieve knock‐downs or up‐regulation. Often, especially in the context of inhibitors for therapeutic purposes, it is desirable to be able to predict the sign of the effect of such perturbations on states, in a manner that depends only on the structure of the network of reactions and not on the actual values of other parameters, such as kinetic constants, which are typically very poorly characterised.

1.1 An example

We introduce the problem to be studied through an example, an enzymatic network consisting of a cascade of two reversible covalent modifications, see Fig. 1.

Fig. 1 An enzymatic cascade

Specifically, we consider the following reaction network:

M0 + E ⇌ A → M1 + E
M1 + G ⇌ B → M0 + G
N0 + M1 ⇌ C → N1 + M1
N1 + F ⇌ D → N0 + F        (1)

Here E is a constitutively active kinase which drives a phosphorylation reaction in which a substrate M0 is converted to an active form M1, which can be dephosphorylated back into inactive form by a constitutively active phosphatase G. There are two intermediate enzyme–substrate complexes, A = M0E and B = M1G, for these enzymatic reactions. The active form M1 is itself a kinase which drives a phosphorylation reaction in which a second substrate N0 is converted to an active form N1, which can be dephosphorylated back into inactive form by a constitutively active phosphatase F. There are also two intermediate enzyme–substrate complexes, C = N0M1 and D = N1F, for these last enzymatic reactions. In cell signalling, one typically views (1) as a cascade of a subsystem described by the first group of reactions, involving the enzyme M in its various forms, and a second subsystem described by N in its various forms, as diagrammed in Fig. 1.

An instance of great biological interest is provided by the proteins from MAPK/ERK pathways. There are several different MAPK (“mitogen‐activated protein kinase”) pathways in each cell of a given organism, as well as in cells of different organisms, but they all share the same basic architecture, comprising a set of phosphorylation/dephosphorylation covalent modification cycles (sometimes with multiple phosphorylation steps in each subsystem). They are found in all eukaryotes [8–11], and are key participants in the regulation of some of the most important cell processes, from cell division and gene expression to differentiation and apoptosis. The targeting of MAPK/ERK components is the focus of current‐generation drugs to treat advanced melanomas and a wide range of other tumours, including lung and thyroid cancers [12]. Normally, there are three, rather than two, components to MAPK cascades, corresponding to proteins generically called MAPK, MAPKK (“MAPK kinase”), and MAPKKK (“MAPKK kinase”), a typical example being given by Erk, Mek, and Raf respectively. Fig. 2 shows several typical MAPK pathways in mammalian cells. In our example, “M” and “N” could be MAPKK and MAPK respectively, or MAPKKK and MAPKK respectively.

Fig. 2 MAPK pathways in mammalian cells (image freely licensed, from the Creative Commons media file repository)

Let us denote the concentrations of the various species in (1) using the corresponding lower case letters

(e,m0,a,m1,g,b,n0,c,n1,f,d)

We assume mass action kinetics for each reaction, and an ordinary differential equation (ODE) model. For example, the forward reaction M0 + E → A will proceed at a rate k·m0·e, where k > 0 is a kinetic constant, so the differential equation for the state component e will have a term “−k·m0·e”. A set of independent conservation laws (corresponding to a basis of the left nullspace of the stoichiometry matrix in the usual sense, see Appendix) for this system is given by:

e+a=ET (2)
g+b=GT (3)
f+d=FT (4)
m0+a+m1+b+c=MT (5)

and

n0+c+n1+d=NT (6)

We may think of ET as the total amount of the constitutively active enzyme (free or in complex), GT and FT as the total amounts of the phosphatases, MT as the total amount of the first substrate in all free and bound forms, and NT as the total amount of the second substrate in all free and bound forms.
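As a check on (2)–(6), each conserved total corresponds to a left-nullspace vector of the stoichiometry matrix of network (1). The following Python sketch is illustrative only (the species ordering and the splitting of (1) into twelve elementary mass-action steps are my own encoding, not taken from the paper's MATLAB script):

```python
# Verifying the conservation laws (2)-(6): each conserved total gives a
# left-nullspace vector rho of the stoichiometry matrix Gamma of (1).
species = ['e', 'm0', 'a', 'm1', 'g', 'b', 'n0', 'c', 'n1', 'f', 'd']
idx = {s: i for i, s in enumerate(species)}

reactions = [  # (reactants, products) for each elementary step of (1)
    (['m0', 'e'], ['a']), (['a'], ['m0', 'e']), (['a'], ['m1', 'e']),
    (['m1', 'g'], ['b']), (['b'], ['m1', 'g']), (['b'], ['m0', 'g']),
    (['n0', 'm1'], ['c']), (['c'], ['n0', 'm1']), (['c'], ['n1', 'm1']),
    (['n1', 'f'], ['d']), (['d'], ['n1', 'f']), (['d'], ['n0', 'f']),
]

# stoichiometry matrix Gamma (rows = species, columns = reactions)
Gamma = [[0] * len(reactions) for _ in species]
for k, (reac, prod) in enumerate(reactions):
    for s in reac:
        Gamma[idx[s]][k] -= 1
    for s in prod:
        Gamma[idx[s]][k] += 1

laws = {  # conserved totals (2)-(6)
    'ET': ['e', 'a'], 'GT': ['g', 'b'], 'FT': ['f', 'd'],
    'MT': ['m0', 'a', 'm1', 'b', 'c'], 'NT': ['n0', 'c', 'n1', 'd'],
}
for members in laws.values():
    rho = [1 if s in members else 0 for s in species]
    # rho * Gamma = 0, i.e. the total is constant along trajectories
    assert all(sum(rho[i] * Gamma[i][k] for i in range(len(species))) == 0
               for k in range(len(reactions)))
```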

The question that we wish to study is: how do steady states of the system change upon a change in one of these conserved quantities? Our interest is especially in understanding how, for example, a variation in the total amount of the second phosphatase FT “backwards” affects the steady state of the first component of the system. Specifically, we wish to determine the direction of change (increase or decrease) in individual steady‐state components when such a parameter is perturbed. Moreover, we would like to find information that is robust to the actual values of the kinetic constants in each reaction. Experimental perturbations of quantities such as FT are implemented, in practice, through genetic, biochemical, and physical methods, including small‐molecule kinase inhibitors, changes in gene expression, repression of transcription by siRNAs, or laser trapping with optical tweezers.

Of course, there are no true “forward” and “backward” directions: the system is tightly connected, and the input/output formalism of control theory is inadequate as a paradigm (a point that was much emphasised by Willems in his work on behavioural foundations of systems theory [13]). Nonetheless, the idea of unidirectional information flow in MAPK and other cascades is well‐established and has biological substance, through the ultimate transfer of information from cell surface receptors to gene expression. This question of “backward propagation” of effects has been the subject of considerable research in the context of modularity of biological systems [14, 15] and, specifically, in the context of the “retroactivity” phenomenon [16–23]. Retroactivity is a fundamental systems‐engineering issue that arises when interconnecting biological subsystems, just as with electrical or mechanical systems: the effect of “loads” on the “output” of a system in effect creates biochemical “impedance” connections that are not obvious from a unidirectional signal‐flow view of information processing.

When we apply the theory to be developed in this paper, we find that perturbations of the total second substrate, NT, and perturbations of the second phosphatase, FT, both lead to changes in “upstream” steady states. This is an instance of the retroactivity phenomenon. More interestingly, these two types of perturbations of the “downstream” layer have opposite effects on steady‐state concentrations. This prediction has been tested experimentally and found to be correct [7].

We now turn to a precise problem statement, theoretical developments, and the description of an algorithm that addresses the question of directionality of changes in steady states upon parameter perturbations. We have developed a MATLAB® script, “CRNSESI” (Chemical Reaction Network SEnsitivity SIgns) that implements our procedure. After this, we return to the motivating example and display the signs of state changes for perturbations in each of the conserved quantities, as obtained from the use of CRNSESI. While in this example it turns out that the signs of state variations are unambiguously determined, such is not the case with other examples. To illustrate this lack of uniqueness, we provide a second example, a simple model of a phosphotransfer system, that exhibits ambiguity in one of the state components.

2 Preliminaries: general systems

We start with arbitrary systems of ODEs

x˙(t)=f(x(t)) (7)

The vectors x are assumed to lie in the positive orthant R+^(nS) of R^(nS), that is, x = (x1, …, xnS)ᵀ with each xi > 0, and f is a differentiable vector field, mapping R+^(nS) into R^(nS). We later specialise to ODEs that describe chemical reaction networks (CRNs), for which the abstract procedure to be described next can be made computationally explicit. In the latter context, we think of the coordinates xi(t) of x as describing the concentrations of various chemical species Si, i = 1, …, nS.

Suppose that xλ describes a λ ‐parametrised smooth curve of steady states for the system (7), where λ is a scalar parameter ranging over some open interval Λ. The steady‐state condition amounts to asking that

f(xλ)=0 (8)

for all values of the parameter λ ∈ Λ.

In addition to (8), we also assume that the steady states of interest are constrained by a set of algebraic equations

g1(xλ) = 0, g2(xλ) = 0, …, gnC(xλ) = 0 (9)

where nC is some positive integer (which we take to be zero when there are no additional constraints). We write simply g(xλ) = 0, where g : R+^(nS) → R^(nC) is a differentiable mapping whose components are the gi's. Some or all gi might be linear functions, representing moieties or stoichiometric constraints, but non‐linear constraints will be useful when treating certain examples, as will be discussed later.

Let us denote by

ξλ := dxλ/dλ ∈ R^(nS×1)

the derivative of the vector function xλ with respect to λ, viewed as a function Λ → R^(nS×1).

We are interested in answering the following question:

what are the signs of the entries of ξλ?

Obviously, the answer to this question will, typically, depend on the chosen value of λ. The computation of the steady state xλ as a function of λ will ordinarily involve the numerical approximate solution of non‐linear algebraic equations, or simulation of differential equations, and has to be repeated for each individual parameter λ. Our aim is, instead, to provide conditions that allow one to put constraints on these signs independently of the specific λ, and even independently of other parameters that might appear in the specification of f and of g, such as kinetic constants, and to do so using only linear algebraic and logical operations, with no recourse to numerical approximations.

Proceeding in complete generality, we take the derivative with respect to λ in (8), so that, by the chain rule, we have that f′(xλ) ξλ = 0, where f′(x) denotes the Jacobian matrix of f evaluated at a state x. In other words,

ξλ ∈ N(f′(xλ)) (10)

where N(f′(x)) denotes the nullspace of the matrix f′(x). Similarly, we have that

ξλ ∈ N(g′(xλ)) (11)

The reason for introducing f and g separately will become apparent later: we will be asking that each of the nC × nS entries of the Jacobian matrix of g should not change sign over the state space (which happens, in particular, when g is linear, as is the case with stoichiometric constraints). No similar requirement will be made of f; instead, we will study the special case in which f represents the dynamics of a CRN.

2.1 Notations for signs of vectors and of subspaces

We use the following sign notations. For any (row or column) vector u with real entries, the vector of signs of entries of u, denoted sign u, is the (row or column) vector with entries in the set {−1, 0, 1} whose i th coordinate satisfies:

(sign u)i = −1 if ui < 0;  1 if ui > 0;  0 if ui = 0

(The function sign is sometimes called the “signature function” when viewed as a map R^n → {−1, 0, 1}^n.) More generally, for any subspace W of vectors with real entries, we define

sign W = {sign v | v ∈ W}

Computing sign W amounts to determining which orthants are intersected by W. This combinatorial problem is studied in the theory of oriented matroids: given a basis of W, the signs of W represent the oriented matroid associated to a matrix that lists the basis as its columns, which is the set of “covectors” of this basis. See [24] for details and further theoretical discussion.

We also introduce the positive and negative parts of a vector u, denoted by u+ and u respectively, as follows:

(u+)i = ui if ui > 0, and 0 if ui ≤ 0;  (u−)i = −ui if ui < 0, and 0 if ui ≥ 0

Note that u = u+ − u−, sign u = sign u+ − sign u−, and:

(sign u)+ = sign(u+),  (sign u)− = sign(u−) (12)

Suppose that u ∈ R^(1×n) and v ∈ R^(n×1), for some positive integer n. The equality:

sign(uv)=sign(sign(u)sign(v)) (13)

need not hold for arbitrary vectors; for example, if u = (1, −1/4, −1/4, −1/4) and v = (1, 1, 1, 1)ᵀ then sign(uv) = sign(1/4) = 1, but, on the other hand,

sign(sign(u) sign(v)) = sign((1, −1, −1, −1)(1, 1, 1, 1)ᵀ) = sign(−2) = −1,

which is not equal to sign(uv). However, equality (13) is true provided that we assume that (a) u− = 0 or u+ = 0 (i.e. either ui ≥ 0 for all i, or ui ≤ 0 for all i, respectively), and also that (b) v− = 0 or v+ = 0. This is proved as follows. Take first the case u− = 0 and v− = 0. Each term in the sum uv = Σi=1..n ui vi is non‐negative. Thus, uv > 0, that is, sign(uv) = 1, if and only if ui > 0 and vi > 0 for some common index i, and uv = sign(uv) = 0 otherwise. Similarly, as sign(u) sign(v) = Σi=1..n sign(ui) sign(vi), we know that sign(u) sign(v) > 0, that is,

sign(sign(u)sign(v))=1,

if and only if sign(ui) = sign(vi) = 1 for some i, and sign(u) sign(v) = 0 otherwise. But sign(ui) = sign(vi) = 1 is the same as ui > 0 and vi > 0. Thus (13) is true. The case u+ = 0 and v− = 0 can be reduced to the case u− = 0 and v− = 0 by considering −u instead of u: sign(uv) = −sign((−u)v) = −sign(sign(−u) sign(v)) = sign(sign(u) sign(v)). Similarly for the remaining two cases. □
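The identity (13), and its failure for mixed-sign vectors, can be checked numerically. The following Python sketch is illustrative only (not part of the paper's toolchain); it tests the counterexample from the text and random one-signed pairs:

```python
# Illustrative check of identity (13): sign(u.v) == sign(sign(u).sign(v))
# holds when u and v are each one-signed, and can fail otherwise.
import random

def sgn(x):
    return (x > 0) - (x < 0)

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def identity_13_holds(u, v):
    return sgn(dot(u, v)) == sgn(dot([sgn(a) for a in u],
                                     [sgn(b) for b in v]))

# the text's counterexample: u has mixed signs, so (13) fails
assert not identity_13_holds([1, -0.25, -0.25, -0.25], [1, 1, 1, 1])

# one-signed u and v: (13) holds on random samples
random.seed(0)
for _ in range(1000):
    su, sv = random.choice([1, -1]), random.choice([1, -1])
    u = [su * random.choice([0.0, random.random()]) for _ in range(5)]
    v = [sv * random.choice([0.0, random.random()]) for _ in range(5)]
    assert identity_13_holds(u, v)
```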

2.2 A parameter‐dependent constraint set

Denoting

W(xλ) = N(f′(xλ)) ∩ N(g′(xλ))

we have that (10) and (11) imply, in terms of the sign notations just introduced:

πλ := sign ξλ ∈ sign W(xλ)

Therefore, one could in principle determine the possible values of πλ once W(xλ) is known. However, in applications one typically does not know the curve xλ explicitly, which makes the problem difficult because the subspace W(xλ) depends on λ, and even computing the steady states xλ is a hard problem. As discussed below, for the special case of ODE systems arising from CRNs, a more systematic procedure is possible. Before turning to CRNs, however, we discuss general facts true for all systems.

For every positive concentration vector x define:

Σf(x) := {sign(ν f′(x)) | ν ∈ R^(1×nS)} (14)
Σg(x) := {sign(ν g′(x)) | ν ∈ R^(1×nC)} (15)
Σ(x) := Σf(x) ∪ Σg(x) ⊆ {−1, 0, 1}^(1×nS) (16)

The row vectors ν are used in order to generate arbitrary linear combinations of the rows of the Jacobian matrices of f and g, a set rich enough to, ideally, permit the unique determination of the sign of ξλ.

Since at a steady state x = xλ, f′(xλ) ξλ = 0 and g′(xλ) ξλ = 0, we also have that:

v ξλ = 0 (17)

for every linear combination v = ν f′(xλ) or v = ν g′(xλ).

We now prove an easy yet key result, which shows that the sign vectors in the set Σ(xλ) strongly constrain the possible signs πλ = sign ξλ = sign(dxλ/dλ). For simplicity of notation, we drop λ in πλ and in ξλ when λ is clear from the context, and write simply π or ξ, with coordinates πi and ξi, respectively.

To state the result, we use formal logic notations. Let pσ,π and qσ,π be the following logical disjunctions:

pσ,π = (σi πi > 0 for some i),  qσ,π = (σj πj < 0 for some j)

Recall that the “XNOR(p, q)” binary function has value “true” if and only if p and q are simultaneously true or simultaneously false. Consider the following statement, for any given λ ∈ Λ, and with π = πλ:

XNOR(pσ,π, qσ,π) for all σ ∈ Σ(xλ) (18)

This statement is true if and only if for every σ ∈ Σ(xλ) it holds that either:

σi πi = 0 for all i (19)

or:

(σi πi > 0 for some i) and (σj πj < 0 for some j) (20)

(where i and j range over {1, …, nS} in all quantifiers). In other words, either all the coordinates of the vector

(σ1 π1, σ2 π2, …, σnS πnS)

are zero, or the vector must have both positive and negative entries.

Lemma 1: For any λ ∈ Λ, let π = πλ. Then (18) is true.

Proof: Pick σ = sign v ∈ Σ(xλ), π = πλ, ξ = ξλ. Suppose that (19) is false. Then, either there is some i such that σi πi > 0 or there is some j such that σj πj < 0. If σi πi > 0 for some i, then also vi ξi > 0. As (17) holds, Σi=1..nS vi ξi = 0, so that there must exist some other index j for which vj ξj < 0, which means that σj πj < 0. Similarly, if there is some j such that σj πj < 0, necessarily there is some i such that σi πi > 0, by the same argument. □

In terms of the original data, Lemma 1 can be rephrased as follows. For each parameter value λ ∈ Λ, and each vector ν ∈ R^(1×nS), either sign(ν (∂f/∂xi)(xλ)) · sign(dxiλ/dλ) = 0 for all i ∈ {1, …, nS}, or there are both positive and negative numbers in this sequence; and similarly for the partial derivatives of g.

The condition (18) given in Lemma 1 is only necessary, not sufficient. It may well be the case that there are sensitivity signs that pass this test, yet are not realisable for a given set of kinetic constants. In our experience, however, and as shown by the worked out examples, (18) is enough to provide a minimal set of signs, and is tight in that sense.

Given any two sign vectors σ, π, testing property (18) is simple in any programming language. For example, in MATLAB® syntax, one may write:

ζ = σ.*π
p = sign(sum(ζ > 0))
q = sign(sum(ζ < 0))
XNOR = sign(p*q + (1 − p)*(1 − q))

and the variable XNOR will have value 1 if XNOR(pσ,π,qσ,π) is true and value 0 otherwise.
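The same test is easy to express in other languages; here is an illustrative Python rendering (a sketch, not the CRNSESI script itself):

```python
# Illustrative Python version of the XNOR test: p = "some sigma_i*pi_i > 0",
# q = "some sigma_j*pi_j < 0"; the test passes iff p and q agree.
def xnor_test(sigma, pi):
    zeta = [s * p for s, p in zip(sigma, pi)]
    p = any(z > 0 for z in zeta)
    q = any(z < 0 for z in zeta)
    return p == q

assert xnor_test([1, 0, -1], [0, 1, 0])       # all products zero
assert xnor_test([1, -1, 0], [1, 1, 0])       # products of both signs
assert not xnor_test([1, 1, 0], [1, 0, 0])    # only a positive product
```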

The basis of our approach will be as follows. We will show how to obtain a state‐independent set Σ0 which is a subset of Σ(x) for all states x. In particular, for all steady states xλ, we will have:

Σ0 ⊆ ∩λ∈Λ Σ(xλ) (21)

Compared to the individual sets Σ(xλ), which depend on the particular steady state xλ, the elements of this subset are obtained using only linear algebraic operations; the computation of Σ0 does not entail solving non‐linear equations nor simulating differential equations. Since Σ0 ⊆ Σ(xλ) for all xλ, it follows that

[XNOR(pσ,π, qσ,π) for all σ ∈ Σ(xλ)]  ⟹  [XNOR(pσ,π, qσ,π) for all σ ∈ T]

for any subset T ⊆ Σ0. Thus, we have:

For every λ ∈ Λ:  πλ ∈ P := {π | XNOR(pσ,π, qσ,π) is true for all σ ∈ T} (22)

because of Lemma 1. We will construct such subsets T in our procedure, and test, for each potential sign vector π, whether the “orthogonality” property XNOR(pσ,π, qσ,π) is true or not, with respect to elements of T. Our procedure will provide the set P. Often, our construction of T leads to a P that has just three elements, P = {0, π, −π}. (Note that π = 0 is always a solution, and solutions always appear in pairs, since vξ = 0 implies v(−ξ) = 0.)

To generate P, we carry out a sieve procedure (for a moderate number of species, this is easy and fast): we test for each π whether the conjunction in (22) is true; if the test fails, the sign vector π is eliminated from the list. The surviving π's are the possible sign vectors. Of course, since the conjunction in (22) is only a necessary, and not a sufficient, condition, we are not guaranteed to find a minimal set of signs. Observe that even though questions about the set P are decidable using propositional logic (there are a finite number of possible sign vectors), they have high computational complexity; for example, asking whether card(P) = 3 is NP‐hard in the number of species. Good heuristics for CNF problems include the Davis–Putnam–Logemann–Loveland (DPLL) algorithm [25]. The high computational complexity of these problems means that, generally speaking, our approach will only work well for relatively small networks.
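The sieve just described can be sketched in Python (an illustrative toy, with a hypothetical constraint set T; the actual implementation in CRNSESI is a MATLAB script):

```python
# Sketch of the sieve behind (22): enumerate candidate sign vectors pi in
# {-1,0,1}^n and keep those passing the XNOR test against every sigma in
# a given constraint set T.
from itertools import product

def xnor_test(sigma, pi):
    zeta = [s * p for s, p in zip(sigma, pi)]
    return any(z > 0 for z in zeta) == any(z < 0 for z in zeta)

def sieve(T, n):
    """Return the surviving sign vectors P of (22)."""
    return [pi for pi in product([-1, 0, 1], repeat=n)
            if all(xnor_test(sigma, pi) for sigma in T)]

# a single constraint sigma = (1, -1) forces the two coordinates of pi
# to have equal signs, leaving the three-element set {0, pi, -pi}
assert sieve([(1, -1)], 2) == [(-1, -1), (0, 0), (1, 1)]
```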

The key issue, then, is to find a way to explicitly generate a state‐independent subset Σ0 of Σ(xλ), and we turn to that problem next.

2.3 Sketch of idea

To provide some intuition, let us consider, for the motivating example, the differential equation for e, which takes the form:

ė = −k1 m0 e + k2 a + k3 a

for some positive constants k1, k2, and k3. Along a curve of steady states, we must have

−k1 m0(λ) e(λ) + k2 a(λ) + k3 a(λ) ≡ 0

and therefore, taking derivatives with respect to λ,

−k1 e(λ) m0′(λ) − k1 m0(λ) e′(λ) + (k2 + k3) a′(λ) ≡ 0 (23)

Since e(λ) > 0 and m0(λ) > 0, this means that the following triplets of signs for m0′, e′, and a′:

(−1, −1, 1), (1, 1, −1)

can never appear, since they would lead to a contradiction, namely a strictly positive and a strictly negative left‐hand side, respectively, in (23).

We were able to derive this conclusion because the signs of the coefficients of m0′, e′, and a′ are uniquely determined independently of the value of λ. That fact, in turn, follows from the fact that the gradient of the function (m0, e, a) ↦ −k1 m0 e + (k2 + k3) a (which appears in the right‐hand side of the differential equation) has a constant sign. In contrast, if we had, for example, a differential equation like

ẋ1 = −k1 x1 x2 + k2 x2 x3

then we would derive, arguing in the same manner, the constraint

−k1 x2 x1′(λ) + (k2 x3 − k1 x1) x2′(λ) + k2 x2 x3′(λ) ≡ 0

and here the sign of the coefficient of x2′(λ), namely k2 x3(λ) − k1 x1(λ), cannot be determined unless the values of x1(λ) and x3(λ) are known.

In general, additional information can be obtained by using linear combinations of right‐hand sides. For example, still for the same example, consider the equation:

ṁ0 = −k1 m0 e + k2 a + k4 b

Arguing as earlier, this leads to the identity

−k1 e(λ) m0′(λ) − k1 m0(λ) e′(λ) + k2 a′(λ) + k4 b′(λ) ≡ 0 (24)

Subtracting (24) from (23), we have that

k3 a′(λ) − k4 b′(λ) ≡ 0

from which we conclude that a′(λ) and b′(λ) must have the same sign. Thus, we may obtain more information by taking linear combinations, but, again, we must check that the obtained coefficients have constant sign (which, in this case, is clear because k3 and k4 are constants). Our procedure is based on identifying such constant‐sign linear combinations, using only information from stoichiometry.

3 Sensitivities for CRNs

From now on, we assume that we have a system of differential equations associated to a chemical reaction network:

dxdt=f(x)=ΓR(x) (25)

(see Appendix). Observe that f′(x) = Γ R′(x), where R′(x) is the Jacobian matrix of R, that is, the matrix whose (k, j)th entry is (∂Rk/∂xj)(x).

We will assume from now on also specified a differentiable mapping

g : R+^(nS) → R^(nC)

where nC is some positive integer (possibly zero, to indicate the case where there are no additional constraints), and g has the property that

all nC × nS entries of the Jacobian g′(x) have constant sign (26)

In other words, the gradients ∇gi(x) of the components gi, i = 1, …, nC, of g must have signs that do not depend on the state x.

We use g in order to incorporate, in particular, stoichiometric conservation laws, which are linear functions, and thus have constant gradients, and therefore gradients whose signs do not depend on x. Recall that stoichiometric constraints are obtained from the matrix Γ as follows: one considers the vectors in the left nullspace of Γ, that is, the row vectors ρ ∈ R^(1×nS) such that ρΓ = 0. The linear functions x ↦ ρx are called conserved moieties or stoichiometric constraints; the function ρx(t) is constant along solutions of (25), since d(ρx)/dt = ρΓR(x) = 0. Without loss of generality, one may take the vector ρ to have rational components or (clearing denominators) integer components, because the matrix Γ is rational. We emphasise that we do not include as components of g all stoichiometric constraints, or even all elements of a basis of the left nullspace of Γ. Indeed, in most examples of chemical reaction networks, this would lead to a unique steady state, or at most a discrete set of states. Our objective is precisely to study how steady states vary when one parameter varies, and hence a continuum of steady states is of interest.

The example in the introduction, for instance, has five independent constraints, and one may show (see Appendix) that when all constraints are imposed, the steady state (given a specified set of kinetic reaction parameters) is unique. However, if we keep GT, FT, MT, NT fixed but do not impose a constant value on ET, a continuum of steady states exists, as ET is allowed to vary.

Observe that a non‐linear function g may sometimes also have the constant sign property. For example, suppose that nS=5, nC=1, and

g(x) = k1 x1 x3 − k2 x2^2

where k 1 and k 2 are positive constants. Then the Jacobian matrix (gradient, since nC=1) is:

∇g(x) = g′(x) = (k1 x3, −2 k2 x2, k1 x1, 0, 0)

which has constant sign (1, −1, 1, 0, 0).
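This constant-sign claim is easy to probe numerically. In the sketch below (illustration only; the choice k1 = k2 = 1 is arbitrary), random positive states all yield the same gradient sign pattern:

```python
# Numerical illustration that the gradient of g(x) = k1*x1*x3 - k2*x2**2
# has constant sign (1, -1, 1, 0, 0) on the positive orthant.
import random

def sgn(x):
    return (x > 0) - (x < 0)

def grad_g(x, k1=1.0, k2=1.0):
    # gradient of k1*x1*x3 - k2*x2**2 with respect to (x1, ..., x5)
    return [k1 * x[2], -2.0 * k2 * x[1], k1 * x[0], 0.0, 0.0]

random.seed(1)
for _ in range(100):
    x = [random.uniform(0.01, 10.0) for _ in range(5)]
    assert [sgn(g) for g in grad_g(x)] == [1, -1, 1, 0, 0]
```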

For chemical reaction networks, it is not necessary for the entries of f′(x), much less the entries of the products ν f′(x) for vectors ν, to have constant sign. Our next task will be to introduce algebraic conditions that allow one to check whether the sign is constant, for any given vector ν.

Before proceeding, however, we give an example of non‐constant sign. Take the following CRN, with nS=4 and nR=2 :

R1 : X1 + X2 → X4,  R2 : X2 + X3 → X1 (27)

which is formally specified, assuming mass‐action kinetics, as follows:

A = (1 0; 1 1; 0 1; 0 0),  B = (0 1; 0 0; 0 0; 1 0),  Γ = B − A = (−1 1; −1 −1; 0 −1; 1 0)

(rows indexed by species, columns by reactions), and

R(x) = (k1 x1 x2, k2 x2 x3)ᵀ

Thus the ODE set x˙=f(x)=ΓR(x) corresponding to this CRN has:

f(x) = (−k1 x1 x2 + k2 x2 x3, −k1 x1 x2 − k2 x2 x3, −k2 x2 x3, k1 x1 x2)ᵀ

Let ν = e1ᵀ, where, in general, eiᵀ is the canonical row vector (0, …, 0, 1, 0, …, 0) with a “1” in the i th position and zeroes elsewhere. Observe that ν f′(x) = (−k1 x2, −k1 x1 + k2 x3, k2 x2, 0) does not have constant sign, because its second entry, which is the same as the (1, 2) entry of f′(x), is the function −k1 x1 + k2 x3, which changes sign depending on whether x1 > k2 x3/k1 or x1 < k2 x3/k1. Ruling out vectors ν that lead to such ambiguous signs is the purpose of our algorithm to be described next.
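The sign ambiguity just derived can be seen numerically; a small Python sketch (k1 = k2 = 1 is an arbitrary illustrative choice):

```python
# Illustration of the non-constant sign: the first row of the Jacobian
# f'(x) for network (27) has (1,2) entry -k1*x1 + k2*x3, whose sign
# depends on the state.
def jac_row1(x, k1=1.0, k2=1.0):
    x1, x2, x3, x4 = x
    # gradient of f1(x) = -k1*x1*x2 + k2*x2*x3
    return [-k1 * x2, -k1 * x1 + k2 * x3, k2 * x2, 0.0]

assert jac_row1([0.1, 1.0, 1.0, 1.0])[1] > 0   # x1 < k2*x3/k1
assert jac_row1([5.0, 1.0, 1.0, 1.0])[1] < 0   # x1 > k2*x3/k1
```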

3.1 A first space

Introduce the following space:

V := row span of Γ = {νΓ | ν ∈ R^(1×nS)} ⊆ R^(1×nR)

Since f′(x) = Γ R′(x), the definition (14) of Σf(x) becomes:

Σf(x) := {sign(v R′(x)) | v ∈ V}

when specialised to CRNs. Later on, we will explain how Property (26) allows us to obtain sign vectors induced by g′(x) that are independent of x. On the other hand, the sign vectors σ = sign(v R′(x)) generally depend on the particular x. The following lemma shows that, for vectors ρ with non‐negative entries, the sign of the vector ρ R′(x) is the same no matter what the state x is, and moreover, this sign can be explicitly computed using only stoichiometry information. We denote by

Aj = (aj1, …, ajnR)ᵀ ∈ R^(nR×1)

the j th column of the transpose Aᵀ, i.e. the transpose of the j th row of A.

Lemma 2: For any positive concentration vector x, any non‐negative row vector ρ of size nR, and any species index j ∈ {1, …, nS}:

ρ Aj = 0  ⟺  ρ (∂R/∂xj)(x) = 0 (28)

Thus, also

ρ Aj > 0  ⟺  ρ (∂R/∂xj)(x) > 0 (29)

since the expressions on each side of (28) can only be zero or positive.

Proof: We have that

ρ Aj = Σk∈Kρ ρk ajk

where Kρ := {k | ρk > 0}. Since every ajk ≥ 0, the equality ρ Aj = 0 holds if and only if ajk = 0 for all k ∈ Kρ. Similarly, from

ρ (∂R/∂xj)(x) = Σk∈Kρ ρk (∂Rk/∂xj)(x)

and (∂Rk/∂xj)(x) ≥ 0, we have that ρ (∂R/∂xj)(x) = 0 if and only if (∂Rk/∂xj)(x) = 0 for all k ∈ Kρ. From (47), in the Appendix on CRNs, we conclude (28). □

Lemma 2 is valid for all non‐negative ρ. When specialised to v = νΓ ∈ V, and defining σ = sign(v R′(x)), it says that σ does not depend on x. However, elements of the form v = νΓ ∈ V will generally not be non‐negative (nor non‐positive), so the lemma cannot be applied to them directly. Instead, we will apply Lemma 2 to the positive and negative parts of such a vector, but only when these positive and negative parts satisfy a certain “orthogonality” property, as defined by the subset of V introduced below.

3.2 A state‐independent subset of Σ

For any v ∈ V, consider the sign vector μ̃v := sign(v Aᵀ) ∈ {−1, 0, 1}^(1×nS), whose j th entry is the sign of v Aj = νΓ Aj if v = νΓ with ν ∈ R^(1×nS), as well as the positive and negative parts of v, v+ and v−. Define the following set of vectors (“G” for “good”):

VG := {v ∈ V | for each j ∈ {1, …, nS}, either v+ Aj = 0 or v− Aj = 0}

Observe that, if v ∈ VG, then, from v Aj = (v+ − v−) Aj = v+ Aj − v− Aj, it follows that

v Aj = v+ Aj if v− Aj = 0;  −v− Aj if v+ Aj = 0;  0 if v+ Aj = v− Aj = 0 (30)

Consider the following set of sign vectors μ~v parametrised by elements of VG :

Σ̃0 := {μ̃v = sign(v Aᵀ) | v ∈ VG} ⊆ {−1, 0, 1}^(1×nS) (31)

The key fact is that this is a subset of Σ(x), for all x :

For every positive concentration vector x,

Σ̃0 ⊆ Σ(x)

A proof is provided in Section 6.

To interpret the set VG, it is helpful to study the special case in which v is simply a row of Γ, that is, v=νΓ and ν=eiT. Since

eiᵀ B − eiᵀ A = eiᵀ (B − A) = eiᵀ Γ = v+ − v−

and the vectors eiᵀB and eiᵀA have non‐overlapping positive entries (by the non‐autocatalysis assumption), we have that v+ = eiᵀB and v− = eiᵀA. Since eiᵀB Aj = Σk bik ajk, asking that this number be positive amounts to asking that

i is a product of a reaction Rk which has j as a reactant (32)

Since eiᵀA Aj = Σk aik ajk, asking that this number be positive amounts to asking that

i and j are both reactants in some reaction Rk (33)

Thus, if the network in question has the property that (32) and (33) cannot both hold simultaneously for any pair of species i, j, then we cannot have that both eiᵀB Aj > 0 and eiᵀA Aj > 0 hold. In other words, eiᵀΓ ∈ VG for all i.

As an illustration, take the CRN R1 : X1 + X2 → X4 and R2 : X2 + X3 → X1 treated in (27). We claim that e1ᵀΓ ∉ VG, which reflects the fact that e1ᵀ f′(x) does not have constant sign. Indeed, in this case we have that, with i = 1 and j = 2, X1 and X2 are reactants in R1, but X1 is also a product of reaction R2, which has X2 as a reactant. Algebraically, v = e1ᵀΓ = (−1, 1) = (0, 1) − (1, 0) = v+ − v− and A2 = (1, 1)ᵀ, so v+ A2 = 1 and v− A2 = 1. This means that v = e1ᵀΓ ∉ VG, since the property defining VG would require that at least one of v+ A2 or v− A2 vanish. We have re‐derived, in a purely algebraic manner, the fact that −k1 x1 + k2 x3 changes sign.

Testing whether a given vector vV, v=νΓ with νR1×nS, belongs to VG is easy to do. For example, in MATLAB® ‐like syntax, one may write:

v = νΓ
v+ = (v > 0).*v
v− = −(v < 0).*v
vA+ = sign(v+Aᵀ)
vA− = sign(v−Aᵀ)

and we need to verify that the vectors vA+ and vA have disjoint supports, which can be done with the command

sum(vA+.*vA−) == 0

which returns 1 (true) if and only if v ∈ VG, in which case we accept v and may use σ = sign(v Aᵀ) to test the conditions in Lemma 1.
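For the network (27), this membership test can be rendered in Python as follows (an illustrative translation of the MATLAB-like pseudocode above, with A, B, Γ as displayed for (27)):

```python
# Illustrative Python translation of the membership test v in V_G, applied
# to network (27); A, B (species x reactions) are its reactant/product
# stoichiometry matrices and Gamma = B - A.
A = [[1, 0],  # X1 is a reactant of R1
     [1, 1],  # X2 is a reactant of R1 and R2
     [0, 1],  # X3 is a reactant of R2
     [0, 0]]  # X4 is a reactant of neither
B = [[0, 1], [0, 0], [0, 0], [1, 0]]
Gamma = [[b - a for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def in_VG(nu):
    """Test whether v = nu*Gamma belongs to V_G."""
    nS, nR = len(A), len(A[0])
    v = [sum(nu[i] * Gamma[i][k] for i in range(nS)) for k in range(nR)]
    vp = [max(x, 0) for x in v]   # positive part v+
    vm = [max(-x, 0) for x in v]  # negative part v-
    for j in range(nS):
        Aj = A[j]  # transpose of the j-th row of A, as in the text
        if sum(p * a for p, a in zip(vp, Aj)) > 0 and \
           sum(m * a for m, a in zip(vm, Aj)) > 0:
            return False  # supports of v+A' and v-A' overlap at j
    return True

assert not in_VG([1, 0, 0, 0])  # e1'*Gamma = (-1, 1): not in V_G
assert in_VG([0, 0, 0, 1])      # e4'*Gamma = (1, 0): one-signed, in V_G
```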

3.3 Explicit generation of elements of Σ~0

The set Σ̃0 defined in (31) is constructed in such a way as to be independent of the state x, which makes it more useful than the sets Σ(x) from a computational standpoint. Yet, in principle, computing this set involves testing the conditions “v+ Aj = 0 or v− Aj = 0” that define the set VG for every v = νΓ, that is, for every possible real‐valued vector ν ∈ R^(1×nS) (and each j). We describe next a more combinatorial way to generate the elements of Σ̃0.

We introduce the set of signs associated to the row span V of Γ :

S := sign V ⊆ {−1, 0, 1}^(1×nR) (34)

Denote:

α := sign Aᵀ ∈ {0, 1}^(nR×nS)

so that the j th column of α is αj = sign Aj ∈ {0, 1}^(nR×1).

Pick any s ∈ S, s = sign v, where v ∈ V. Then, for each j ∈ {1, …, nS}:

sign(v+ Aj) = sign(s+ αj),  sign(v− Aj) = sign(s− αj)

By (13), applied with u = v+ and v = Aj, sign(v+ Aj) = sign(sign(v+) αj). By (13), applied with u = v− and v = Aj, sign(v− Aj) = sign(sign(v−) αj). Since, by (12) applied with u = v, s+ = sign(v+) and s− = sign(v−), the conclusion follows. □

In analogy to the definition of the set VG, we define (“G ” for “good”):

SG := {s ∈ S | for each j ∈ {1, …, nS}, either s+ αj = 0 or s− αj = 0}

Observe that, if s ∈ SG, then, since s αj = (s+ − s−) αj = s+ αj − s− αj,

s αj = s+ αj if s− αj = 0;  −s− αj if s+ αj = 0;  0 if s+ αj = s− αj = 0 (35)

Consider the following set of sign vectors parametrised by elements of SG :

Σ0 := {μs = sign(s α) | s ∈ SG} ⊆ {−1, 0, 1}^(1×nS) (36)

Corollary 1: Pick any s ∈ S, s = sign v, where v ∈ V. Then

s ∈ SG if and only if v ∈ VG

and, for such s and v,

sign(v Aᵀ) = sign(s α) (37)

A proof is provided in Section 6.

Σ̃0 = Σ0.

Proof: Pick any element of Σ̃0, μ̃v = sign(v Aᵀ), v ∈ VG. By Corollary 1, s = sign v ∈ SG. Moreover, also by Corollary 1, μ̃v = sign(s α), so we know that μ̃v ∈ Σ0. Conversely, take an element μs ∈ Σ0. This means that μs = sign(s α) for some s ∈ SG ⊆ S = sign V. Let v ∈ V be such that s = sign v. By Corollary 1, v ∈ VG, and also μs = sign(v Aᵀ). By definition of Σ̃0, this means that μs ∈ Σ̃0. □

We can simplify the definition of Σ0 a bit further, by noticing that the finite subset S can in fact be generated using only integer vectors. The definition in (34) says that:

S = {sign(νΓ) | ν ∈ R^{1×n_S}} ⊆ {−1, 0, 1}^{1×n_R}

Lemma 5: S = {sign(νΓ) | ν ∈ Z^{1×n_S}}

A proof is provided in Section 6.
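By Lemma 5, S can therefore be generated by sweeping integer vectors ν. The following brute-force Python sketch (our illustration, with a toy two-species Γ) collects sign(νΓ) over a bounded integer range:

```python
from itertools import product

def sgn(t):
    return (t > 0) - (t < 0)

def sign_span(Gamma, bound=2):
    """Collect sign(nu * Gamma) over all integer nu with entries in
    [-bound, bound]; by Lemma 5, this exhausts S as bound grows."""
    nS, nR = len(Gamma), len(Gamma[0])
    S = set()
    for nu in product(range(-bound, bound + 1), repeat=nS):
        v = [sum(nu[i] * Gamma[i][j] for i in range(nS)) for j in range(nR)]
        S.add(tuple(sgn(x) for x in v))
    return S

# Toy reversible reaction A <-> B: one row per species, one column per reaction.
Gamma = [[-1, 1],
         [1, -1]]
S = sign_span(Gamma)
```

For this Γ the set S consists of (0, 0), (1, −1) and (−1, 1); in larger examples, only sparse ν with small entries are typically needed in practice (see Section 4).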

3.4 Adding rows to g by linear combinations of linear components

Recall that we made the assumption [Property (26)] that the n_C components of g have gradients of constant sign. This means that the elements in the following subset of Σ_g(x), for all x:

Σ_g^1 := {sign(e_i^T ∇g(x)) | i ∈ {1, …, n_C}} (38)

where e_i^T denotes the canonical row vector (0, …, 0, 1, 0, …, 0) with a "1" in the i-th position and zeroes elsewhere, have constant sign, independently of the particular state x. We will also consider the following subset of Σ_g(x), for all x:

Σ_g^2 := {sign(ν ∇g(x)) | ν ∈ R_I^{1×n_C}} (39)

where I ⊆ {1, …, n_C} denotes the set of indices of rows of g that are linear functions, and ν ∈ R_I^{1×n_C} means that ν is supported in I, that is, ν_j = 0 whenever j ∉ I. Since a linear combination of linear functions is again linear, the elements of Σ_g^2 also have constant sign. Thus, we will only use elements of Σ_g^1 ∪ Σ_g^2 in our procedure, instead of arbitrary elements of Σ_g(x). As part of our algorithm, we add selected combinations of such constraints as new components of g – ideally the whole sign space of the span of the rows, but in practice just a few sparse linear combinations suffice. If the coefficients of these linear functions are rational numbers (as is the case with coordinates of g that represent stoichiometric constraints), we may, without loss of generality, take integer combinations, as justified in the same manner as Lemma 5.

Let us explain, through an example, why this procedure is necessary. Suppose that the following are two rows of g

g_1 = x_1 + x_2 + x_3 − c_1,  g_2 = 2x_2 + x_3 − c_2

which might represent the conservation of two quantities. If xλ is a curve of steady states, and denoting derivatives with respect to λ by primes, we have therefore that

x_1′(λ) + x_2′(λ) + x_3′(λ) = 0 and 2x_2′(λ) + x_3′(λ) = 0

The first of these tells us that the sign vector s = π_λ = (s_1, s_2, s_3) is either zero or must have two components of opposite signs, and the second one implies that s_2 = −s_3 (or both are zero). The conjunction of these two constraints gives the following set of possible signs:

{(0, 0, 0), (1, 1, −1), (0, 1, −1), (−1, 1, −1)}

(and the negatives of the last three). However, notice that, if we add to the rows of g also the difference g_3 = g_1 − g_2 = x_1 − x_2 − c_1 + c_2, then we also know that x_1′(λ) − x_2′(λ) = 0, so that, in fact, we should also have that s_1 = s_2. Adding this constraint serves to eliminate the last two possibilities (as well as their negatives), giving the unique non-zero solution (1, 1, −1) [and its negative (−1, −1, 1)]. Thus, adding the linear combination g_3, even if it is redundant from a purely linear-algebraic point of view, provides additional information when looking for signs.
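The elimination just performed by hand can be mechanised. The following minimal Python sketch (our illustration, not the CRNSESI code) encodes each constant-sign gradient as a sign vector μ and discards every candidate π for which μ · ξ = 0 is impossible at the level of signs:

```python
from itertools import product

def compatible(mu, pi):
    """pi = sign(xi) is consistent with mu . xi = 0 only if the
    componentwise products are all zero or take both strict signs."""
    p = [m * s for m, s in zip(mu, pi)]
    pos, neg = any(x > 0 for x in p), any(x < 0 for x in p)
    return pos == neg   # both signs present, or neither

def sieve(constraints, n):
    """All sign vectors in {-1,0,1}^n passing every constraint."""
    return [pi for pi in product((-1, 0, 1), repeat=n)
            if all(compatible(mu, pi) for mu in constraints)]

# Gradient signs of g1 = x1+x2+x3-c1 and g2 = 2x2+x3-c2:
base = sieve([(1, 1, 1), (0, 1, 1)], 3)          # 7 candidates survive
# Adding the redundant combination g3 = g1-g2 = x1-x2-c1+c2:
refined = sieve([(1, 1, 1), (0, 1, 1), (1, -1, 0)], 3)
```

Here base recovers the seven sign vectors above (the four listed plus the three negatives), while refined retains only (0, 0, 0) and ±(1, 1, −1), as in the discussion.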

3.5 Addition of “virtual constraints” to g

We have also found, when working out examples, that the following heuristic is useful. Consider the set I consisting of all state‐dependent linear combinations

h(x) = Σ_{i=1}^{n_S} r_i(x) Γ_i R(x)

of the rows of the right-hand side of the dynamics (25), where Γ_i denotes the i-th row of Γ, and the r_i's are scalar functions. In abstract algebra terminology, when the reactions R_i are polynomials (as with mass-action kinetics), and if we restrict to polynomial coefficients r_i, then I is the ideal generated by the functions Γ_i R. Take any h ∈ I, and a parametrised set of positive steady states x_λ. Since ΓR(x_λ) = 0, it follows that also h(x_λ) = 0 for every λ ∈ Λ. Now, suppose that one is able to find a function h of this form with the property that h(x) = m(x)g(x), where m(x) is a monomial and g(x) has a gradient of constant sign. Then g(x_λ) = 0 for every λ ∈ Λ, because m(x) ≠ 0 at all positive x. This means that we may add g to the set of constraints. We call a function g of this form a "virtual constraint."

Testing for the existence of such elements is in principle a difficult computational algebra problem. However, in many or even most natural examples of CRNs, the reaction functions R_i are either linear or quadratic. If we consider only linear functions r_i, then the combinations h obtained by the above construction are polynomials of degree at most three. Suppose that we look for factorisations of the form h(x) = x_i g(x), where g is a polynomial of degree at most two, such that the monomials in g all involve different variables. Such a g has a constant-sign gradient (because the coordinates of ∇g(x) are all either constants or single variables x_i). Testing for such a factorisation, for each fixed variable x_i as "m(x)" and any fixed group of monomials for g, becomes a linear-algebraic problem on the coefficients of the functions r_i. We do not discuss this further in general, but only mention an example which will be useful when analysing a particular network below.

Suppose that some two rows of f=ΓR(x) are as follows:

f_1(x) = k_1 x_0 y_1 − k_{−1} x_1 y_0,  f_2(x) = k_2 x_1 y_1 − k_{−2} x_2 y_0

where we are denoting the coordinates of x as (x_0, x_1, x_2, y_0, y_1) for reasons that will be clear when we discuss the network where this example appears. Taking r_1(x) = k_{−2} x_2 and r_2(x) = −k_{−1} x_1, we have that

h(x) = k_{−2} x_2 f_1(x) − k_{−1} x_1 f_2(x) = m(x) g(x)

where m(x) = y_1 is a monomial, and:

g(x) = k_1 k_{−2} x_0 x_2 − k_2 k_{−1} x_1^2

has the gradient:

∇g(x) = (k_1 k_{−2} x_2, −2 k_2 k_{−1} x_1, k_1 k_{−2} x_0, 0, 0)

which has constant sign (1, −1, 1, 0, 0).
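Since the factorisation h = m·g is a polynomial identity, it can be spot-checked numerically at an arbitrary (not necessarily steady) positive state; the rate constants and state below are arbitrary illustrative values (k_{−1}, k_{−2} written km1, km2):

```python
import random

random.seed(0)
k1, km1, k2, km2 = (random.uniform(0.5, 2.0) for _ in range(4))    # rate constants
x0, x1, x2, y0, y1 = (random.uniform(0.5, 2.0) for _ in range(5))  # positive state

f1 = k1 * x0 * y1 - km1 * x1 * y0        # first relevant row of Gamma R(x)
f2 = k2 * x1 * y1 - km2 * x2 * y0        # second relevant row
h = km2 * x2 * f1 - km1 * x1 * f2        # the combination r1 f1 + r2 f2
g = k1 * km2 * x0 * x2 - k2 * km1 * x1 ** 2
assert abs(h - y1 * g) < 1e-12           # h(x) = m(x) g(x) with m(x) = y1
```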

3.6 Remarks on global properties

We do not directly address in this study the issue of uniqueness of steady states in each stoichiometry class. In those examples in which the space of fixed conservation laws has codimension one, as in our example when we fix all except one of the values E_T and so on, it is possible in principle that for each value of the remaining conserved quantity there may exist several equilibria. This is a well-studied question for CRNs, see for instance [26–35]. A routine argument on CRNs can be used to prove that for our motivating example (1), steady states are unique once all conservation laws are taken into account (see Appendix).

However, in this work our concern has been with the determination of signs of sensitivities, and not their actual values. These are different questions. Indeed, signs might be unique even when values are not: different steady states may well "move" in the same direction upon a perturbation of parameters. For a completely trivial illustration, take any one-dimensional (1D) differential equation ẋ = f(x). Even if f has multiple roots, leading to multiple steady states, N(f′(x)) is either equal to {0} or R at each steady state. This means that the signs of the elements in N(f′(x)) are unique (zero in the first case) or, at worst, unique up to sign reversals (in the second case). Note that any f which has the property that f(0) > 0 arises from some CRN, f = ΓR(x). Indeed, a representing CRN for f(x) = Σ_{i=0}^n a_i x^i, with a_0 > 0, can be obtained as follows. For i = 0, we include a reaction 0 → X with rate constant a_0. For i > 0 and a_i < 0, we introduce a reaction iX → 0 with rate constant −a_i/i. For i > 0 and a_i > 0, we introduce a reaction iX → (i+1)X with rate constant a_i. Then Γ = [1, γ_1, …, γ_n], with γ_i = −i if a_i < 0 and γ_i = 1 if a_i > 0, and R(x) = (k_0, k_1, …, k_n)^T with k_0 = a_0, k_i = (−a_i/i)x^i if a_i < 0 and k_i = a_i x^i if a_i > 0. (This network has autocatalytic reactions, but adding additional species turns it into one that does not.)
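The construction just described is easy to implement; the following short Python sketch (with hypothetical helper names crn_for_polynomial and f_from_crn, introduced only for illustration) builds Γ and R for a given polynomial and checks that ΓR(x) reproduces f(x):

```python
def crn_for_polynomial(a):
    """Realise f(x) = sum_i a[i] x^i (with a[0] > 0) as Gamma R(x):
    0 -> X for the constant term, iX -> 0 when a_i < 0 (constant -a_i/i),
    and iX -> (i+1)X when a_i > 0 (constant a_i)."""
    assert a[0] > 0
    Gamma = [1]                              # 0 -> X produces one unit
    rates = [lambda x, k=a[0]: k]
    for i, ai in enumerate(a[1:], start=1):
        if ai < 0:
            Gamma.append(-i)                 # iX -> 0 consumes i units
            rates.append(lambda x, k=-ai / i, i=i: k * x ** i)
        elif ai > 0:
            Gamma.append(1)                  # iX -> (i+1)X: net +1 unit
            rates.append(lambda x, k=ai, i=i: k * x ** i)
    return Gamma, rates

def f_from_crn(Gamma, rates, x):
    """Evaluate Gamma R(x) for this one-species network."""
    return sum(g * r(x) for g, r in zip(Gamma, rates))

# f(x) = 2 - 3x + x^2 has the two steady states x = 1 and x = 2:
Gamma, rates = crn_for_polynomial([2, -3, 1])
```

Evaluating f_from_crn(Gamma, rates, ·) at 1.0 and 2.0 gives zero, recovering both steady states of the polynomial vector field.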

Another example is given by the 2D system that has vector field f(x) = ((x_1 − x_2)(x_1 − 2x_2)(x_1 − 3x_2), 0)^T. At steady states of the form (x_2, x_2), (2x_2, x_2), and (3x_2, x_2), the first row of the Jacobian matrix ∇f is (a x_2^2, b x_2^2) (and the second row is zero), where a = 2, −1, 2 and b = −2, 2, −6, respectively. Thus, the nullspace N(∇f(x_λ)) = {(u_1, u_2) ∈ R^2 | a u_1 + b u_2 = 0} is the span of (1, 1), (2, 1), or (3, 1). These are three different subspaces, yet they all have the common sign (1, 1) (plus its negative, and zero). In summary, even though the tangent vectors are not unique, in this example signs are.

Suppose that signs of sensitivities are unique up to sign reversals and zero, that is, for some π ∈ {−1, 0, 1}^{n_S×1} and all parameter values λ ∈ Λ, π_λ ∈ {π, −π, 0}. Then a global result along any smooth non-singular (ξ_λ ≠ 0 for all λ) curve connecting steady states follows as a corollary. In other words, the conclusion from infinitesimal perturbations extends to global perturbations. Indeed, suppose that we want to compare the values of the steady-state concentrations x_{λ1} and x_{λ2} at two parameter values λ_1, λ_2. We have:

sign(x_{λ2} − x_{λ1}) = sign(∫_{λ1}^{λ2} ξ_λ dλ) = ±π

the sign depending on whether π_λ = π or π_λ = −π for all λ (no change of sign is possible, by non-singularity).

4 Summary and implementations

Our procedure for finding the set P in (22), which contains all possible signs πλ of derivatives ξλ, consists of the following steps:

  1. Construct a subset S′ ⊆ S (see below).

  2. For each element s ∈ S′, test the property (s^+α_j)(s^-α_j) = 0 for each j, which defines S_G. The s's that pass this test are collected into a set S′_G, which is known to be a subset of S_G.

  3. Take the set of elements of the form μ_s = sign(sα), for s in S′_G, and add to these the signs of the rows of the Jacobian ∇g of g, as well as a subset of combinations of linear components of g (by assumption, these sign vectors are independent of x). Let us call this set T.

  4. Optionally, add to T sign vectors from “virtual constraints” as explained earlier.

  5. Now apply the sieve procedure, testing the conjunction in (22). The elements π that pass this test are reported as possible signs of derivatives of steady states with respect to the parameter λ, in the sense that they have not been eliminated. These are the elements of P.

  6. If a unique (after eliminating 0 as well as one element of each pair {π, −π}) solution remains, we stop. If there is more than one sign vector that passed all tests, and if S′ was a proper subset of S, we may generate a larger set S′, and hence a potentially larger T, and repeat the subsequent steps for the larger subset.

The theory guarantees that our procedure will eliminate all impossible sign vectors, thus providing a set P of possible sign vectors. As is typically the case with heuristics for computationally intractable problems, there is no a priori guarantee that the set P obtained by steps 1–5 should be a minimal such set, and this is why step 6 is included for further search.

The first step, constructing S, or a large subset S′ of it, can be done in various ways. Since, by Lemma 5, we can generate S using integer vectors, the elements of S have the form sign v where we may assume, without loss of generality (after scaling ν), that each entry of v = νΓ is either zero or has absolute value at least 1. Thus, testing whether a sign vector s belongs to S amounts to testing the feasibility of a linear program (LP): we need that νΓe_i = 0 for those indices i for which s_i = 0, that νΓe_i ≥ 1 for those indices i for which s_i = 1, and that νΓe_i ≤ −1 for those indices i for which s_i = −1. (These are closed, not strict, conditions, as needed for an LP formulation.) This means that one can check each of the 3^{n_R} possible sign vectors efficiently.

One can combine the testing of LP feasibility with the search over the 3^{n_R} possible sign vectors into a mixed integer linear programming (MILP) formulation, by means of the technique called in the MILP field a "big M" approximation [36]. This is a routine reduction: one first fixes a large positive number M, and then formulates the following inequalities:

νΓe_i + M L_i − U_i ≥ 0,  −νΓe_i + M U_i − L_i ≥ 0,  L_i + U_i ≤ 1

where the vector ν is required to be real and the variables L_i, U_i binary ({0, 1}). Given any solution, we have that M ≥ νΓe_i ≥ 1 (so s_i = 1) for those i for which (L_i, U_i) = (0, 1), −1 ≥ νΓe_i ≥ −M (so s_i = −1) for indices for which (L_i, U_i) = (1, 0), and νΓe_i = 0 (i.e. s_i = 0) when (L_i, U_i) = (0, 0). (This trick will miss any solutions for which νΓe_i ≥ 1 but M was not taken large enough that M ≥ νΓe_i, or νΓe_i ≤ −1 but M was not taken large enough that νΓe_i ≥ −M.) The resulting MILP can be solved using relaxation-based cutting plane methods, branch and bound approaches, or heuristics such as simulated annealing [37, 38]. Such mixed-integer techniques have been used for the related but very different problem of parameter identification for biochemical networks, see for instance [39].
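The "big M" encoding can be spot-checked without a MILP solver. Writing w_i = νΓe_i, the constraints are w_i + M L_i − U_i ≥ 0, −w_i + M U_i − L_i ≥ 0 and L_i + U_i ≤ 1, and a binary assignment decodes to s_i = U_i − L_i; a small Python sketch (illustrative values only):

```python
def big_m_ok(w, L, U, M):
    """Check the big-M system w_i + M L_i - U_i >= 0,
    -w_i + M U_i - L_i >= 0, L_i + U_i <= 1."""
    return all(wi + M * li - ui >= 0 and -wi + M * ui - li >= 0
               and li + ui <= 1
               for wi, li, ui in zip(w, L, U))

def decode(L, U):
    """(L_i, U_i) = (0,1) -> s_i = 1, (1,0) -> -1, (0,0) -> 0."""
    return [ui - li for li, ui in zip(L, U)]

w = [3, -2, 0]                 # a hypothetical nu * Gamma
L, U = [0, 1, 0], [1, 0, 0]    # encodes the sign vector (1, -1, 0)
```

With M = 10 the assignment above is feasible and decodes to (1, −1, 0); replacing w by [30, −2, 0] with the same M makes it infeasible, illustrating the caveat about M being chosen too small.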

Often, however, simply testing sparse integer vectors in the integer-generating form in Lemma 5 works well. In practice, we find that taking for ν linear combinations, with small coefficients, of pairs of canonical basis vectors e_i^T, and similarly for the appropriate conservation laws, is typically enough to obtain the set of all possible sign vectors π (up to all signs being reversed, and except for the trivial solution π = 0).

We have developed a MATLAB® script, “CRNSESI” (Chemical Reaction Network SEnsitivity SIgns) that implements our procedure. The examples given in the next section were worked out using this software. (Actual output from the program is shown in the Supplementary Materials.)

5 Three worked‐out examples

5.1 Kinase cascade

The example given in the introduction was worked out using CRNSESI. Specifically, we introduced stoichiometric constraints to keep all but one conservation law fixed, and analysed the signs of the resulting sensitivities for any curve, obtaining in each case a unique solution (up to sign reversals or the identically zero solution). The output of CRNSESI, for the concrete example given by reactions (1), can be summarised as shown below. In each case, "−1" or "1" means that the respective component of the state vector changes negatively or positively, respectively, under the corresponding perturbation.

if the first kinase, ET, decreases (keeping GT,FT,MT,NT fixed):

  -1    1   -1   -1    1   -1    1   -1   -1    1   -1
   e   m0    a   m1    g    b   n0    c   n1    f    d

if the first substrate, MT, increases (keeping ET,GT,FT,NT fixed):

  -1    1    1    1   -1    1   -1    1    1   -1    1
   e   m0    a   m1    g    b   n0    c   n1    f    d

if the first phosphatase, GT, increases (keeping ET,FT,MT,NT fixed):

  -1    1    1   -1    1    1    1   -1   -1    1   -1
   e   m0    a   m1    g    b   n0    c   n1    f    d

if the second substrate, NT, decreases (keeping ET,GT,FT,MT fixed):

  -1    1    1    1   -1    1   -1   -1   -1    1   -1
   e   m0    a   m1    g    b   n0    c   n1    f    d

if the second phosphatase, FT, decreases (keeping ET,GT,MT,NT fixed):

  -1    1    1    1   -1    1   -1   -1    1   -1   -1
   e   m0    a   m1    g    b   n0    c   n1    f    d

If the opposite change is made on a total amount, then the signs get reversed. For example, if the second substrate, NT, increases, then we obtain:

   1   -1   -1   -1    1   -1    1    1    1   -1    1
   e   m0    a   m1    g    b   n0    c   n1    f    d

Typically, one is also interested in the effect of perturbations on the total concentration of active kinase, free or bound, X = M_1 + B + C, and the total concentration of product, free or bound, Y = N_1 + D. Experimentally, these quantities are far easier to quantify using Western blots or mass spec techniques [7]. In order to study changes in X and Y, we introduce "virtual" variables x and y and artificial stoichiometric constraints m_1 + b + c − x = 0 and n_1 + d − y = 0, and re-apply our algorithm. Results are as follows (using the same sign conventions as above):

if the first kinase, E_T, decreases: x, y = −1, −1

if the first substrate, M_T, increases: x, y = 1, 1

if the first phosphatase, G_T, increases: x, y = −1, −1

if the second substrate, N_T, decreases: x, y = −1, −1

if the second phosphatase, F_T, decreases: x, y = −1, 1

Notice the following remarkable phenomenon: when the total second substrate, NT, is perturbed, we see that x and y, the total amounts of active enzymes, both vary in the same direction. A network identification procedure that employs these experimental perturbations will infer a positive correlation between measured activity of these enzymes. On the other hand, an experiment in which the second phosphatase, FT, is perturbed, will lead to an inference of a graph “repression” edge. Indeed, when decreasing the second phosphatase, a “local” perturbation in the second layer, the total amount of active enzyme y increases, as it should, but the effect on the “upstream” layer quantified by x is negative, which suggests a repression of x by y. These issues, including the apparently paradoxical effect of two different perturbations leading to opposite conclusions, are extensively discussed in [7], which conducted an experimental validation of this idea.

In order to obtain the additional information, about total active kinase X and product Y, we proceeded as follows. We first add two artificial variables, x and y, so that the full state is now (e, m_0, a, m_1, g, b, n_0, c, n_1, f, d, x, y). The definitions of x and y are incorporated into two new "stoichiometric constraints" corresponding to these vectors:

(0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, −1, 0),
(0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, −1)

respectively. No change is made to the original stoichiometry matrix and original stoichiometric constraints, except for adding zeroes in the positions of x and y. The original algorithm can be run on this extended set. However, when adding artificial variables, such as x and y, which participate neither in reactions nor in the original set of stoichiometric constraints, it is more efficient to first obtain solutions for the original problem, in which x and y have not yet been added, and only as a second step to add the "stoichiometric constraints" corresponding to the added variables. This typically results in a substantial savings of computing time. With this modified procedure, we obtained the following results.

G_T, F_T, M_T, N_T fixed, so that only the first kinase, E_T, is allowed to vary:

  -1    1   -1   -1    1   -1    1   -1   -1    1   -1   -1   -1
   e   m0    a   m1    g    b   n0    c   n1    f    d    x    y

ET,GT,FT,NT fixed, so that only the first substrate, MT, is allowed to vary:

  -1    1    1    1   -1    1   -1    1    1   -1    1    1    1
   e   m0    a   m1    g    b   n0    c   n1    f    d    x    y

ET,FT,MT,NT fixed, so that only the first phosphatase, GT, is allowed to vary:

  -1    1    1   -1    1    1    1   -1   -1    1   -1   -1   -1
   e   m0    a   m1    g    b   n0    c   n1    f    d    x    y

ET,GT,FT,MT fixed, so that only the second substrate, NT, is allowed to vary:

  -1    1    1    1   -1    1   -1   -1   -1    1   -1   -1   -1
   e   m0    a   m1    g    b   n0    c   n1    f    d    x    y

ET,GT,MT,NT fixed, so that only the second phosphatase, FT, is allowed to vary:

  -1    1    1    1   -1    1   -1   -1    1   -1   -1   -1    1
   e   m0    a   m1    g    b   n0    c   n1    f    d    x    y

Let us interpret these solutions. Take for example the solution obtained when only the last substrate, NT, was allowed to vary. Both zero and the negative of this sign vector, namely:

   1   -1   -1   -1    1   -1    1    1    1   -1    1    1    1
   e   m0    a   m1    g    b   n0    c   n1    f    d    x    y

are solutions. This negative version is easier to interpret: since the changes in n_0, c, n_1, d are all positive and, by the definition (6), N_T = n_0 + c + n_1 + d, these are the signs of changes in steady states when N_T is experimentally increased. In this second form of the solution, we can read off the changes (positive for x and y, negative for b, and so forth) under such a perturbation.

5.2 A phosphotransfer model

(We thank Domitilla del Vecchio for suggesting that we study this example.) Consider the two reversible reactions

X_0 + Y_1 ⇌ X_1 + Y_0 (rate constants k_1, k_{−1})
X_1 + Y_1 ⇌ X_2 + Y_0 (rate constants k_2, k_{−2})

(we display rate constants because they play a role in the virtual constraints described later). This network can be thought to describe a phosphotransferase Y which, when in its active (phosphorylated) form Y_1, transfers a phosphate group to X_0 (and hence becomes inactivated, denoted by Y_0, while X_0 becomes X_1), and which, when active, can also transfer a second phosphate group to X_1 (and hence becomes inactivated, while X_1 becomes X_2). We write coordinates of states as x = (x_0, x_1, x_2, y_0, y_1). Two conservation laws are as follows:

x_0 + x_1 + x_2 = X_T,  x_1 + 2x_2 + y_1 = P_T

representing the conservation of total X and total number of phosphate groups.

Two rows of f = ΓR(x) are f_1(x) = k_1 x_0 y_1 − k_{−1} x_1 y_0 and f_2(x) = k_2 x_1 y_1 − k_{−2} x_2 y_0, so, as discussed earlier, using the virtual constraint obtained from k_{−2} x_2 f_1(x) − k_{−1} x_1 f_2(x), we may add to T the following sign vector:

(1, −1, 1, 0, 0)

We ask now what happens if the total amount of kinase, y0+y1=YT, is allowed to vary, but keeping XT and PT constant.

CRNSESI returns this output:

   1    *   -1    1    1
  x0   x1   x2   y0   y1

(all signs could be reversed and that would also be a solution). This means that x0, y0, and y1 change in the same direction, but x2 in the opposite direction, and x1 is undetermined (star). Since y0+y1=YT, an increase in YT means that both y0 and y1 increase, and thus we conclude that x0 increases and x2 decreases when the kinase amount is up‐regulated.

Is the fact that our theory cannot unambiguously predict the actual change in x_1 at steady state, under kinase perturbations, a reflection of an incomplete search by our algorithm, or an intrinsic property of this system? To answer this question, we simulated the system, taking for concreteness all parameters k_i = 1.

First, let us simulate a system in which X_T = P_T = 10 and we study a 10% up-regulation from Y_T = 1. We start from the following two initial states:

(1, 9, 0, 0, 1)^T and (1, 9, 0, 0.1, 1)^T

which correspond to YT=1 and YT=1.1, respectively. The steady states reached from here are as shown in the first and second rows, respectively, of the following matrix:

3.5772  3.3275  3.0953  0.5181  0.4819
3.6009  3.3264  3.0727  0.5718  0.5282

which means that the sign changes are:

(1, −1, −1, 1, 1)

consistently with our theoretical prediction.

Next, let us simulate a system in which XT=8, PT=10, and we study a 10% up‐regulation from YT=3, which is achieved by taking these two initial states:

(1, 7, 0, 0, 3)^T and (1, 7, 0, 0.3, 3)^T

which correspond to YT=3 and YT=3.3, respectively. The steady states reached from here are as shown in the first and second rows, respectively, of the following matrix:

2.4505  2.6607  2.8888  1.4383  1.5617
2.5166  2.6638  2.8197  1.6031  1.6969

which means that the sign changes are now:

(1, 1, −1, 1, 1)

again consistently with our theoretical prediction.

These simulations explain why the actual change in x_1 at steady state, under kinase perturbations, cannot be unambiguously predicted from our algorithm, which does not take into account the numerical values of the conserved quantities (nor, for that matter, of the kinetic constants k_i). It is remarkable, however, that the sign of the perturbation in the "active" form x_2 can be unambiguously predicted (and perhaps counter-intuitive that the change is negative).
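These simulations are easy to reproduce. The following self-contained Python sketch (our illustration; the computations reported above were done in MATLAB) integrates the mass-action equations of the two phosphotransfer reactions, with all rate constants equal to 1, using a fixed-step classical RK4 scheme:

```python
def phosphotransfer(s, k=(1.0, 1.0, 1.0, 1.0)):
    """Mass-action vector field for X0+Y1 <-> X1+Y0, X1+Y1 <-> X2+Y0."""
    k1, km1, k2, km2 = k
    x0, x1, x2, y0, y1 = s
    r1 = k1 * x0 * y1 - km1 * x1 * y0
    r2 = k2 * x1 * y1 - km2 * x2 * y0
    return [-r1, r1 - r2, r2, r1 + r2, -(r1 + r2)]

def steady_state(s, T=400.0, dt=0.01):
    """Integrate to a (numerical) steady state with classical RK4."""
    s = list(s)
    for _ in range(int(T / dt)):
        a = phosphotransfer(s)
        b = phosphotransfer([si + dt / 2 * ai for si, ai in zip(s, a)])
        c = phosphotransfer([si + dt / 2 * bi for si, bi in zip(s, b)])
        d = phosphotransfer([si + dt * ci for si, ci in zip(s, c)])
        s = [si + dt / 6 * (ai + 2 * bi + 2 * ci + di)
             for si, ai, bi, ci, di in zip(s, a, b, c, d)]
    return s

def sign_changes(lo, hi, tol=1e-6):
    """Componentwise signs of the steady-state differences."""
    return [(d > tol) - (d < -tol) for d in (h - l for h, l in zip(hi, lo))]
```

Running sign_changes(steady_state([1, 9, 0, 0, 1]), steady_state([1, 9, 0, 0.1, 1])) reproduces the pattern (1, −1, −1, 1, 1) of the first scenario, and the X_T = 8, Y_T = 3 scenario similarly yields (1, 1, −1, 1, 1).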

We also ran CRNSESI on two other scenarios: (1) keeping X_T and Y_T constant gives these signs:

  -1    *    1   -1    1
  x0   x1   x2   y0   y1

and (2) keeping PT and YT constant results in:

   1    1    *    1   -1
  x0   x1   x2   y0   y1

in which case the signs of perturbations in the variable x2 are not uniquely defined.

5.3 A ligand/receptor/antagonist/trap example

(We thank Gilles Gnacadja for suggesting that we try CRNSESI on this example.) The paper [40] studied a system that models the binding of interleukin‐1 (IL‐1) ligand to IL‐1 type I receptor (IL‐1RI), under competitive binding to the same receptor by human IL‐1 receptor antagonist (IL‐1Ra). IL‐1Ra is used as a therapeutic agent in order to block IL‐1 binding (which causes undesirable physiological responses). In addition, the model included the presence of a decoy (or “trap”) receptor that binds to both IL‐1 and IL‐1Ra. A key question addressed in that paper was the determination of how the equilibrium concentration of the receptor–ligand complex depends on initial concentrations of the various players (reflected in variations in stoichiometrically conserved quantities), and specifically the determination of the direction of the changes in concentrations. We show here how CRNSESI recovers conclusions from that paper, which were obtained there through very ingenious and lengthy ad‐hoc computations.

We will employ the same notations as in [40]: the species Xi, i=1,2,3,4 are, respectively, the ligand IL‐1, receptor IL‐1RI, antagonist IL‐1Ra, and trap; and the species Yi, i=1,2,3,4 are, respectively, the complexes X1X2, X2X3, X3X4, and X4X1. Thus, the reaction network is

X_1 + X_2 ⇌ Y_1,  X_2 + X_3 ⇌ Y_2,  X_3 + X_4 ⇌ Y_3,  X_4 + X_1 ⇌ Y_4

We use lower case letters to denote concentrations. There are four independent conservation laws:

x_1 + y_4 + y_1 = b_1
x_2 + y_1 + y_2 = b_2
x_3 + y_2 + y_3 = b_3
x_4 + y_3 + y_4 = b_4

We will fix b2, b3, and b4, and ask how steady states change in sign when b1 is perturbed. The other cases (perturb b2, etc.) are of course similar.

It is easy to see that α y_1 y_3 = β y_2 y_4, for some positive constants α, β, at all steady states, and this allows one to introduce an additional virtual constraint obtained from α y_1 y_3 − β y_2 y_4, meaning that we may add the following sign vector:

(0, 0, 0, 0, 1, −1, 1, −1)

to T. Indeed, four rows of the vector field are: f_1 = k_1 x_1 x_2 − ℓ_1 y_1, f_2 = k_2 x_2 x_3 − ℓ_2 y_2, f_3 = k_3 x_3 x_4 − ℓ_3 y_3, f_4 = k_4 x_4 x_1 − ℓ_4 y_4 (for appropriate positive constants k_i and ℓ_i). So, at steady states, y_1 is a multiple of x_1 x_2, and similarly for the other y_i, which gives that y_1 y_3 and y_2 y_4 are both multiples of x_1 x_2 x_3 x_4. Another way to say this is to note that the linear combination

k_1 k_3 k_4 x_1 x_4 f_2 + k_1 k_3 ℓ_2 y_2 f_4 − k_2 k_3 k_4 x_3 x_4 f_1 − k_2 k_4 ℓ_1 y_1 f_3

gives

k_2 k_4 ℓ_1 ℓ_3 y_1 y_3 − k_1 k_3 ℓ_2 ℓ_4 y_2 y_4
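Since the x-terms cancel identically (not only at steady states), this combination can be spot-checked numerically at an arbitrary positive state; the constants and concentrations below are arbitrary illustrative values (ℓ_i written l1, …, l4):

```python
import random

random.seed(1)
# Arbitrary positive constants and state (illustrative values only):
k1, k2, k3, k4 = (random.uniform(0.5, 2.0) for _ in range(4))
l1, l2, l3, l4 = (random.uniform(0.5, 2.0) for _ in range(4))
x1, x2, x3, x4 = (random.uniform(0.5, 2.0) for _ in range(4))
y1, y2, y3, y4 = (random.uniform(0.5, 2.0) for _ in range(4))

f1 = k1 * x1 * x2 - l1 * y1
f2 = k2 * x2 * x3 - l2 * y2
f3 = k3 * x3 * x4 - l3 * y3
f4 = k4 * x4 * x1 - l4 * y4

# All terms involving the x_i cancel, leaving only the y1 y3 and y2 y4 terms:
h = (k1 * k3 * k4 * x1 * x4 * f2 + k1 * k3 * l2 * y2 * f4
     - k2 * k3 * k4 * x3 * x4 * f1 - k2 * k4 * l1 * y1 * f3)
assert abs(h - (k2 * k4 * l1 * l3 * y1 * y3 - k1 * k3 * l2 * l4 * y2 * y4)) < 1e-9
```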

With this virtual constraint added, CRNSESI returns

   1   -1    1   -1    1    *    *    1
  x1   x2   x3   x4   y1   y2   y3   y4

for the signs of derivatives with respect to b_1. Note that two variables are undetermined in sign. (To be more precise, CRNSESI also returns the negatives of these signs. However, since b_1 = x_1 + y_4 + y_1, and since all three of x_1, y_4, y_1 change with the same sign, the negative corresponds to the derivative with respect to −b_1.) This is exactly what is proved in [40] (see the first columns of the matrices in (10) and (12) in that paper). Notably, CRNSESI gave slightly more, namely that these particular signs of (dy_2/db_1, dy_3/db_1) can never appear:

(1,1),(1,0),(0,1),(0,0).

In other words, it cannot be the case that both y_2 and y_3 are non-decreasing: at least one of them must decrease.

6 Some technical proofs

We collect here some of the longer proofs.

6.1 Proof of Lemma 3

Pick any μ~_v ∈ Σ~0, where v ∈ V_G ⊆ V, and fix any positive concentration vector x. We must prove that μ~_v ∈ Σ(x). As Σ(x) includes all expressions of the form sign(v R_x(x)), for v ∈ V, it will suffice to show that, for this same vector v,

sign(v R_{x_j}(x)) = sign(v A_j) (40)

for each species index j ∈ {1, …, n_S}, where R_{x_j}(x) denotes the column ∂R/∂x_j(x) of the Jacobian of R. For each such j, we will show the following three statements:

v^-A_j > 0 (so v^+A_j = 0) ⟹ v R_{x_j}(x) = −v^- R_{x_j}(x) < 0 (41)

v^+A_j > 0 (so v^-A_j = 0) ⟹ v R_{x_j}(x) = v^+ R_{x_j}(x) > 0 (42)

and

v^-A_j = v^+A_j = 0 ⟹ v R_{x_j}(x) = 0 (43)

Suppose first that v^-A_j > 0. Applying (28) with ρ = v^+, we have that v^+ R_{x_j}(x) = 0. Applying (29) with ρ = v^-, we have that v^- R_{x_j}(x) > 0. Therefore,

v R_{x_j}(x) = (v^+ − v^-) R_{x_j}(x) = v^+ R_{x_j}(x) − v^- R_{x_j}(x) = −v^- R_{x_j}(x) < 0

thus proving (41). If, instead, v^-A_j = 0 and v^+A_j > 0, a similar argument shows that (42) holds. Finally, suppose that v^+A_j = v^-A_j = 0. Then, again by (28), applied to ρ = v^+ and ρ = v^-,

v R_{x_j}(x) = (v^+ − v^-) R_{x_j}(x) = 0

and so (43) holds. The desired equality (40) follows from (41)–(43). Indeed, we consider three cases: (a) vA_j < 0, (b) vA_j > 0, and (c) vA_j = 0. In case (a), (30) shows that vA_j = −v^-A_j (because the first and third cases would give a non-negative value), and therefore −v^-A_j < 0, that is, v^-A_j > 0, so (41) gives that v R_{x_j}(x) is also negative. In case (b), similarly v^+A_j = vA_j > 0, and so (42) shows (40). Finally, consider case (c), vA_j = 0. If v^+A_j were non-zero, then, since v ∈ V_G, v^-A_j = 0, and therefore (30) would give that vA_j = v^+A_j > 0, a contradiction; similarly, v^-A_j must also be zero. So, (43) gives that v R_{x_j}(x) = 0 as well. □

6.2 Proof of Proposition 1

Let s = sign v, v ∈ V, and pick any j ∈ {1, …, n_S}. We claim that s^+α_j = 0 if and only if v^+A_j = 0, and that s^-α_j = 0 if and only if v^-A_j = 0. Since j is arbitrary, this shows that s ∈ S_G if and only if v ∈ V_G. Indeed, suppose that s^+α_j = 0. By Lemma 4, sign(v^+A_j) = sign(s^+α_j) = 0, so v^+A_j = 0. Conversely, if v^+A_j = 0 then s^+α_j = 0, for the same reason. Similarly, s^-α_j = 0 is equivalent to v^-A_j = 0.

Suppose now that s ∈ S_G and v ∈ V_G, and pick any j ∈ {1, …, n_S}. Assume that s^+α_j = 0. Since, by (35) and (30), sα_j = −s^-α_j and vA_j = −v^-A_j, we have, again by Lemma 4, that

sign(sα_j) = −sign(s^-α_j) = −sign(v^-A_j) = sign(vA_j)

If, instead, s^-α_j = 0 (and thus v^-A_j = 0),

sign(sα_j) = sign(s^+α_j) = sign(v^+A_j) = sign(vA_j)

As j was arbitrary, and we proved that the j th coordinates of the two vectors in (37) are the same, the vectors must be the same. □

6.3 Proof of Lemma 5

Pick any s ∈ S. Thus s = sign v, where v = νΓ for some ν ∈ R^{1×n_S}. Consider the set of indices of the coordinates of v that vanish (equivalently, s_i = 0),

I = {i ∈ {1, …, n_R} | v_i = 0}.

Suppose that I = {i_1, …, i_p}. Let e_i denote the canonical column vector (0, 0, …, 1, …, 0, 0)^T with a "1" in the i-th position and zeroes elsewhere, and introduce the n_R×p matrix E_I = (e_{i_1}, e_{i_2}, …, e_{i_p}). The definition of I means that νΓE_I = vE_I = 0 and νΓe_j = ve_j = v_j ≠ 0 for all j ∉ I. The matrix D = ΓE_I has integer, and in particular rational, entries. Thus, the left nullspace of D has a rational basis, that is, there is a set of rational vectors {u_1, …, u_q}, where q is the dimension of this nullspace, such that u_i D = 0 and uD = 0 if and only if u is a linear combination of the u_i's. In particular, since νD = 0, there are real numbers r_1, …, r_q such that ν = Σ_i r_i u_i. Now pick sequences of rational numbers r_i^{(k)} → r_i as k → ∞ and define ν^{(k)} := Σ_i r_i^{(k)} u_i. This sequence converges to ν, and, being a combination of the u_i's, ν^{(k)} D = 0 for all k. Let v^{(k)} := ν^{(k)} Γ, so we have that v^{(k)} → v as k → ∞, and v^{(k)} E_I = 0 for all k. On the other hand, for each j ∉ I, as ve_j ≠ 0, for all large enough k, (v^{(k)})_j, the j-th coordinate of v^{(k)}, has the same sign as v_j. In conclusion, for large enough k, sign v^{(k)} = sign v = s. Multiplying the rational vector ν^{(k)} by the least common denominator of its coordinates, the sign does not change, but now we have an integer vector with the same sign. □

7 Acknowledgments

This research was supported in part by NIH Grant 1R01GM100473, ONR Grant N00014‐13‐1‐0074, and AFOSR Grant FA9550‐14‐1‐0060.

9.1 A review of chemical reaction networks terminology

We review here some basic notions about chemical networks. See, for example, [41, 42] for more details. We consider a collection of chemical reactions that involves a set of n_S "species":

S_i,  i ∈ {1, 2, …, n_S}

The "species" might be ions, atoms, or large molecules, depending on the context. A CRN involving these species is a set of chemical reactions R_k, k ∈ {1, 2, …, n_R}, represented symbolically as:

R_k:  Σ_{i=1}^{n_S} a_{ik} S_i → Σ_{i=1}^{n_S} b_{ik} S_i (44)

where the a_{ik} and b_{ik} are some non-negative integers that quantify the number of units of species S_i consumed, respectively produced, by reaction R_k. Thus, in reaction 1, a_{11} units of species S_1 combine with a_{21} units of species S_2 and so on, to produce b_{11} units of species S_1, b_{21} units of species S_2 and so on, and similarly for each of the other n_R − 1 reactions. (If there is a reverse reaction to (44), Σ_{i=1}^{n_S} a′_{ik} S_i → Σ_{i=1}^{n_S} b′_{ik} S_i with a′_{ik} = b_{ik} and b′_{ik} = a_{ik}, one sometimes summarises both by a reversible arrow Σ_{i=1}^{n_S} a_{ik} S_i ⇌ Σ_{i=1}^{n_S} b_{ik} S_i. However, from a theoretical standpoint, we view each direction as a separate reaction.)

We will assume the following "non-autocatalysis" condition: no species S_i can appear on both sides of the same reaction. With this assumption, either a_{ik} = 0 or b_{ik} = 0 for each species S_i and each reaction R_k (both are zero if the species in question is neither consumed nor produced). Note that we are not excluding autocatalysis which occurs through one or more intermediate steps, such as the autocatalysis of S_1 in S_1 + S_2 → S_3 → 2S_1 + S_4, so this assumption is not as restrictive as it might at first appear.

Suppose that a_{ik} > 0 for some (i, k); then we say that species S_i is a reactant of reaction R_k, and by the non-autocatalysis assumption, b_{ik} = 0 for this pair (i, k). If instead b_{ik} > 0, then we say that species S_i is a product of reaction R_k, and again by the non-autocatalysis assumption, a_{ik} = 0 for this pair (i, k).

It is convenient to arrange the a_{ik} and b_{ik} into two n_S×n_R matrices A and B, respectively, and introduce the stoichiometry matrix Γ = B − A. In other words,

Γ = (γ_{ij}) ∈ R^{n_S×n_R}

is defined by:

γ_{ij} = b_{ij} − a_{ij},  i = 1, …, n_S,  j = 1, …, n_R (45)

The matrix Γ has as many columns as there are reactions. Its k-th column shows, for each species (ordered according to its index i), the net amount "produced minus consumed" by reaction R_k. The symbolic information given by the reactions (44) is summarised by the matrix Γ. Observe that γ_{ik} = −a_{ik} < 0 if S_i is a reactant of reaction R_k, and γ_{ik} = b_{ik} > 0 if S_i is a product of reaction R_k.

To describe how the state of the network evolves over time, one must provide in addition to Γ a rule for the evolution of the vector:

([S_1(t)], [S_2(t)], …, [S_{n_S}(t)])^T

where the notation [Si(t)] means the concentration of the species Si at time t. We will denote the concentration of Si simply as xi(t)=[Si(t)] and let x=(x1,,xnS)T. Observe that only non‐negative concentrations make physical sense. A zero concentration means that a species is not present at all; we will be interested in positive vectors x of concentrations, those for which xi>0 for all i, meaning that all species are present.

Another ingredient that we require is a formula for the actual rate at which the individual reactions take place. We denote by R_k(x) the algebraic form of the k-th reaction rate. We postulate the following two axioms that the reaction rates R_k(x), k = 1, …, n_R, must satisfy:

  • for each (i, k) such that species Si is a reactant of Rk, ∂Rk/∂xi(x) > 0 for all (positive) concentration vectors x;

  • for each (i, k) such that species Si is not a reactant of Rk, ∂Rk/∂xi(x) = 0 for all (positive) concentration vectors x.

These axioms are natural, and are satisfied by every reasonable model, and specifically by mass‐action kinetics, in which the reaction rate is proportional to the product of the concentrations of all the reactants:

Rk(x) = κk ∏_{i=1}^{nS} xi^{aik},  k = 1, …, nR

The positive coefficients κk are called the reaction, or kinetic, constants. By convention, xi^{aik} = 1 when aik = 0.
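The mass-action rate vector can be computed directly from the matrix A and the constants κ. The following NumPy sketch is illustrative (the function name and the sample numbers are ours, not from the paper); it uses the convention xi^0 = 1 automatically:

```python
import numpy as np

def mass_action_rates(x, A, kappa):
    """Mass-action rate vector: R_k(x) = kappa_k * prod_i x_i**a_ik.

    x     : positive concentration vector, length nS
    A     : nS x nR matrix of reactant coefficients a_ik
    kappa : length-nR vector of kinetic constants
    """
    # Broadcast x against each column of A and multiply over species i;
    # entries with a_ik = 0 contribute x_i**0 = 1, matching the convention.
    return kappa * np.prod(x[:, None] ** A, axis=0)

# Example: the network S1+S2 -> S3 -> 2S1+S4 from the text.
A = np.array([[1, 0], [1, 0], [0, 1], [0, 0]])
x = np.array([2.0, 3.0, 0.5, 1.0])
kappa = np.array([1.0, 4.0])
print(mass_action_rates(x, A, kappa))  # [1*2*3, 4*0.5] = [6., 2.]
```

Note that only the reactant matrix A enters the rate formula; the product matrix B affects the dynamics only through Γ.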

Recall that aik>0 and bik=0 if and only if Si is a reactant of Rk. Therefore the above axioms state that, for every positive x,

∂Rk/∂xi(x) > 0 ⟺ aik > 0 (46)

and also

∂Rk/∂xi(x) = 0 ⟺ aik = 0 (47)

because the expressions on both sides are either zero or positive.

We arrange the reaction rates into a column vector function R(x) ∈ ℝ^{nR}:

R(x) := (R1(x), R2(x), …, RnR(x))^T

With these conventions, the system of differential equations associated to the CRN is given as in (25), which we repeat here for convenience:

dxdt=f(x)=ΓR(x)
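The right-hand side f(x) = ΓR(x) can be assembled and integrated numerically. The sketch below (assuming mass-action kinetics and NumPy; the reversible-binding example and the forward-Euler loop are illustrative choices, not part of the paper's model) also shows how conservation laws, left null vectors of Γ, are preserved along trajectories:

```python
import numpy as np

def f(x, A, B, kappa):
    """Right-hand side dx/dt = Gamma @ R(x) under mass-action kinetics."""
    Gamma = B - A
    R = kappa * np.prod(x[:, None] ** A, axis=0)  # mass-action rates
    return Gamma @ R

# Reversible binding S1 + S2 <-> S3, written as two irreversible reactions.
A = np.array([[1, 0], [1, 0], [0, 1]])
B = np.array([[0, 1], [0, 1], [1, 0]])
kappa = np.array([1.0, 1.0])

# Forward-Euler integration; the conserved totals x1 + x3 and x2 + x3
# (left null vectors of Gamma) stay constant along the trajectory.
x = np.array([1.0, 1.0, 0.0])
for _ in range(1000):
    x = x + 0.01 * f(x, A, B, kappa)
print(x)  # approaches the steady state where x1 * x2 = x3
```

A production-grade simulation would use a stiff ODE solver rather than a fixed-step Euler loop, but the structure f(x) = ΓR(x) is the same.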

9.2 Existence and uniqueness for steady states in the example

For our motivating example (1), steady states are unique once all conservation laws are taken into account. Existence of steady states follows from the fact that states evolve in a compact convex set, as argued, for example, in [23] (Supplemental Material). Uniqueness is shown as follows. At a steady state, the right‐hand sides of the differential equations:

ė = −αm0e + βa + χa
ṁ0 = −αm0e + βa + ϕb
ȧ = αm0e − βa − χa
ṁ1 = χa − δm1g + εb − γn0m1 + ηc + ιc
ġ = −δm1g + εb + ϕb
ḃ = δm1g − εb − ϕb
ṅ0 = −γn0m1 + ηc + λd
ċ = γn0m1 − ηc − ιc
ṅ1 = ιc − φn1f + κd
ḟ = −φn1f + κd + λd
ḋ = φn1f − κd − λd

(where α, β, … are positive constants) are all set to zero, together with the conservation laws. We argue as follows, first using the constraints to express all variables in terms of e, viewed as a parameter, and then pointing out that this forces e to be uniquely determined (“increasing” and “decreasing” always mean strictly so):

  1. the conservation law for ET gives that a is a decreasing function of e;

  2. substituting a = ET − e into ė = 0 and solving for m0 gives that m0 is a decreasing function of e;

  3. from ė − ṁ0 = 0, b is an increasing function of a, and therefore b is a decreasing function of e;

  4. substituting g = GT − b into ġ = 0 and solving for m1 gives that m1 is an increasing function of b, and thus m1 is a decreasing function of e;

  5. the conservation law for MT gives that c is a decreasing function of m0, a, m1 and b; since each of these decreases with e, c is an increasing function of e;

  6. from ṅ1 − ḟ = 0, d is an increasing function of c, so d is an increasing function of e;

  7. solving ċ = 0 for n0 shows that n0 is increasing in c and decreasing in m1; since c increases with e and m1 decreases with e, n0 is an increasing function of e;

  8. substituting f = FT − d into ḟ = 0 and solving for n1 gives that n1 is an increasing function of d, and thus an increasing function of e.

In conclusion, the sum of concentrations n0 + c + n1 + d is a strictly increasing function θ(e) of e. Thus, the constraint NT = θ(e) determines a unique possible value for e. Substituting back, unique values are obtained for all other concentrations.
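Since θ is strictly increasing, the scalar equation NT = θ(e) has at most one solution and can be located by bisection. The sketch below illustrates the idea with a hypothetical increasing θ (a stand-in for the composite map e ↦ n0 + c + n1 + d, not the actual expression from the model):

```python
def solve_unique(theta, target, lo, hi, tol=1e-12):
    """Unique root of theta(e) == target for strictly increasing theta.

    Strict monotonicity guarantees at most one solution; provided
    theta(lo) <= target <= theta(hi), bisection converges to it.
    """
    assert theta(lo) <= target <= theta(hi), "target not bracketed"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if theta(mid) < target:
            lo = mid  # root lies to the right of mid
        else:
            hi = mid  # root lies at or to the left of mid
    return 0.5 * (lo + hi)

# Hypothetical strictly increasing theta, for illustration only:
theta = lambda e: e + e**3
e_star = solve_unique(theta, 10.0, 0.0, 10.0)
print(e_star)  # the unique e with e + e**3 == 10
```

The same one-dimensional reduction is what makes the uniqueness argument effective in practice: all other steady-state concentrations follow by back-substitution once e is known.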

8 References

  • 1. Kholodenko B.N. Kiyatkin A. Bruggeman F. Sontag E.D. Westerhoff H., and Hoek J.: ‘Untangling the wires: a novel strategy to trace functional interactions in signalling and gene networks’, Proc. Natl. Acad. Sci. USA, 2002, 99, pp. 12841–12846 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 2. de la Fuente A. Brazhnik P., and Mendes P.: ‘Linking the genes: inferring quantitative gene networks from microarray data’, Trends Genet., 2002, 18, (8), 395–398. (doi: 10.1016/S0168-9525(02)02692-6) [DOI] [PubMed] [Google Scholar]
  • 3. Gardner T.S. di Bernardo D. Lorenz D., and Collins J.J.: ‘Inferring genetic networks and identifying compound mode of action via expression profiling’, Science, 2003, 301, (5629), pp. 102–105 (doi: 10.1126/science.1081900) [DOI] [PubMed] [Google Scholar]
  • 4. Andrec M. Kholodenko B.N. Levy R.M., and Sontag E.D.: ‘Inference of signalling and gene regulatory networks by steady‐state perturbation experiments: structure and accuracy’, J. Theoret. Biol., 2005, 232, (3), pp. 427–441 (doi: 10.1016/j.jtbi.2004.08.022) [DOI] [PubMed] [Google Scholar]
  • 5. Santos S.D.M. Verveer P.J., and Bastiaens P.I.H.: ‘Growth factor induced MAPK network topology shapes Erk response determining PC‐12 cell fate’, Nat. Cell Biol., 2007, 9, pp. 324–330 (doi: 10.1038/ncb1543) [DOI] [PubMed] [Google Scholar]
  • 6. Kholodenko B. Yaffe M.B., and Kolch W.: ‘Computational approaches for analyzing information flow in biological networks’, Sci. Signal., 2012, 5, (220), re1. (doi: 10.1126/scisignal.2002961) [DOI] [PubMed] [Google Scholar]
  • 7. Prabakaran S. Gunawardena J., and Sontag E.D.: ‘Paradoxical results in perturbation‐based signalling network reconstruction’, Biophys. J., 2014, 106, pp. 2720–2728 (doi: 10.1016/j.bpj.2014.04.031) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 8. Pearson G. Robinson F., and Beers Gibson T. et al.: ‘Mitogen‐activated protein (MAP) kinase pathways: regulation and physiological functions’, Endocr. Rev., April 2001, 22, (2), pp. 153–183 [DOI] [PubMed] [Google Scholar]
  • 9. Huang C.‐Y.F., and Ferrell J.E.: ‘Ultrasensitivity in the mitogen‐activated protein kinase cascade’, Proc. Natl. Acad. Sci. USA, 1996, 93, pp. 10078–10083 (doi: 10.1073/pnas.93.19.10078) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10. Shaul Y.D., and Seger R.: ‘The MEK/ERK cascade: from signalling specificity to diverse functions’, Biochim. Biophys. Acta, 2007, 1773, (8), pp. 1213–1226 (doi: 10.1016/j.bbamcr.2006.10.005) [DOI] [PubMed] [Google Scholar]
  • 11. Bardwell L. Zou X. Nie Q., and Komarova N.L.: ‘Mathematical models of specificity in cell signalling’, Biophys. J., 2007, 92, (10), pp. 3425–3441 (doi: 10.1529/biophysj.106.090084) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12. Ledford H.. First‐in‐class cancer drug approved to fight melanoma. http://blogs.nature.com/news/2013/05/first‐in‐class‐cancer‐drug‐approved‐to‐fight‐melanoma.html, May 2013. Nature news blog
  • 13. Willems J.C.: ‘Behaviors, latent variables, and interconnections’, Syst., Control Inf., 1999, 43, (9), pp. 453–464 [Google Scholar]
  • 14. Lauffenburger D.A.: ‘Cell signalling pathways as control modules: complexity for simplicity?’, Proc. Natl. Acad. Sci. USA, 2000, 97, pp. 5031–5033 (doi: 10.1073/pnas.97.10.5031) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 15. Hartwell L.H. Hopfield J.J. Leibler S., and Murray A.W.: ‘From molecular to modular cell biology’, Nature, 1999, 402, pp. 47–52 (doi: 10.1038/35011540) [DOI] [PubMed] [Google Scholar]
  • 16. Saez‐Rodriguez J. Kremling A., and Gilles E.D.: ‘Dissecting the puzzle of life: modularization of signal transduction networks’, Comput. Chem. Eng., 2005, pp. 619–629 (doi: 10.1016/j.compchemeng.2004.08.035) [DOI] [Google Scholar]
  • 17. Del Vecchio D. Ninfa A.J., and Sontag E.D.: ‘Modular cell biology: Retroactivity and insulation’, Nat. Mol. Syst. Biol., 2008, 4, p. 161 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 18. Kim K.H., and Sauro H.M.: ‘Measuring retroactivity from noise in gene regulatory networks’, Biophys. J., March 2011, 100, (5), pp. 1167–1177 (doi: 10.1016/j.bpj.2010.12.3737) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 19. Alexander R.P. Kim P.M. Emonet T., and Gerstein M.B.: ‘Understanding modularity in molecular networks requires dynamics’, Sci. Signal., 2009, 2, (81), p. 44 (doi: 10.1126/scisignal.281pe44) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 20. Jiang P. Ventura A.C. Sontag E.D. Merajver S.D. Ninfa A.J., and Del Vecchio D.: ‘Load‐induced modulation of signal transduction networks’, Sci. Signal., 2011, 4, (194), ra67 (doi: 10.1126/scisignal.2002152) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 21. Kim Y. Paroush Z. Nairz K. Hafen E. Jimenez G., and Shvartsman S. Y.: ‘Substrate‐dependent control of MAPK phosphorylation in vivo’, Mol. Syst. Biol., 2011, 7, p. 467. (doi: 10.1038/msb.2010.121) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 22. Jayanthi S. Nilgiriwala K.S., and Del Vecchio D.: ‘Retroactivity controls the temporal dynamics of gene transcription’, ACS Synth. Biol., August 2013, 2, (8), pp. 431–441 (doi: 10.1021/sb300098w) [DOI] [PubMed] [Google Scholar]
  • 23. Barton J., and Sontag E.D.: ‘The energy costs of insulators in biochemical networks’, Biophys. J., 2013, 104, pp. 1380–1390 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 24. Björner A. Las Vergnas M. Sturmfels B. White N., and Ziegler G.M., ‘Oriented Matroids’ (Cambridge University Press, 1999, 2nd edn.) [Google Scholar]
  • 25. Harrison J., ‘Handbook of practical logic and automated reasoning’ (Cambridge University Press, New York, NY, USA, 2009, 1st edn.) [Google Scholar]
  • 26. Feinberg M.: ‘Chemical reaction network structure and the stability of complex isothermal reactors – i. the deficiency zero and deficiency one theorems’, Chem. Eng. Sci., 1987, 42, pp. 2229–2268. (doi: 10.1016/0009-2509(87)80099-4) [DOI] [Google Scholar]
  • 27. Feinberg M.: ‘The existence and uniqueness of steady states for a class of chemical reaction networks’, Archive for Rational Mechanics and Analysis, 1995, 132, pp. 311–370. (doi: 10.1007/BF00375614) [DOI] [Google Scholar]
  • 28. Conradi C. Saez‐Rodriguez J. Gilles E.‐D., and Raisch J.: ‘Using chemical reaction network theory to discard a kinetic mechanism hypothesis’, IEE Proc. Systems Biology, 2005, 152, pp. 243–248 (doi: 10.1049/ip-syb:20050045) [DOI] [PubMed] [Google Scholar]
  • 29. Sontag E.D.: ‘Structure and stability of certain chemical networks and applications to the kinetic proofreading model of T‐cell receptor signal transduction’, IEEE Trans. Autom. Control, 2001, 46, (7), pp. 1028–1047 (doi: 10.1109/9.935056) [DOI] [Google Scholar]
  • 30. Craciun G, and Feinberg M: ‘Multiple equilibria in complex chemical reaction networks: extensions to entrapped species models’, IEE Proc Syst. Biol, 2006, 153, (4), pp. 179–186 (doi: 10.1049/ip-syb:20050093) [DOI] [PubMed] [Google Scholar]
  • 31. Craciun G. Tang Y., and Feinberg M.: ‘Understanding bistability in complex enzyme‐driven reaction networks’, PNAS, 2006, 103, (23), pp. 8697–8702 (doi: 10.1073/pnas.0602767103) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 32. Wang L., and Sontag E.D.: ‘On the number of steady states in a multiple futile cycle’, J. Math. Biol., 2008, 57, pp. 29–52 (doi: 10.1007/s00285-007-0145-z) [DOI] [PubMed] [Google Scholar]
  • 33. Siegal‐Gaskins D. Grotewold E., and Smith G.D.: ‘The capacity for multistability in small gene regulatory networks’, BMC Syst. Biol., 2009, 3, p. 96 (doi: 10.1186/1752-0509-3-96) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 34. Wilhelm T.: ‘The smallest chemical reaction system with bistability’, BMC Syst. Biol., 2009, 3, p. 90 (doi: 10.1186/1752-0509-3-90) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 35. Halasz A.M. Lai H‐J. McCabe Pryor M. Radhakrishnan K., and Edwards J.S.: ‘Analytical solution of steady‐state equations for chemical reaction networks with bilinear rate laws’, IEEE/ACM Trans. Comput. Biol. Bioinfor., 2013, 10, (4), pp. 957–969. (doi: 10.1109/TCBB.2013.41) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 36. Griva I. Nash S.G., and Sofer A., ‘Linear and nonlinear optimization’ (Society for Industrial Mathematics, Philadelphia, 2008, 2nd ed.) [Google Scholar]
  • 37. Papadimitriou C.H., and Steiglitz K., ‘Combinatorial optimization: algorithms and complexity’ (Prentice‐Hall, Inc., Upper Saddle River, NJ, USA, 1982) [Google Scholar]
  • 38. Williams H.P., ‘Logic and integer programming’ (Springer Publishing Company, Incorporated, 2009, 1st edn.) [Google Scholar]
  • 39. Guillen‐Gosalbez G. Miro A. Alves R. Sorribas A., and Jimenez L.: ‘Identification of regulatory structure and kinetic parameters of biochemical networks via mixed‐integer dynamic optimization’, BMC Syst. Biol., October 2013, 7, (1), p. 113. (doi: 10.1186/1752-0509-7-113) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 40. Gnacadja G. Shoshitaishvili A., and Gresser M.J. et al.: ‘Monotonicity of interleukin‐1 receptor‐ligand binding with respect to antagonist in the presence of decoy receptor’, J. Theor. Biol., February 2007, 244, (3), pp. 478–488 (doi: 10.1016/j.jtbi.2006.07.023) [DOI] [PubMed] [Google Scholar]
  • 41. Sontag E.D.. Online text: Lecture Notes on Mathematical Systems Biology. http://www.math.rutgers.edu/~sontag/systems_biology_notes.pdf, 2014.
  • 42. Ingalls B., ‘Mathematical modeling in systems biology’ (MIT Press, 2013) [Google Scholar]
