Author manuscript; available in PMC: 2020 Feb 26.
Published in final edited form as: Proc IEEE Conf Decis Control. 2019 Jan 21;2018:6938–6944. doi: 10.1109/cdc.2018.8618649

Reprogramming cooperative monotone dynamical systems

Rushina Shah 1, Domitilla Del Vecchio 1
PMCID: PMC7043062  NIHMSID: NIHMS1556117  PMID: 32103850

Abstract

Multistable dynamical systems are ubiquitous in nature, especially in the context of regulatory networks controlling cell fate decisions, wherein stable steady states correspond to different cell phenotypes. In the past decade, it has become experimentally possible to “reprogram” the fate of a cell by suitable externally imposed input stimulations. In several of these reprogramming instances, the underlying regulatory network has a known structure and often it falls in the class of cooperative monotone dynamical systems. In this paper, we therefore leverage this structure to provide concrete guidance on the choice of inputs that reprogram a cooperative dynamical system to a desired target steady state. Our results are parameter-independent and therefore can serve as a practical guidance to cell-fate reprogramming experiments.

I. Introduction

Multistability, that is, the co-existence of multiple asymptotically stable steady states, is a common feature of many dynamical systems, especially of those capturing the dynamics of gene regulatory networks (GRNs) implicated in cell fate decisions [1]. In these systems, each stable steady state typically represents one specific cell phenotype, such as skin, blood, or pluripotent cell types, and transitions from less differentiated to more specialized phenotypes are orchestrated in the natural process of cell differentiation [2]. For decades, a popular metaphor due to Waddington [3] was used to explain the concept that the process of cell differentiation is irreversible: a ball (the cell phenotype) rolls down a hill under the effect of gravity starting from the top of the hill (pluripotent stem cell type) and ending in the lowest basins (terminally differentiated cells).

It was only in recent years that ground-breaking experiments demonstrated that the process can actually be reversed [4], although with very low efficiency [5], and that cell types can also be interconverted [6], that is, the fate of a cell can be reprogrammed [7]. In reprogramming practices, external (positive or negative) stimulations are applied to select nodes of a GRN, by increasing the rate of production of the transcription factor (TF) in the node (most common approach) or by enhancing its degradation [8]. Selecting the nodes where the input stimulation needs to be applied and the required stimulation type (positive or negative) for triggering a desired state transition relies chiefly on trial-and-error experiments, guided by biological intuition [9].

Many GRNs involved in important cell fate decisions have been experimentally characterized, such that at least the topology of the network is known [7], [10], [11], [12]. Examples include the so-called fully connected triad [10], describing the core pluripotency network controlling maintenance of pluripotency; the PU.1/GATA1 network [12] controlling transition to the myeloid lineage or to the erythroid lineage from the multipotent common myeloid progenitor cell type; and more extended regulatory networks in which these core motifs are included (see [13], for example). It turns out that these core network motifs belong to the class of monotone dynamical systems (cooperative or competitive) [14] or can be decomposed into interconnection of monotone systems [15], [16]. In particular, the pluripotency network (see [17]) and the PU.1/GATA1 network, as we demonstrate in this paper, belong to the class of generalized cooperative systems [14].

Theoretical studies of multistability in monotone dynamical systems have appeared before, most notably in the works of [18], [19], [20], which provide easily checkable graphical conditions for characterizing global stability behavior and apply these general checks to biological systems. In [21], a theoretical analysis of bistable monotone systems is performed to design pulse-based inputs to switch steady states. Apart from these theoretical works, most of the available studies of multistability take a computational approach, either through bifurcation tools [22] or through sampling-based methods to determine parameter conditions for a desired stability landscape [23], [24]. Multistability of specific systems such as the pluripotency network and the PU.1/GATA1 network has been the subject of a number of studies in the systems biology literature [11], [12]. These works investigate parameter conditions under which the system under study can be bistable or tristable, and some of them also study how input parameters can be transiently changed in order to trigger a transition between the steady states. The approaches used in these studies commonly rely on graphical methods, such as nullcline analysis for systems in two dimensions, bifurcation analysis of one parameter at a time, and computational simulation to explore parameter spaces with sampling-based methods.

In this paper we focus on the class of generalized cooperative dynamical systems with inputs and address the question of which nodes need to be stimulated, and with what input (positive or negative), to trigger a transition to a desired target stable steady state. In particular, we leverage the theory of generalized cooperative dynamical systems [14] to provide general criteria based only on the system’s structure (as opposed to parameter values) and input type (positive or negative) to select appropriate stimulation for a given state reprogramming task. To this end, the paper is organized as follows. In Section II, we describe the PU.1/GATA1 network as a motivating example. In Section III, we formally define generalized cooperative monotone dynamical systems and state the problem definition. In Section IV, we present our results, and apply them to the PU.1/GATA1 network in Section V. Finally, in Section VI, we present our conclusions.

II. Motivating example

We consider the interaction network between transcription factors PU.1 and GATA1, known to be the core network controlling lineage specification of hematopoietic stem cells (HSCs), which give rise to all the blood cells [12]. PU.1 and GATA1 mutually repress each other, while also undergoing self-activation. This interaction network motif is shown in Fig. 1A. The motif results in three stable steady states: one characterized by a high concentration of PU.1 and a low concentration of GATA1, which corresponds to the myeloid lineage; one characterized by a low concentration of PU.1 and a high concentration of GATA1, which corresponds to the erythrocyte lineage; and one characterized by an intermediate level of PU.1 and GATA1, which corresponds to the progenitor cell.

Fig. 1:

The PU.1-GATA1 system. (A) The interaction graph between the two species: PU.1 denoted here as X1, and GATA1 denoted here as X2. Each species represses the other, while also self-regulating in the form of self-activation. (B) The nullclines of system (1), steady states (stable represented by filled and unstable by empty circles) and the vector-field. Steady state S1 with high GATA1 and low PU.1 represents the erythrocyte lineage, steady state S2 with low GATA1 and high PU.1 represents the myeloid lineage, and the intermediate steady state S0 represents the progenitor state. The parameter values used are: α1 = α2 = 5 nM/s, β1 = β2 = 5 nM/s, k1 = k3 = 1 nM, k2 = k4 = 2 nM, γ1 = γ2 = 5 s−1, n1 = n2 = n3 = n4 = 2.

Multiple ordinary differential equation (ODE) models that capture these interactions and give rise to tristability have been proposed [25]. For the purpose of this example, we use the following Hill function based description of the system:

\dot{x}_1 = \beta_1 + \frac{\alpha_1 (x_1/k_1)^{n_1}}{1 + (x_1/k_1)^{n_1} + (x_2/k_2)^{n_2}} - \gamma_1 x_1, \qquad \dot{x}_2 = \beta_2 + \frac{\alpha_2 (x_2/k_3)^{n_3}}{1 + (x_2/k_3)^{n_3} + (x_1/k_4)^{n_4}} - \gamma_2 x_2. \tag{1}

Here, x1 and x2 are the concentrations of the two species, PU.1 and GATA1, β1, β2 are the rate constants of leaky expression of the species, α1, α2 are the activation rate constants, k1, k2, k3 and k4 are the apparent dissociation constants, n1, n2, n3 and n4 are the Hill function coefficients, and γ1, γ2 are the decay rate constants of the species.
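As an illustrative sanity check (this is our sketch, not the authors' code), model (1) can be integrated with a simple forward-Euler scheme using the parameter values quoted in the Fig. 1 caption. Any steady state of (1) must lie between βi/γi and (βi + αi)/γi, i.e., between 1 and 2 nM for these parameters, which the sketch verifies numerically:

```python
# Forward-Euler simulation of the Hill-function model (1) for the
# PU.1/GATA1 network, with the parameter values from the Fig. 1 caption
# (alpha = beta = gamma = 5, k1 = k3 = 1, k2 = k4 = 2, all n = 2).
# Illustrative sketch only.

def f(x1, x2, b1=5.0, b2=5.0, a1=5.0, a2=5.0,
      k1=1.0, k2=2.0, k3=1.0, k4=2.0, g1=5.0, g2=5.0,
      n1=2, n2=2, n3=2, n4=2):
    dx1 = b1 + a1 * (x1/k1)**n1 / (1 + (x1/k1)**n1 + (x2/k2)**n2) - g1 * x1
    dx2 = b2 + a2 * (x2/k3)**n3 / (1 + (x2/k3)**n3 + (x1/k4)**n4) - g2 * x2
    return dx1, dx2

def simulate(x0, T=50.0, dt=1e-3):
    x1, x2 = x0
    for _ in range(int(T / dt)):
        d1, d2 = f(x1, x2)
        x1, x2 = x1 + dt * d1, x2 + dt * d2
    return x1, x2

x1, x2 = simulate((0.5, 1.8))
d1, d2 = f(x1, x2)
assert abs(d1) < 1e-6 and abs(d2) < 1e-6     # trajectory has settled
# Any steady state satisfies beta_i/gamma_i <= x_i <= (beta_i+alpha_i)/gamma_i:
assert 0.99 <= x1 <= 2.01 and 0.99 <= x2 <= 2.01
```

The horizon T = 50 s (hundreds of the ~0.2 s decay time 1/γi) and step dt = 1 ms are arbitrary but comfortably sufficient for convergence at these parameter values.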

This ODE model, for certain parameter values, is tristable (with three stable steady states and two unstable steady states). The nullclines and steady states for such a tristable system are shown in Fig. 1B. Here, steady states S1 and S2 represent the differentiated states, the erythrocyte lineage and the myeloid lineage, respectively. The state S0 represents the undifferentiated progenitor cell. The key question for reprogramming cells (converting one cell type to another using external inputs) is then a question of reachability of these different steady states. In particular, we consider constant external inputs such that the trajectory of the system under this input converges inside the region of attraction of the desired steady state. Once this external input is removed, the system’s trajectory then converges to this steady state. The question we ask, then, is when such an input exists that can trigger a transition to a given steady state, for example S0, starting from either a particular initial state (such as S1 or S2) or from any initial state, and further, what this input is. For a specific 2D system as in eqn. (1), it is possible to gain insight into these questions using geometric intuition from nullcline analysis. However, the way in which these nullclines change with parameters may be non-trivial, and hence it may be difficult to obtain a definite answer. For systems with dimension higher than two, geometric intuition is often not possible. Therefore we seek a strategy for selecting the appropriate inputs for reprogramming based on the structure of the underlying network (and not specific parameter values) and valid for high-dimensional systems. To this end, we consider the reprogramming problem for multistable, cooperative monotone dynamical systems, of which the PU.1/GATA1 network of Fig. 1(A) is an example. The next section formally defines these terms.

III. Background: System and problem definition

A. Cooperative monotone dynamical systems

This section formally defines cooperative monotone dynamical systems. We first define a partial order “ ≤ ” to compare two vectors in Rn. We then use this definition of a partial order to define a cooperative monotone dynamical system. These systems describe some commonly occurring multi-stable biological network motifs. They have properties that allow geometric reasoning to be used to obtain strong results on reprogrammability, and further, are easily recognized by their graphical structure.

Definition 1: A partial order ≤ on a set S is a binary relation that is reflexive, antisymmetric, and transitive. That is, for all a, b, c ∈ S, the following are true:

  1. Reflexivity: aa.

  2. Antisymmetry: ab and ba implies that a = b.

  3. Transitivity: ab and bc implies that ac.

Examples. On the set S = ℝn, the following are partial orders:

  1. x ≤ y if xi ≤ yi for all i ∈ {1, …, n}.

  2. x ≤ y if xi ≤ yi for i ∈ I1 and xj ≥ yj for j ∈ I2, where I1 ∪ I2 = {1, …, n}.

To more easily represent the partial orders above, we introduce some notations from [14]. Let m = (m1, m2, …, mn), where mi ∈ {0,1}, and

Km = {x ∈ ℝn : (−1)^mi xi ≥ 0, 1 ≤ i ≤ n}.

Km is an orthant in ℝn, and generates the partial order ≤m defined by x ≤m y if and only if y − x ∈ Km. The negative orthant −Km is then −Km := {−x | x ∈ Km}. We write x <m y when x ≤m y and x ≠ y, and x ≪m y when x ≤m y and xi ≠ yi, ∀i ∈ {1, …, n}. Note that, for the examples above, the corresponding m is: (i) mi = 0 ∀i ∈ {1, …, n}, i.e., Km = ℝn+; (ii) mi = 0, ∀i ∈ I1, and mj = 1, ∀j ∈ I2.
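The orthant order can be stated compactly in code; a minimal sketch (the function name leq_m is ours):

```python
# Sketch of the orthant partial order <=_m from the text:
# x <=_m y  iff  (-1)**m_i * (y_i - x_i) >= 0 for every component i.

def leq_m(x, y, m):
    return all((-1)**mi * (yi - xi) >= 0 for xi, yi, mi in zip(x, y, m))

# With m = (0, 1), x <=_m y means x1 <= y1 and x2 >= y2.
assert leq_m((1.0, 5.0), (2.0, 3.0), (0, 1))
assert not leq_m((2.0, 3.0), (1.0, 5.0), (0, 1))
```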

We consider a system Σu of the form ẋ = f(x, u), with x ∈ X ⊂ ℝn+ and u ∈ U ⊂ ℝp+ a constant input vector. Let the flow of system Σu starting from x = x0 be denoted by ϕu(t, x0). The flow of the system with u = 0 is denoted by ϕ0(t, x0). The domain X is said to be pm-convex if tx + (1 − t)y ∈ X whenever x, y ∈ X, 0 < t < 1, and x ≪m y [14].

Definition 2: System Σu is said to be a cooperative monotone system with respect to Km if domain X is pm-convex and

(−1)^(mi+mj) ∂fi/∂xj (x, u) ≥ 0, i ≠ j, ∀x ∈ X, ∀u ∈ U. (2)

For convenience, we include Proposition 5.1 from [14] here, stated as a Lemma:

Lemma 1: [14] Let X be pm-convex and f be a continuously differentiable vector field on X such that (2) holds. Let <r denote any one of the relations ≤m, <m, ≪m. If x <r y, t > 0 and ϕu(t, x) and ϕu(t, y) are defined, then ϕu(t, x) <r ϕu(t, y).

A cooperative monotone dynamical system is easily recognized by its graphical structure. Assume that the system Σu is sign-stable (i.e., ∂fi/∂xj(x, u), i ≠ j, keeps the same sign for all x ∈ X) and sign-symmetric (i.e., (∂fi/∂xj)(∂fj/∂xi) ≥ 0 for all x ∈ X). We consider the graph G corresponding to system Σu with n nodes, where an undirected edge connects two nodes i, j if at least one of ∂fi/∂xj or ∂fj/∂xi has a non-zero value somewhere in X. Each edge is assigned a “+” or “−” sign according to the sign of the corresponding partial derivative. Then Σu is cooperative in X if and only if every closed loop in G has an even number of edges with a “−” sign [14].
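The graphical test is mechanical enough to automate. Every loop having an even number of “−” edges is equivalent to the nodes admitting a split into two groups (mi = 0 or 1) with “+” edges inside a group and “−” edges across groups, which a BFS 2-coloring checks; a sketch under that equivalence (function names are ours):

```python
# Sketch of the graphical cooperativity test quoted from [14]: every
# closed loop of the signed interaction graph must have an even number
# of "-" edges. Equivalently, nodes can be 2-colored (m_i in {0,1}) so
# that "+" edges join like colors and "-" edges join unlike colors.
from collections import deque

def orthant_assignment(n, signed_edges):
    """signed_edges: list of (i, j, sign) with sign = +1 or -1.
    Returns a tuple m with m_i in {0,1}, or None if no orthant works."""
    adj = [[] for _ in range(n)]
    for i, j, s in signed_edges:
        if i == j:
            if s < 0:            # a negative self-loop is an odd loop
                return None
            continue             # positive self-loops are harmless
        adj[i].append((j, s))
        adj[j].append((i, s))
    m = [None] * n
    for start in range(n):
        if m[start] is not None:
            continue
        m[start] = 0
        q = deque([start])
        while q:
            i = q.popleft()
            for j, s in adj[i]:
                want = m[i] if s > 0 else 1 - m[i]
                if m[j] is None:
                    m[j] = want
                    q.append(j)
                elif m[j] != want:
                    return None  # a loop with an odd number of "-" edges
    return tuple(m)

# PU.1/GATA1: self-activation on each node, mutual repression between them.
m = orthant_assignment(2, [(0, 0, +1), (1, 1, +1), (0, 1, -1)])
assert m == (0, 1)    # cooperative w.r.t. K_m with m = (0, 1), as in Section V
# A three-node loop with a single "-" edge fails the test:
assert orthant_assignment(3, [(0, 1, +1), (1, 2, +1), (0, 2, -1)]) is None
```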

Consider the extended system:

ẋ = f(x, u), u̇ = 0, (3)

with states x ∈ X ⊂ ℝn+ and u ∈ U ⊂ ℝp+. Since u̇ = 0, the trajectories x(t) of this system with u(0) = u0 are the same as those of the original system Σu0: ẋ = f(x, u0). We state the following result for this extended system, paraphrasing Corollary 3.4 from [26]:

Lemma 2: [26] If system Σu: ẋ = f(x, u) is cooperative with respect to Km for each fixed u, then the extended system (3) is cooperative with respect to Km × Km′, where m′ = (m′1, m′2, …, m′p) and m′k ∈ {0,1}, if and only if ∀i ∈ {1, …, n}, ∀k ∈ {1, …, p}, ∀x ∈ X, ∀u ∈ U:

(−1)^(mi+m′k) ∂fi/∂uk (x, u) ≥ 0.

Corollary 1: Consider the case u = (v1, .., vn, w1, .., wn) ∈ U ⊂ ℝ2n+, where f(x, u) takes the form f = (f1(x, v1, w1), .., fi(x, vi, wi), .., fn(x, vn, wn)), i.e., each state xi is given an input vi (responsible for positive stimulation, with ∂fi(x, vi, wi)/∂vi ≥ 0 for x ∈ X) and an input wi (responsible for negative stimulation, with ∂fi(x, vi, wi)/∂wi ≤ 0 for all x ∈ X). The extended system is then cooperative with respect to Km × Km × −Km. We denote the corresponding partial order by ≤m×m×−m.

B. Problem definition: Reprogrammability of multi-stable systems

We consider a dynamical system Σu of the form:

ẋ = f(x, u), (4)

where the state x ∈ X ⊂ ℝn+ and u ∈ U ⊂ ℝ2n+ is a constant input vector. Let S be the set of stable steady states of the system Σ0: ẋ = f(x, 0). Further, we let Ru(S) denote the region of attraction of a stable steady state S of system Σu. The region of attraction Ru(S) is the set of all states x such that limt→∞ ϕu(t, x) = S [27].

We define two concepts of reprogrammability. For system Σ0 to be strongly reprogrammable to a steady state S* ∈ S, there must exist an input u such that a trajectory of Σu starting from any initial condition converges inside the region of attraction (defined with respect to Σ0) of S*. When the input is removed, the system’s trajectory then converges to the desired steady state S*. We say that the system Σ0 is weakly reprogrammable to a steady state S′ from another steady state S if there exists an input u such that a trajectory of Σu starting from S converges to the region of attraction of S′, defined with respect to Σ0. These two concepts are formalized below in Definitions 3 and 4.

Definition 3: We say that system Σ0 is strongly reprogrammable to a steady state S ∈ S provided there is a constant input u ∈ U such that for system Σu, for all x0 ∈ ℝn+, the omega-limit set ωu(x0) satisfies ωu(x0) ⊆ R0(S).

Definition 4: We say that system Σ0 is weakly reprogrammable to a steady state S′ ∈ S from a steady state S ∈ S, with S ≠ S′, provided there exists a constant input u ∈ U such that the omega-limit set ωu(S) satisfies ωu(S) ⊆ R0(S′).

To state our results about reprogrammability for cooperative, monotone dynamical systems, we make the following assumptions on Σu.

Assumption 1: The function f(x, u) is continuously differentiable (C1). The trajectories x(t) of Σu are bounded for any constant u and for all t ≥ 0.

Assumption 2: The system Σu is a monotone cooperative system with respect to some Km (Definition 2).

Assumption 3: The input u = (v1, .., vn, w1, .., wn) ∈ U ⊂ ℝ2n+ and the function f(x, u) takes the form f(x, u) = (f1(x, v1, w1), .., fi(x, vi, wi), .., fn(x, vn, wn)), i.e., each state xi takes constant inputs vi and wi, and further, ∂fi(x, u)/∂vi ≥ 0 and ∂fi(x, u)/∂wi ≤ 0.

Assumption 4: The system Σ0 takes the form fi(x, 0) = Hi(x) − γixi, where Hi(x) ∈ C1, 0 < Hi(x) ≤ HiM, ∀x ∈ X, and γi is a positive constant. Inputs to the system enter as follows: fi(x, vi, wi) = Hi(x) − γixi + vi − wixi, where vi, wi ≥ 0. Further, the domain X is such that ∏i=1..n [0, HiM/γi] ⊆ X.

Note that when system Σu satisfies Assumption 4, it also satisfies Assumption 1. Since Hi(x) ∈ C1, the function fi(x, ui) ∈ C1, and therefore f(x, u) is C1. Further, when xi > (HiM + vi)/(γi + wi), we have ẋi < 0. Thus xi(t) ≤ max((HiM + vi)/(γi + wi), xi(0)) for all t ≥ 0, and the trajectories of the system Σu are bounded for any given u = (v1, .., vn, w1, .., wn) ∈ ℝ2n+ and for all t ≥ 0.

Assumption 5: For u close to 0, the steady states of the system Σu are locally unique and continuous around u = 0.

IV. Results

This section states results about the reprogrammability of steady states in cooperative monotone dynamical systems. The question we wish to address is: for each steady state in S, what inputs, if any, make the system strongly reprogrammable to that steady state, and what inputs, if any, make a given steady state weakly reprogrammable to another given steady state. We first show that the set of steady states of Σ0, S, has a minimum and a maximum. We then present theorems that provide a strategy for selecting the inputs required to strongly reprogram system Σ0 to these minimal and maximal steady states of Σ0. Further, our results rule out certain key input types to strongly reprogram system Σ0 to other intermediate steady states. Based on this set of results, possible strategies are proposed to reprogram system Σ0 to intermediate steady states. To present our results, we first define the following exhaustive list of mutually exclusive input types:

  1. Input of type 1: An input of type 1 satisfies the following: for all i ∈ {1, …, n}, if mi = 0 then vi ≥ 0 and wi = 0 (positive or no stimulation), and if mi = 1 then vi = 0 and wi ≥ 0 (negative or no stimulation). Further, at least one node has a nonzero input.

  2. Input of type 2: An input of type 2 satisfies the following: for all i ∈ {1, …, n}, if mi = 1 then vi ≥ 0 and wi = 0 (positive or no stimulation), and if mi = 0 then vi = 0 and wi ≥ 0 (negative or no stimulation). Further, at least one node has a nonzero input.

  3. Input of type 3: An input such that, there exists at least one i ∈ {1, …, n} such that if mi = 0, vi ≥ 0 and wi = 0 and if mi = 1, vi = 0 and wi ≥ 0 (and input not identically zero everywhere); and at least one j ∈ {1, …, n} such that if mj = 0, vj = 0 and wj ≥ 0 and if mj = 1, vj ≥ 0 and wj = 0 (and input not identically zero everywhere).

Lemma 3: Under Assumptions 1, 2 and 4, the set of steady states S of system Σ0 has a minimum and a maximum with respect to the partial order ≤m.

Proof: We first prove that the set S has a maximum with respect to ≤m. Consider x* such that, ∀i ∈ {1, .., n}, x*i = (1 − mi) HiM/γi. Under Assumption 4, any equilibrium S of Σ0 must satisfy fi(S, 0) = Hi(S) − γiSi = 0, so Si = Hi(S)/γi. Under Assumption 4, 0 < Hi(x) ≤ HiM ∀x ∈ X. Thus 0 < Hi(S) ≤ HiM, and therefore 0 < Si ≤ HiM/γi. Hence x*i ≥ Si when mi = 0 and x*i = 0 ≤ Si when mi = 1, implying that x* ≥m S for all S ∈ S. Then, by Assumption 2 and Lemma 1, ω0(x*) ≥m S for all S ∈ S. Further, when mi = 0, fi(x*, 0) = Hi(x*) − γix*i = Hi(x*) − HiM ≤ 0, and when mi = 1, fi(x*, 0) = Hi(x*) − γix*i = Hi(x*) > 0; thus f(x*, 0) ≤m 0. Then, for a cooperative monotone dynamical system with bounded trajectories (Assumption 1), by Proposition 2.1 from [14], ω0(x*) is a steady state, so ω0(x*) ∈ S. Thus ω0(x*) = max(S), and therefore S has a maximum. To prove that the set has a minimum, let x** be such that x**i = mi HiM/γi. Then x** ≤m S for all S ∈ S, and f(x**, 0) ≥m 0. By reasoning similar to the above, ω0(x**) ≤m S for all S ∈ S, and ω0(x**) ∈ S. Thus S has a minimum. ■

Remark: Recall that x ≤m y implies that for components where mi = 0, xi ≤ yi, and for components where mi = 1, xi ≥ yi. Thus, the maximum S* with respect to the partial order ≤m is such that, for components where mi = 0, S*i = maxS∈S(Si), and for components where mi = 1, S*i = minS∈S(Si). Here Si denotes the ith component of the steady state S. Similarly, the minimum S** with respect to ≤m is such that, for components where mi = 0, S**i = minS∈S(Si), and for components where mi = 1, S**i = maxS∈S(Si).
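The componentwise recipe in the Remark is easy to sketch in code. The steady-state values below are illustrative placeholders (not computed from the model), chosen so that the three states are ordered like S1 <m S0 <m S2 for m = (0, 1):

```python
# Sketch of the Remark: w.r.t. <=_m, the maximum of a set of steady
# states takes the componentwise max where m_i = 0 and the componentwise
# min where m_i = 1 (and conversely for the minimum). Steady-state
# values are hypothetical placeholders.

def extremum_m(states, m, which="max"):
    pick = {("max", 0): max, ("max", 1): min,
            ("min", 0): min, ("min", 1): max}
    return tuple(pick[(which, mi)](s[i] for s in states)
                 for i, mi in enumerate(m))

# Placeholder tristable set: low-x1/high-x2 (S1), intermediate (S0),
# high-x1/low-x2 (S2), with m = (0, 1) as for the PU.1/GATA1 system.
S = [(0.2, 1.8), (1.0, 1.0), (1.8, 0.2)]
assert extremum_m(S, (0, 1), "max") == (1.8, 0.2)   # plays the role of S2
assert extremum_m(S, (0, 1), "min") == (0.2, 1.8)   # plays the role of S1
```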

In the next two results, we show that inputs of type 1 and 2 can never make system Σ0 strongly reprogrammable to an intermediate steady state.

Theorem 1: Under Assumptions 1, 2 and 3, for any input of type 1, system Σ0 is not strongly reprogrammable to any steady state S ≠ max(S).

Proof: Consider the extended system (3): ẋ = f(x, u), u̇ = 0. Notice that for any input u0 of type 1, u0 ≥m×−m 0. Hence the initial conditions are ordered: (max(S), 0) ≤m×m×−m (max(S), u0). Since (max(S), 0) is a steady state of the extended system, by the cooperativity of the extended system (Corollary 1) and Lemma 1, we have that max(S) ≤m ϕu0(t, max(S)). Hence ωu0(max(S)) ≥m max(S).

We now consider the system Σ0: ẋ = f(x, 0), starting at an initial condition z ≥m max(S). By the cooperativity of Σ0 and Lemma 1, we have that ω0(z) ≥m max(S). Since ω0(z) ∈ S, we have that ω0(z) = max(S). Thus, for any z ≥m max(S), z ∈ R0(max(S)), and hence ωu0(max(S)) ⊆ R0(max(S)). That is, for the system Σu with an input of type 1, any trajectory starting at max(S) converges to a state in the region of attraction (for Σ0) of max(S). Thus, Σ0 is not strongly reprogrammable to any steady state other than max(S), since for any S ≠ max(S) there exists an x0 (namely x0 = max(S)) such that ωu(x0) ⊄ R0(S). ■

Theorem 2: Under Assumptions 1, 2 and 3, for any input of type 2, system Σ0 is not strongly reprogrammable to any steady state S ≠ min(S).

Proof: Note that any input u0 of type 2 satisfies u0 ≤m×−m 0. Then the following initial conditions are ordered: (min(S), 0) ≥m×m×−m (min(S), u0). The rest of the proof is analogous to that of Theorem 1. ■

Lemma 4: [17] For system Σu satisfying Assumption 4, consider the dynamics of a node with positive stimulation, ẋi = Hi(x) + vi − γixi, with vi ≥ 2HiM. Then lim inf t→∞ xi(t) ≥ maxS∈S(Si), independent of the initial condition.

Lemma 5: For system Σu satisfying Assumption 4, consider the dynamics of a node with negative stimulation, ẋi = Hi(x) − (γi + wi)xi, with wi ≥ HiM/minS∈S(Si) − γi. Then lim sup t→∞ xi(t) ≤ minS∈S(Si), independent of the initial condition.

Proof: Consider the two systems żi = −(γi + wi)zi and dx̃i/dt = Hi(x̃) − (γi + wi)x̃i. The second system can be viewed as a perturbed version of the first, with Hi(x̃) the disturbance, globally bounded by HiM. We can then apply the robustness result from contraction theory [28] to obtain lim sup t→∞ |x̃i(t) − 0| ≤ HiM/(γi + wi). Since x̃i(t) ≥ 0 and wi ≥ HiM/minS∈S(Si) − γi, we have that lim sup t→∞ x̃i(t) ≤ minS∈S(Si). Note that under Assumption 4, Hi(x) > 0, so Si ≠ 0 for any i and any S, since fi(x, 0) = Hi(x) − γixi = Hi(x) ≠ 0 when xi = 0. Thus minS∈S(Si) ≠ 0. ■
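The same bound can also be obtained directly from the variation-of-constants formula for the scalar node dynamics, without invoking contraction theory (a sketch, using only the quantities of Assumption 4):

```latex
\tilde{x}_i(t) = e^{-(\gamma_i+w_i)t}\,\tilde{x}_i(0)
  + \int_0^t e^{-(\gamma_i+w_i)(t-s)}\, H_i\big(\tilde{x}(s)\big)\, ds
  \;\le\; e^{-(\gamma_i+w_i)t}\,\tilde{x}_i(0)
  + \frac{H_i^M}{\gamma_i+w_i}\Big(1 - e^{-(\gamma_i+w_i)t}\Big),
```

so letting t → ∞ gives lim sup t→∞ x̃i(t) ≤ HiM/(γi + wi), which is at most minS∈S(Si) exactly when wi ≥ HiM/minS∈S(Si) − γi.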

In Theorem 3, we show that large enough inputs of type 1 can make Σ0 strongly reprogrammable to max(S), and inputs of type 2 can make Σ0 strongly reprogrammable to min(S).

Theorem 3: Under Assumptions 2, 3 and 4, a sufficiently large input of type 1 ensures that Σ0 is strongly reprogrammable to the steady state max(S), and a sufficiently large input of type 2 ensures that Σ0 is strongly reprogrammable to the steady state min(S).

Proof: Consider ū = (v̄1, .., v̄n, w̄1, .., w̄n) such that v̄i = 2(1 − mi)HiM and w̄i = mi(HiM/minS∈S(Si) − γi). Then, using Lemma 4, we have that for mi = 0, lim inf t→∞ xi(t) ≥ maxS∈S(Si) for all xi(0). Using Lemma 5, we have that for mi = 1, lim sup t→∞ xi(t) ≤ minS∈S(Si) for all xi(0). Note that if x, y are such that xi ≥ yi for components where mi = 0 and xi ≤ yi for components where mi = 1, then x ≥m y. Thus ωu(x0) ≥m max(S) for all x0 and all u ≥ ū (element-wise) of type 1. By monotonicity, if z ≥m max(S), then ω0(z) = max(S). Thus ωu(x0) ⊆ R0(max(S)) ∀x0, and Σ0 is strongly reprogrammable to max(S).

Similarly, consider ū = (v̄1, .., v̄n, w̄1, .., w̄n) such that v̄i = 2miHiM and w̄i = (1 − mi)(HiM/minS∈S(Si) − γi). Then, using Lemma 4, for mi = 1, lim inf t→∞ xi(t) ≥ maxS∈S(Si) for all xi(0), and using Lemma 5, for mi = 0, lim sup t→∞ xi(t) ≤ minS∈S(Si) for all xi(0). By the same reasoning as above, ωu(x0) ≤m min(S) for all x0 and all u ≥ ū (element-wise) of type 2. Under Lemma 1, if z ≤m min(S), then ω0(z) = min(S). Thus ωu(x0) ⊆ R0(min(S)) ∀x0, and Σ0 is strongly reprogrammable to min(S). ■
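The effect of a large type-1 input on model (1) can be checked numerically. The sketch below (ours, not the authors' code) uses the input values quoted in the Fig. 2A caption (v1 = 10 nM/s, w2 = 28 s−1, v2 = w1 = 0), i.e., positive stimulation on X1 (m1 = 0) and negative stimulation on X2 (m2 = 1), and verifies that two widely separated initial conditions reach the same high-x1/low-x2 steady state:

```python
# Type-1 input applied to model (1) with the symmetric parameter set of
# Fig. 1 (H_i^M = beta_i + alpha_i = 10 nM/s). Input values are taken
# from the Fig. 2A caption. Illustrative sketch only.

def f(x1, x2, v1=0.0, w1=0.0, v2=0.0, w2=0.0):
    h1 = 5.0 + 5.0 * x1**2 / (1 + x1**2 + (x2/2)**2)
    h2 = 5.0 + 5.0 * x2**2 / (1 + x2**2 + (x1/2)**2)
    return h1 + v1 - (5.0 + w1) * x1, h2 + v2 - (5.0 + w2) * x2

def simulate(x0, u=(0.0, 0.0, 0.0, 0.0), T=50.0, dt=1e-3):
    x1, x2 = x0
    for _ in range(int(T / dt)):
        d1, d2 = f(x1, x2, *u)
        x1, x2 = x1 + dt * d1, x2 + dt * d2
    return x1, x2

u1 = (10.0, 0.0, 0.0, 28.0)      # type-1: v1 > 0 (m1 = 0), w2 > 0 (m2 = 1)
a = simulate((0.2, 1.8), u1)     # start near the erythroid side
b = simulate((1.8, 0.2), u1)     # start near the myeloid side
# Both trajectories reach the same high-x1/low-x2 steady state, in the
# region of attraction of max(S) = S2.
assert abs(a[0] - b[0]) < 1e-6 and abs(a[1] - b[1]) < 1e-6
assert a[0] > 2.5 and a[1] < 0.5
```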

Finally, in Theorem 4, we analyze the weak reprogrammability of Σ0 to intermediate steady states using inputs of type 1 and type 2.

Theorem 4: Consider two steady states S, S′ ∈ S such that S <m S′. Let system Σu satisfy Assumptions 2, 3 and 4. Then the following is true:

  1. Σ0 is not weakly reprogrammable to S′ from S for any input of type 2,

  2. There exist u′, u″ ∈ ℝ2n+ such that for an input u of type 1 with u ≤ u′ or u ≥ u″, Σ0 is not weakly reprogrammable to S′ from S if S′ ≠ max(S),

  3. Σ0 is not weakly reprogrammable to S from S′ for any input of type 1, and

  4. There exist u‴, u⁗ ∈ ℝ2n+ such that for an input u of type 2 with u ≤ u‴ or u ≥ u⁗, Σ0 is not weakly reprogrammable to S from S′ if S ≠ min(S).

Proof: (a) Consider the extended system (3): ẋ = f(x, u), u̇ = 0, with an input of type 2. By Corollary 1, the extended system is a monotone cooperative system with respect to Km × Km × −Km. Further, any input u of type 2 is such that u ≤m×−m 0. Then (S, 0) ≥m×m×−m (S, u) for all u of type 2. Thus, under Lemma 1, ϕ0(t, S) = S ≥m ϕu(t, S) for all u of type 2, so ωu(S) ≤m S. Since any z ≤m S satisfies ω0(z) ≤m S <m S′, no such z lies in R0(S′), and therefore ωu(S) ⊄ R0(S′) for all u of type 2. Thus, for an input of type 2, Σ0 is not weakly reprogrammable to S′ from S.

(b) Consider Σu with u close to 0. Under Assumption 5, x(u) is a locally unique solution to f(x, u) = 0 with x(0) = S; furthermore, x(u) is a continuous function of u. Therefore, for u sufficiently close to 0, x(u) is close to S. We can thus pick u small enough that x(u) is in the region of attraction of S. Therefore, there is an input u′ sufficiently close to zero such that if u ≤ u′, the system is not reprogrammed from S to S′.

The fact that there exists a u″ sufficiently large that, if u ≥ u″, the system is not reprogrammed to S′ but instead to max(S) follows from Theorem 3.

(c), (d): The proof is similar to that for (a), (b). ■

Theorem 4 shows that inputs of type 1 and 2 are not well suited to achieve even weak reprogrammability to intermediate steady states. We note that if u′ ≥ u″ or u‴ ≥ u⁗, there is no input of type 1 or 2 that allows weak reprogrammability to an intermediate steady state. Further, even when this is not the case, u′, u″, u‴ and u⁗ depend on the parameters of the system. The success of reprogramming using such an input would therefore be highly susceptible to uncertainty in both parameters and initial states. Reprogramming to an intermediate steady state may therefore be more promising using inputs of type 3. We note, however, that the number of possible type 3 inputs grows combinatorially with the dimension of the system. We leave this question for future work.

To summarize, we provide an intuitive explanation for why certain input types are better suited to reprogram the system to specific steady states. Inputs of type 1 are the “maximizing” inputs: for nodes where the partial order ≤m compares components as xi ≤ yi (mi = 0), we apply a positive stimulation (attempting to increase the concentration of that node), and for nodes where it compares components as xi ≥ yi (mi = 1), we apply a negative stimulation. Both push the state the system settles into upward, in the sense of the partial order ≤m, relative to the initial state. Thus, this “maximizing” input, when large enough, reprograms the system to the maximum steady state. Similarly, inputs of type 2 are “minimizing” inputs, and reprogram the system to the minimum steady state. Inputs of type 3, on the other hand, are “balancing” inputs: they result in a state that is unordered (with respect to ≤m) relative to the initial state, and thus may work to reprogram the system to intermediate steady states. In the next section, we test these results on the PU.1-GATA1 network.

V. Application of results to the motivating example

In this section, we return to the motivating example of Section II. We apply the results of Section IV and discuss strategies for reprogramming the system to the three different steady states S0, S1 and S2.

We first note that the PU.1/GATA1 network satisfies the graphical test for being a cooperative monotone dynamical system: it is sign-stable, sign-symmetric, and every closed loop in the interaction graph (in this case there are none) has an even number of edges with a “−” sign. We further note that, since ∂f1/∂x2 ≤ 0 and ∂f2/∂x1 ≤ 0, the system is cooperative with respect to the cone Km with m = (0, 1). With respect to this cone, Lemma 3 holds for the set S = {S0, S1, S2} of stable steady states of the system, and we have min(S) = S1 and max(S) = S2, where min and max are defined with respect to the partial order ≤m.

Under Theorem 1, an input of type 1 for this system, which consists of either a positive stimulation on node 1 (X1) or a negative stimulation on node 2 (X2) or both, cannot strongly reprogram system (1) to any steady state besides S2. This is because an input of type 1 causes x1 to increase and/or x2 to decrease, and always results in a stable steady state that lies in the region of attraction of S2. Thus, for any input of type 1, there always exists some initial condition such that the system is reprogrammed to S2. Thus, an input of type 1 cannot strongly reprogram the system to any steady state besides S2. Similarly, under Theorem 2, an input of type 2, with either a negative stimulation on X1, a positive stimulation on X2, or both, cannot strongly reprogram system (1) to any steady state besides S1.

Under Theorem 3, a sufficiently large input of type 1 makes system (1) strongly reprogrammable to S2, and a sufficiently large input of type 2 makes it strongly reprogrammable to S1. We use nullcline analysis to validate that this is indeed the case. We note that nullcline analysis is parameter dependent: any result obtained from the nullclines is specific to the parameter set used, whereas the results of Section IV are parameter independent and therefore general, as they rely only on the graph structure of the system. In Fig. 2A, we show the nullclines for the system with a large input of type 1; there is one globally asymptotically stable steady state, and trajectories starting from any initial condition converge to it. Since this steady state has a very high x1 and a very low x2, it is in the region of attraction of S2 (compare to Fig. 1B). Removal of this input would then cause the system’s trajectory to converge to S2. Similarly, a large input of type 2 results in the nullclines shown in Fig. 2B. The resulting system has a globally asymptotically stable steady state in the region of attraction of S1, making the system strongly reprogrammable to S1.

Fig. 2:

Nullclines of the system with different inputs. The new steady states are denoted by green stars. (A) Large input of type 1: the resulting globally asymptotically stable steady state has very low levels of x2 (GATA1) and very high levels of x1 (PU.1). The inputs are v1 = 10 nM/s, w1 = 0, v2 = 0, and w2 = 28 s−1. (B) Large input of type 2: the resulting globally asymptotically stable steady state has very high levels of x2 and very low levels of x1. The inputs are v1 = 0, w1 = 28 s−1, v2 = 10 nM/s, and w2 = 0. (C) Large input of type 3: the resulting globally asymptotically stable steady state has very high (and comparable) levels of x2 and x1. The inputs are v1 = 10 nM/s, w1 = 0, v2 = 10 nM/s, and w2 = 0. (D) Large input of type 3: the resulting globally asymptotically stable steady state has very low levels of x2 and x1. The inputs are v1 = 0, w1 = 28 s−1, v2 = 0, w2 = 28 s−1.

Finally, we look into reprogramming the system to the intermediate steady state S0. By Theorems 1 and 2, inputs of type 1 and 2 cannot strongly reprogram the system to S0. Further, while Theorem 4 suggests that it may be possible to weakly reprogram the system from S1 to S0 for a specific range of inputs of type 1, and from S2 to S0 for a specific range of inputs of type 2, such ranges, if they exist, depend on the parameters of the system.

Therefore, we instead investigate strong reprogrammability to S0 using inputs of type 3. There are two possibilities for an input of type 3: either a positive stimulation on both nodes, or a negative stimulation on both nodes. We apply large inputs of type 3 to the system, and the resulting nullclines are shown in Fig. 2C (both nodes receiving positive stimulation) and in Fig. 2D (both nodes receiving negative stimulation). Large positive stimulation results in a globally asymptotically stable steady state with x1 and x2 both large (and comparable). This steady state is in the region of attraction of S0 (compare to Fig. 1B), making the system strongly reprogrammable to S0. However, if the difference between v1 and v2 were large (or the system were not symmetric), this input could still result in a steady state in the region of attraction of S1 or S2. On the other hand, a large negative input on both nodes, even with unequal magnitudes, results in a steady state near (0, 0) as long as both inputs are sufficiently large. This steady state is also in the region of attraction of S0 (compare to Fig. 1B), making the system strongly reprogrammable to S0. Thus, both kinds of type-3 inputs can make the system strongly reprogrammable to S0.
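The two type-3 experiments can be sketched with the same illustrative cross-antagonistic toy model used above (again, this is NOT the paper's system (1); the model form and all parameters are assumptions, and the toy model is symmetric, so its intermediate steady state lies on the diagonal x1 = x2):

```python
# Illustrative sketch only: toy cross-antagonistic model (assumed form
# and parameters), used to mimic the type-3 experiments of Fig. 2C-D.

def step(x1, x2, dt, v1, w1, v2, w2, a=1.0, b=1.0, theta=0.5, n=4, k=1.0):
    t = theta ** n
    d1 = a * x1**n / (t + x1**n) + b * t / (t + x2**n) - (k + w1) * x1 + v1
    d2 = a * x2**n / (t + x2**n) + b * t / (t + x1**n) - (k + w2) * x2 + v2
    return x1 + dt * d1, x2 + dt * d2

def settle(x1, x2, v1=0.0, w1=0.0, v2=0.0, w2=0.0, T=200.0, dt=0.01):
    """Forward-Euler integration to (approximate) steady state."""
    for _ in range(int(T / dt)):
        x1, x2 = step(x1, x2, dt, v1, w1, v2, w2)
    return x1, x2

# Positive stimulation on both nodes: even starting near the high-x1
# state, the forced steady state has high, comparable x1 and x2 ...
hp = settle(2.0, 0.0, v1=5.0, v2=5.0)
assert hp[0] > 3.0 and abs(hp[0] - hp[1]) < 0.1
# ... and once the input is removed, the state relaxes to the
# intermediate steady state (here at (1, 1) by construction).
s0 = settle(*hp)
assert abs(s0[0] - 1.0) < 0.05 and abs(s0[1] - 1.0) < 0.05

# Negative stimulation on both nodes drives the state near the origin,
# from which the unforced system again relaxes to the intermediate state.
low = settle(2.0, 0.0, w1=5.0, w2=5.0)
assert low[0] < 0.3 and low[1] < 0.3
s0b = settle(*low)
assert abs(s0b[0] - 1.0) < 0.05 and abs(s0b[1] - 1.0) < 0.05
```

As the text notes, the positive-stimulation route relies on the two inputs being comparable (and on the system's symmetry), whereas the negative-stimulation route tolerates unequal input magnitudes, since any sufficiently strong degradation on both nodes pushes the state into a neighborhood of the origin.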

VI. Conclusions and Discussion

In this work, we have shown that inputs of type 1 can strongly reprogram a cooperative monotone dynamical system to a steady state if and only if that steady state is the maximal steady state. Similarly, we showed that inputs of type 2 achieve strong reprogrammability if and only if the desired steady state is the minimal steady state. Since strong reprogrammability implies that the system's state converges to the desired steady state independently of initial conditions, in practice this means that a population of cells (each at a different point in the state space) could be given the same input and reprogrammed to the desired state (cell type). Further, we showed that a range of inputs of type 1 (or type 2) that weakly reprograms the system from a steady state S to a steady state S′ with S <m S′ (respectively S′ <m S) may or may not exist when S′ is not the maximal (respectively minimal) steady state. For the PU.1/GATA1 system, we therefore considered inputs of type 3, which successfully reprogrammed the system strongly to its intermediate steady state S0.

We would like to highlight the core network responsible for pluripotency and self-renewal of embryonic stem cells [10]. This network is cooperative and can be tristable, with the intermediate steady state corresponding to the pluripotent state [17]. The most common strategy for reprogramming this system to pluripotency is the over-expression of key factors [5], which is an input of type 1. As discussed here, such a strategy is fragile: if a range of type-1 inputs that can (weakly) reprogram the system to pluripotency does exist, it is parameter-dependent and possibly hard to achieve experimentally. This could be contributing to the low efficiency of current reprogramming strategies. Based on our work, we recommend investigating inputs of type 3 to reprogram this system to pluripotency, which we leave for future work.

Like the pluripotency network and the PU.1/GATA1 network, many GRNs controlling cell-fate decisions are monotone (or decomposable into an interconnection of monotone systems) [15], [16]. This work therefore provides a parameter-independent strategy to select inputs that could achieve more efficient cellular reprogramming in cooperative monotone systems.

Acknowledgments

This work was supported in part by NIH Grant number 1-R01-EB024591-01.

References

  • [1]. Furusawa C and Kaneko K. A dynamical-systems view of stem cell biology. Science, 338, 2012.
  • [2]. Wang J, Zhang K, Xu L, and Wang E. Quantifying the Waddington landscape and biological paths for development and differentiation. Proc. Natl. Acad. Sci. USA, 108, 2011.
  • [3]. Waddington CH. The Strategy of the Genes; a Discussion of Some Aspects of Theoretical Biology. Allen & Unwin, London, 1957.
  • [4]. Takahashi K and Yamanaka S. Induction of pluripotent stem cells from mouse embryonic and adult fibroblast cultures by defined factors. Cell, 126(4):663–676, 2006.
  • [5]. Schlaeger T and Daheron L. A comparison of non-integrating reprogramming methods. Nat. Biotech., 33(1):58–63, 2015.
  • [6]. Nerlov C and Graf T. PU.1 induces myeloid lineage commitment in multipotent hematopoietic progenitors. Genes & Development, 12(15):2403–2412, 1998.
  • [7]. Graf T and Enver T. Forcing cells to change lineages. Nature, 462(7273):587, 2009.
  • [8]. Buganim Y, Faddah DA, and Jaenisch R. Mechanisms and models of somatic cell reprogramming. Nature Reviews, 14, 2013.
  • [9]. Morris S, Cahan P, Li H, Zhao A, San Roman A, Shivdasani R, Collins J, and Daley G. Dissecting engineered cell types and enhancing cell fate conversion via CellNet. Cell, 158(4):889–902, 2014.
  • [10]. Boyer L, Lee T, Cole M, Johnstone S, Levine S, Zucker J, Guenther M, Kumar R, Murray H, Jenner R, Gifford D, Melton D, Jaenisch R, and Young R. Core transcriptional regulatory circuitry in human embryonic stem cells. Cell, 122(6):947–956, 2005.
  • [11]. Huang S. Reprogramming cell fates: reconciling rarity with robustness. BioEssays, 31:546–560, 2009.
  • [12]. Huang S, Guo Y, May G, and Enver T. Bifurcation dynamics in lineage-commitment in bipotent progenitor cells. Developmental Biology, 305(2):695–713, 2007.
  • [13]. Kim J, Chu J, Shen X, Wang J, and Orkin S. An extended transcriptional network for pluripotency of embryonic stem cells. Cell, 132(6):1049–1061, 2008.
  • [14]. Smith HL. Monotone Dynamical Systems: An Introduction to the Theory of Competitive and Cooperative Systems. Number 41. American Mathematical Soc., 2008.
  • [15]. Enciso GA, Smith HL, and Sontag ED. Non-monotone systems decomposable into monotone systems with negative feedback. J. of Differential Equations, 224:205–227, 2006.
  • [16]. Sontag ED. Monotone and near-monotone biochemical networks. Systems and Synthetic Biology, 1:59–87, 2007.
  • [17]. Del Vecchio D, Abdallah H, Qian Y, and Collins J. A blueprint for a synthetic genetic feedback controller to reprogram cell fate. Cell Systems, 4(1):109–120, 2017.
  • [18]. Angeli D and Sontag ED. Multi-stability in monotone input/output systems. Systems Control Lett., 51:185–202, 2004.
  • [19]. Angeli D, Ferrell JE, and Sontag ED. Detection of multistability, bifurcations, and hysteresis in a large class of biological positive-feedback systems. Proc. Natl. Acad. Sci. USA, 101:1822–1827, 2004.
  • [20]. Nikolaev EV and Sontag ED. Quorum-sensing synchronization of synthetic toggle switches: a design based on monotone dynamical systems theory. PLoS Comput. Biol., 12, 2015.
  • [21]. Sootla A, Oyarzún D, Angeli D, and Stan G. Shaping pulses to control bistable systems: analysis, computation and counterexamples. Automatica, 63:254–264, 2016.
  • [22]. Wiggins S. Introduction to Applied Nonlinear Dynamical Systems and Chaos. Springer-Verlag, 2003.
  • [23]. Wang L, Su R, Huang Z, Wang X, Wang W, Grebogi C, and Lai Y. A geometrical approach to control and controllability of nonlinear dynamical networks. Nature Communications, 7:11323, 2016.
  • [24]. Crespo I, Perumal T, Jurkowski W, and Del Sol A. Detecting cellular reprogramming determinants by differential stability analysis of gene regulatory networks. BMC Systems Biology, 7(1):140, 2013.
  • [25]. Duff C, Smith-Miles K, Lopes L, and Tian T. Mathematical modelling of stem cell differentiation: the PU.1-GATA-1 interaction. Journal of Mathematical Biology, 64(3):449–468, 2012.
  • [26]. Angeli D and Sontag ED. Monotone control systems. IEEE Transactions on Automatic Control, 48(10):1684–1698, 2003.
  • [27]. Khalil H. Nonlinear Systems, volume 3. Prentice Hall, 2002.
  • [28]. Del Vecchio D and Slotine J. A contraction theory approach to singularly perturbed systems. IEEE Transactions on Automatic Control, 58(3):752–757, 2013.
