bioRxiv
[Preprint]. 2025 May 15:2025.05.11.653320. [Version 1] doi: 10.1101/2025.05.11.653320

The Dynamics of Inducible Genetic Circuits*

Zitao Yang 1, Rebecca J Rousseau 1, Sara D Mahdavi 2, Hernan G Garcia 3,4,5,6,7, Rob Phillips 1,2
PMCID: PMC12132222  PMID: 40463218

Abstract

Genes are connected in complex networks of interactions where often the product of one gene is a transcription factor that alters the expression of another. Many of these networks are based on a few fundamental motifs leading to switches and oscillators of various kinds. And yet, there is more to the story than which transcription factors control these various circuits. These transcription factors are often themselves under the control of effector molecules that bind them and alter their level of activity. Traditionally, much beautiful work has shown how to think about the stability of the different states achieved by these fundamental regulatory architectures by examining how parameters such as transcription rates, degradation rates and dissociation constants tune the circuit, giving rise to behavior such as bistability. However, such studies explore dynamics without asking how these quantities are altered in real time in living cells as opposed to at the fingertips of the synthetic biologist’s pipette or on the computational biologist’s computer screen. In this paper, we make a departure from the conventional dynamical systems view of these regulatory motifs by using statistical mechanical models to focus on endogenous signaling knobs such as effector concentrations rather than on the convenient but more experimentally remote knobs such as dissociation constants, transcription rates and degradation rates that are often considered. We also contrast the traditional use of Hill functions to describe transcription factor binding with more detailed thermodynamic models. This approach provides insights into how biological parameters are tuned to control the stability of regulatory motifs in living cells, sometimes revealing quite a different picture than is found by using Hill functions and tuning circuit parameters by hand.

I. INTRODUCTION

The first half of the twentieth century was a time in which many of the great mysteries of nineteenth century physics were resolved [1–4]. Though perhaps less well known, the study of living organisms had enormous mysteries of its own including the molecular and cellular basis of the laws of heredity [5]. One key puzzle centered on a phenomenon known at the time as “enzymatic adaptation” [6, 7]. Those words refer to the apparent induction of enzyme action as a result of changes in the metabolic or physiological state of cells, such as those that occur upon shifting from one carbon source to another [7]. In the nineteenth century, Louis Pasteur had noted that yeasts behave differently under different growth conditions. Frédéric Dienert followed up on those observations with great foresight by doing experiments that quantitatively characterized the phenomenon [7]. Jacques Monod made these studies a fine art through the use of bacterial growth curves “as a method for the study of bacterial physiology and biochemistry” [8] (see Figure 9 of Monod’s paper for a compelling example of the induction phenomenon).

Figure 9:

Bifurcation diagram for auto-activation system with rate constants r0=0.1, r1=1, r2=20, and cooperativity ω=10. This curve plots all stable (filled nodes) and unstable (unfilled nodes) fixed points, demonstrating a shift at intermediate effector concentrations from one to three fixed points. Each black dashed line denotes a specific effector concentration, pointing to a plot of the corresponding production (blue) and degradation (orange) rates as a function of activator concentration. Curve intersections mark the stable (filled nodes) and unstable (unfilled nodes) fixed points found at the given effector concentration.

As a result of studies like these, in the early 1960s Jacob and Monod shook the world of biology by showing that there are genes whose job it is to control other genes [5, 9], culminating in their repressor-operator model which showed how proteins could bind to DNA and repress the expression of nearby genes [5, 9–11]. Their original work was extended and amplified through the discovery of architectures that were mediated not only by repression, but by activation as well [12], and even by combinations of activators and repressors [13]. In the late 1960s, the vision was considerably broadened through the generalization of these ideas from their first context in bacteria to the much broader set of regulatory problems associated with animal development such as those schematized in Fig. 1 [14]. The study of the lysis-lysogeny decision in bacteriophage lambda became a paradigm for the genetic switch [15, 16], and in the time since then those ideas have been generalized, realized, and exploited across biology.

Figure 1:

Gallery of examples of regulatory circuits participating in the genetic decisions of animal development. (A) A three-node network thought to be relevant to the control of digit formation, adapted from [17]. (B) An example involving vulval development in C. elegans, where epidermal growth factor (EGF) and Notch induce cells toward one of three possible fates, adapted from [18] and [19]. (C) Transcription factors compete and maintain cell pluripotency unless sufficiently induced to reprogram a cell to a differentiated fate, adapted from [20].

The repressor-operator model of Jacob and Monod provided not only a successful conceptual vision for gene expression writ large, but also served as the basis of mathematical models of transcription based upon the precepts of statistical mechanics [21–23]. These models provided a quantitative description of a variety of different regulatory contexts in which the strengths of binding sites, the repressor copy number and DNA loop length were altered, illustrating how genetic circuits could be tuned directly and quantitatively [24–28]. Interestingly, these pioneering studies became a jumping off point for the construction of a number of synthetic variants that when combined with fluorescent reporters made it possible to watch synthetic switches and oscillators in real time in single cells [29–31].

In addition to the seminal discoveries of the existence of gene circuits themselves, a parallel set of discoveries unfolded which added a second layer of regulatory control to the original repressor-operator model and its subsequent generalizations and elaborations. Specifically, the mystery of induction required another insight into biological feedback and control. Enzymatic adaptation, the idea that somehow enzymes that were latent would become active in the presence of the right substrate [6], led to the discovery of allostery, a concept that Monod himself referred to as the “second secret of life” [32]. In the context of gene regulation, this idea implies that transcription factors themselves are subject to control through the binding of effector molecules that alter their activity [33–40]. Writ large, these insights now fall under the general heading of allosteric transitions, a phenomenon in which proteins of all types undergo conformational changes that alter their activity. This idea applies broadly to ion channels, enzymes, the respiratory protein hemoglobin, membrane receptors mediating chemotaxis and quorum sensing, and of course, to the main subject of our paper, transcription factors [40].

The mathematical analysis of genetic circuits is its own fascinating enterprise, using the tools of dynamical systems to explore the stability of switches and oscillations [41–45]. The idea for describing some circuit involving n different proteins is to write dynamical equations of the form

$$\frac{d\,TF_i}{dt} = f_i(\{TF_j\}), \tag{1}$$

where there is one such equation for each transcription factor (for which we will often use the shorthand notation TF). The function on the right side acknowledges that the dynamics of the $i$th TF can depend upon the concentrations of all the others, represented here by the notation $\{TF_j\}$ signifying “the set of all n TFs.” Perhaps the simplest such example, which we will discuss as our first case study, is the auto-activation switch shown in Fig. 2 and described by an equation of the form

$$\frac{dA}{dt} = -\gamma A + \frac{r_0 + 2 r_1 \frac{A}{K_d} + r_2\,\omega\left(\frac{A}{K_d}\right)^2}{1 + 2\frac{A}{K_d} + \omega\left(\frac{A}{K_d}\right)^2}, \tag{2}$$

where A is the number of activators, Kd is the dissociation constant for A in its interaction with DNA, and ω is the cooperativity between two activators bound to the gene promoter along the DNA. The first term on the right captures the degradation of activator at rate γ, and the second term characterizes protein production with a basal level of production r0 and a saturating level r2.
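To make the balance between these two terms concrete, the right-hand side of Eqn. 2 can be evaluated numerically. The following Python sketch is our own illustration; the function name and default parameter values (taken from the figure captions of this section) are assumptions for illustration, not part of the original analysis:

```python
import numpy as np

def dA_dt(A, gamma=1.0, r0=0.1, r1=1.0, r2=20.0, omega=10.0, Kd=1.0):
    """Right-hand side of Eqn. 2: degradation plus thermodynamic production.

    A and Kd share units of concentration; the rates r0, r1, r2 are in
    concentration per unit time. Defaults mirror the figures of this section.
    """
    x = A / Kd  # scaled activator concentration
    production = (r0 + 2.0 * r1 * x + r2 * omega * x**2) \
                 / (1.0 + 2.0 * x + omega * x**2)
    return -gamma * A + production
```

At A = 0 the production term reduces to the basal rate r0, while for large A it saturates at r2, so degradation eventually dominates and the dynamics are always bounded.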

Figure 2:

The auto-activation regulatory circuit. (A) Schematic of the operation of the circuit. Polymerase binding at the promoter (blue) transcribes the gene (encoded in the light green region), producing a protein that can activate its own expression at a sufficient concentration. In our model, an activator can bind at one of two possible sites to enhance gene transcription. (B) Thermodynamic states, weights, and rates for the circuit in the traditional model without induction. The parameter ω denotes the cooperative strength of two activators binding. (C) Thermodynamic states, weights and rates for the case in which the effector tunes the fraction of active activators. Note that in both of these cases the parameters Kd, ω, r0, r1 and r2 are effective parameters that have hidden dependence upon the number of polymerases and the strength with which it binds the promoter. The explicit definitions of these effective parameters are worked out in Appendix A.

The production rate in Eqn. 2, and throughout this work, is modeled using a thermodynamic framework relating promoter occupancy to output [21, 23, 27, 46–48]. We note that often instead of adopting the full thermodynamic model to treat promoter occupancy, it is considered convenient to use Hill functions as an approximation to describe the probability of promoter binding [42–45]. The auto-activation example will serve as our first foray into the problem of induction of genetic circuits by effector molecules as well as an opportunity to bring some critical scrutiny to the use of Hill functions to describe the physics of occupancy.

Typically, the exploration of the stability behavior of these circuits is based upon varying theoretically accessible parameters such as dissociation constants (Kd), transcription rates (ri) and degradation rates (γ) as featured in Eqn. 2 and in Fig. 3(B–D), without reference to how such parameters are themselves controlled by living cells [40]. In fact, often the underlying response is dictated by the presence and absence of experimentally accessible effector molecules that alter the balance between inactive and active forms of key regulatory molecules such as transcription factors as shown in Fig. 3(A). Thus, while all of the tunable parameters in Fig. 3 make it possible to systematically tune the level of gene expression, some of these parameters are more conveniently accessible to the experimentalist and to the cell itself as it rapidly tunes its behavior in response to stimuli. Our goal is to use explicit statistical mechanical models of the induction phenomenon to explore the behavior of genetic circuits as a function of the presence or absence of effectors.

Figure 3:

Tuning genetic circuits. The schematic shows different knobs which are available to the cell, the theorist and the experimentalist, namely (A) effector concentration (and by consequence the number of active activator or repressor molecules), (B) binding affinity Kd, (C) protein production rate r, and (D) cooperativity ω.

In the next few sections, we work our way through an array of increasingly sophisticated gene regulatory circuits and leverage the statistical mechanical framework to uncover how the presence of effectors dictates complex gene expression dynamics. We envision that the predictions and systematic analysis stemming from our work will make it possible to better understand how cells exploit these genetic circuits to regulate their decision making processes, as well as enable the predictive design of synthetic circuits with prescribed functions in response to input effector dynamics.

II. THE STATISTICAL MECHANICS OF INDUCTION

We now undertake a systematic analysis of a number of different regulatory circuits from the point of view of allosteric regulation, building upon earlier work in which the biological parameters are tuned by hand rather than through the action of effectors [29, 42, 43, 49, 50]. In particular, we argue that often the number of active transcription factors TFact is given by

$$TF_{act} = p_{act}(c)\,TF_{tot}, \tag{3}$$

where TFtot is the total number of transcription factors and pact(c) is the probability that the transcription factor is active as a function of the effector concentration c. We will show that the fraction of transcription factors that are active can be given by the Monod-Wyman-Changeux (MWC) model, which can be used to compute pact(c) using statistical mechanics [33–40]. Note that although we invoke the MWC model to describe allostery, we could just as well use the KNF model or even the phenomenological Hill functions to capture the role of the effector [40].

In the MWC model, we consider an inactive state and an active state of the transcription factor with energy difference $\varepsilon = \varepsilon_i - \varepsilon_a$. The states and weights for such an allosteric transcription factor with two binding sites are shown in Fig. 4. Appealing to these states and weights, the probability of a transcription factor being active is then of the form

$$p_{act}(c) = \frac{\left(1 + \frac{c}{K_A}\right)^2}{\left(1 + \frac{c}{K_A}\right)^2 + e^{-\beta\varepsilon}\left(1 + \frac{c}{K_I}\right)^2}, \tag{4}$$

where c is the concentration of effector molecules. Here we define β=1/kBT, and KA and KI as the dissociation constants for the transcription factor in its active and inactive states, respectively.
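Eqn. 4 maps directly onto a few lines of code. The sketch below is our own (Python); the default parameter values are the ones quoted later in Fig. 5 and are used here purely for illustration:

```python
import numpy as np

def p_act(c, K_A=140.0, K_I=0.530, beta_eps=4.5):
    """MWC probability that the transcription factor is active (Eqn. 4).

    c, K_A and K_I are in uM; beta_eps is beta * epsilon (dimensionless).
    With K_I < K_A the effector preferentially stabilizes the inactive
    conformation, so p_act decreases monotonically with c.
    """
    active = (1.0 + c / K_A) ** 2
    inactive = np.exp(-beta_eps) * (1.0 + c / K_I) ** 2
    return active / (active + inactive)
```

Because the two conformations compete through their squared binding polynomials, swapping the relative magnitudes of K_A and K_I turns this decreasing curve into an increasing one, as discussed in the text.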

Figure 4:

States and weights for an allosteric transcription factor with an effector that can bind at two sites on the molecule. The sums of the thermodynamic weights for the active and inactive conformations are shown at the bottom.

An alternative way of thinking about this approach is to express an effective dissociation constant Kdeff between transcription factors and DNA as

$$K_d^{\mathrm{eff}} = K_d / p_{act}(c), \tag{5}$$

where Kd is the fixed physical dissociation constant and pact(c) modulates the activity of that transcription factor in an effector-concentration-dependent way. For a transcription factor with two effector binding sites, we can write the probability of being in the active state in the form given by Eqn. 4. The activity of the transcription factor as a function of effector concentration is shown in Fig. 5(A). This input-output function has the typical sigmoidal behavior of pact(c). It is worth noting that here we show the behavior of a transcription factor for which the effector renders the proteins inactive. By tuning the relative values of the active and inactive dissociation constants, however, we can also generate situations in which the activity of the transcription factor increases with effector concentration.

Figure 5:

Activity of a transcription factor as a function of its effector concentration c. Unless stated otherwise, the parameters used throughout this work are: $K_A = 140\,\mu M$, $K_I = 530\,\mathrm{nM}$, $\varepsilon = 4.5\,k_BT$. (A) Probability of active transcription factor as a function of effector concentration, defined by Eqn. 4. The half-maximal effective concentration $EC_{50}$, defined as the effector concentration $c^*$ such that $p_{act}(c^*) = (p_{act}^{max} + p_{act}^{min})/2$, is plotted in purple. (B) The effective dissociation constant $K_d^{\mathrm{eff}}$ (dimensionless with respect to $K_d$) as a function of effector concentration. Saturation of $p_{act}(c)$ corresponds to minimal $K_d^{\mathrm{eff}}$, and leakiness of $p_{act}(c)$ corresponds to maximal $K_d^{\mathrm{eff}}$.

Writing $K_d^{\mathrm{eff}} = K_d/p_{act}(c)$ in gene regulatory network motifs is useful and informative in many ways. First, traditionally, theoretical models of the behavior of network motifs are studied by tuning $K_d^{\mathrm{eff}}$, while experiments generally tune the effector concentration c. Incorporating $p_{act}$ bridges theory and experiments by providing experimentally accessible “knobs” to control the behavior of the genetic circuit of interest. Second, since $p_{act}(c)$ is highly nonlinear in c, model variables might react to varying c differently than to varying $K_d^{\mathrm{eff}}$. Third, $p_{act}(c)$ constrains the range of accessible $K_d^{\mathrm{eff}}$ values. When discussing input-output curves, there are a variety of summary parameters that help us understand their character qualitatively. For example, in the case where effector binding renders the protein inactive, the leakiness is the amount of activity at saturating concentrations of effector, namely, $p_{act}(\infty)$. Similarly, the maximal activity, known as the saturation, occurs in the zero-effector limit, namely, $p_{act}(0)$. These important summary variables can be calculated as

$$p_{act}^{max} = \lim_{c \to 0} p_{act} = \frac{1}{1 + e^{-\beta\varepsilon}} \tag{6}$$

and

$$p_{act}^{min} = \lim_{c \to \infty} p_{act} = \frac{1}{1 + e^{-\beta\varepsilon} K_c^2}, \tag{7}$$

where $K_c = K_A/K_I$. With the parameters used in Fig. 5(A), $p_{act}^{min}$ and $p_{act}^{max}$ are separated by about three orders of magnitude, and thus $K_d^{\mathrm{eff}}$ can also only be tuned across three orders of magnitude, as shown in Fig. 5(B). The restricted $K_d^{\mathrm{eff}}$ range can have important consequences. For example, consider a bistable system with a stability curve such as that shown in Fig. 6. The x-axis is $K_d^{\mathrm{eff}}$ and the y-axis tracks the steady-state concentration of some protein A. The range of $K_d^{\mathrm{eff}}$ resulting in bistability might not be fully accessible, depending on the baseline $K_d$ value.
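These limits are easy to confirm numerically. In the sketch below (our own Python, reusing the Fig. 5 parameters), the closed-form expressions of Eqns. 6 and 7 agree with Eqn. 4 evaluated at extreme effector concentrations, and their ratio sets the accessible dynamic range of the effective dissociation constant:

```python
import numpy as np

K_A, K_I, beta_eps = 140.0, 0.530, 4.5   # uM, uM, dimensionless (Fig. 5)
K_c = K_A / K_I

def p_act(c):
    """Eqn. 4 with the parameters above."""
    active = (1.0 + c / K_A) ** 2
    inactive = np.exp(-beta_eps) * (1.0 + c / K_I) ** 2
    return active / (active + inactive)

p_max = 1.0 / (1.0 + np.exp(-beta_eps))               # Eqn. 6, c -> 0
p_min = 1.0 / (1.0 + np.exp(-beta_eps) * K_c ** 2)    # Eqn. 7, c -> infinity

# Kd_eff = Kd / p_act(c) can only span a factor p_max / p_min
dynamic_range = p_max / p_min
```

With these parameters dynamic_range comes out a bit under 10^3, matching the roughly three orders of magnitude quoted in the text.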

Figure 6:

Allosteric tuning restricts the range of accessible $K_d^{\mathrm{eff}}$. The dark blue curve shows the steady-state values of activator concentration A, with filled circles indicating stable fixed points and open circles indicating unstable ones. The light blue and light green shaded regions represent two example ranges of accessible $K_d^{\mathrm{eff}} = K_d/p_{act}(c)$ values, obtained by varying the effector concentration c for two proteins with different DNA binding affinities (and thus different baseline $K_d$ values). For the blue region, $K_d = 3.2\times10^{-2}\,\mu M = 10^{-1.5}\,\mu M$; for the green region, $K_d = 1\,\mu M$. The parameters for the auto-activation stability curve are: $\gamma = 1/\mathrm{min}$, $r_0 = 0.1\,\mu M/\mathrm{min}$, $r_1 = 1\,\mu M/\mathrm{min}$, $r_2 = 20\,\mu M/\mathrm{min}$, $\omega = 100$. If $K_d$ itself were tunable, the full positive $K_d^{\mathrm{eff}}$ axis would be accessible. However, tuning the biologically relevant parameter c imposes constraints on the range of achievable $K_d^{\mathrm{eff}}$. The light blue range does not intersect the bistable region, while the light green range does, illustrating that bistability may not be fully accessible due to the functional dependence of $K_d^{\mathrm{eff}}$ on $p_{act}(c)$.

Our fundamental goal is to reconsider the classic stability analysis for a broad array of different regulatory architectures in light of the MWC model for transcription factor activity described above. For example, as seen in Fig. 6, tuning the Kd for the simple auto-activation circuit yields two stable fixed points. In the first section, we examine this circuit by modulating the concentration of effector molecules, which tunes the concentration of active transcription factors. From this simple genetic circuit, we then turn to the ubiquitous mutual repression switch, well known not only as a key part of the repertoire of natural genetic circuits as shown in Fig. 7, but also as one of the classic examples of synthetic biology [29]. Both auto-activation and mutual repression can exhibit bistable dynamics [29, 42, 49, 50]. We examine the conditions for bistability as well as the relaxation dynamics in each. Finally, we turn to three-gene feed-forward loops, in which an input gene regulates expression of another both directly and indirectly through regulation of an intermediary. Depending on the precise architecture, these circuits exhibit unique time-varying behavior in response to pulsing effector signals [51–53].

Figure 7:

Mutual repression is ubiquitous in cellular decision making. (A) The bacteriophage lambda switch that mediates the phage decision to become a lysogen in the bacterium or engage in lysis through mutual repression of cI and cro (and some autoactivation of cI as well). (B) In hematopoietic development, mutual repression between different genes has been suggested to ensure the switch-like adoption of alternate cellular fates. Diagram adapted from [54] and [55]. (C) Mutual repression in the early fruit fly gene network has been associated with the emergence of discrete domains of gene expression. Diagram adapted from [56] and [57].

III. BISTABILITY IN GENETIC CIRCUITS

Different gene networks serve different biological functions. Among the most important classes of networks are those that yield bistability. Bistability refers to the situation in which, for a given set of parameters, a system can exist in one of two possible stable steady states. Such a feature is biologically significant. For example, often the expression level of one protein determines a cell’s fate. To obtain cells with different functions, there might be some cells with high concentrations of the protein of interest and some cells with low concentrations, requiring a bistable system regulating the protein in question. Broad computational surveys have shown that switch-like, bistable behavior can emerge from a wide range of simple biochemical network architectures [58]. Here, we analyze the two simplest and most ubiquitous gene circuits that produce bistability [44], considered now through the new lens of how effector molecules modulate the dissociation constants of regulatory proteins binding to DNA. This is in contrast to the conventional setting in which the tuning strategy is simply to modulate the binding parameters within thermodynamic models [42–44, 49], rather than those parameters being naturally tuned through the action of signaling molecules.

A. The auto-activation regulatory motif

Auto-activation circuits, in which a gene product enhances its own transcription, are among the simplest genetic regulatory motifs and are capable of generating bistable behavior [42]. Such motifs have been studied extensively in both synthetic and natural biological systems. In vitro synthetic networks have demonstrated robust switching between high and low expression states under controlled biochemical conditions [59], and positive feedback loops have been implicated in natural processes like cell differentiation, where they convert graded input signals into binary gene expression responses [60]. Motivated by these observations, we now turn to a theoretical analysis of the auto-activation switch. The architecture we study is shown schematically in Fig. 2, where a transcription factor activates its own production, forming a feedback loop.

The states included in the states and weights diagram of Fig. 2 reveal that an essential element of achieving bistability is the cooperative binding of more than one activator on the regulatory region of the gene’s promoter. Given these states, weights, and rates, we can now write the kinetic equation governing the dynamics of the auto-activation system as

$$\frac{dA}{dt} = -\gamma A + \frac{r_0 + 2 r_1 \frac{p_{act}(c) A}{K_d} + r_2\,\omega\left(\frac{p_{act}(c) A}{K_d}\right)^2}{1 + 2\frac{p_{act}(c) A}{K_d} + \omega\left(\frac{p_{act}(c) A}{K_d}\right)^2}, \tag{8}$$

where $\gamma$ is the degradation rate of protein A, $r_i$ is the production rate when $i$ activators are bound, $\omega$ is the cooperativity of activator binding, and $K_d$ is the biophysical dissociation constant specific to a gene and a transcription factor. The effect of allosteric regulation is included in $p_{act}(c)$, as we introduced earlier. This probability modifies the active transcription factor concentration from A to $p_{act}(c)A$.

Note that we could alternatively describe this auto-activation switch by explicitly considering all possible regulatory states with bound and unbound RNA polymerase (RNAP), and explicitly defining an energy for interaction between activators and RNAP. Appendix A1 demonstrates, however, that this representation is equivalent to Eqn. 8, with the rates ri, dissociation constant Kd, and cooperativity ω implicitly dependent on polymerase concentration, the strength of polymerase binding to the DNA, and the strength of interaction between polymerase and activator. The discussion throughout this paper will thus remain in the equivalent coarse-grained realm, as depicted in Fig. 2, and effectively focus on polymerase-bound states.

It is helpful to write our dynamical equation in dimensionless form. To do so, we non-dimensionalize the system by using 1/γ as the unit of time and Kd as the unit of concentration. Within the pact paradigm, we can write the dynamical equation in dimensionless form as

$$\frac{d\tilde{A}}{d\tilde{t}} = -\tilde{A} + \frac{\tilde{r}_0 + 2 \tilde{r}_1 p_{act} \tilde{A} + \tilde{r}_2\,\omega\left(p_{act} \tilde{A}\right)^2}{1 + 2 p_{act} \tilde{A} + \omega\left(p_{act} \tilde{A}\right)^2}, \tag{9}$$

where $\tilde{t} = \gamma t$, $\tilde{A} = A/K_d$ and $\tilde{r}_i = r_i/(\gamma K_d)$.

At a given effector concentration, we can represent the gene expression dynamics that unfold through a phase portrait as in Fig. 8. The points of intersection of the production and degradation curves correspond to steady state activator concentrations. Fig. 8 highlights a system that can stabilize to one of two possible states with a high (Ahigh) or low (Alow) activator concentration. Depending on the initial concentration of activator protein, the system will converge to one of these stable points. At the unstable steady state Aunstable, only a small perturbation is needed for the system to evolve toward one of the two stable steady states.
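The fixed points just described can be located numerically by scanning the dimensionless right-hand side of Eqn. 9 for sign changes and refining each crossing by bisection. The sketch below is our own (Python; the grid resolution, search interval, and helper names are illustrative choices, with parameters from Figs. 8 and 9):

```python
import numpy as np

def p_act(c, K_A=140.0, K_I=0.530, beta_eps=4.5):
    """Eqn. 4, with concentrations in uM."""
    active = (1.0 + c / K_A) ** 2
    return active / (active + np.exp(-beta_eps) * (1.0 + c / K_I) ** 2)

def dA_dt(A, c, r0=0.1, r1=1.0, r2=20.0, omega=10.0):
    """Dimensionless Eqn. 9: time in units of 1/gamma, A in units of Kd."""
    x = p_act(c) * A
    return -A + (r0 + 2.0 * r1 * x + r2 * omega * x**2) \
                / (1.0 + 2.0 * x + omega * x**2)

def fixed_points(c, A_max=25.0, n=200001):
    """Steady states of Eqn. 9 at effector concentration c."""
    A = np.linspace(0.0, A_max, n)
    f = dA_dt(A, c)
    roots = []
    for i in np.flatnonzero(np.sign(f[:-1]) != np.sign(f[1:])):
        lo, hi = A[i], A[i + 1]
        for _ in range(50):          # bisection refinement
            mid = 0.5 * (lo + hi)
            if np.sign(dA_dt(mid, c)) == np.sign(dA_dt(lo, c)):
                lo = mid
            else:
                hi = mid
        roots.append(0.5 * (lo + hi))
    return roots
```

At c = 25 uM (the conditions of Fig. 8) this search finds three fixed points, while at c = 0 the system is monostable at high activator concentration.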

Figure 8:

Plot of production and degradation rates for an auto-activation switch as a function of activator concentration. This figure illustrates the competition between the production and degradation terms for a system with rate constants $r_0 = 0.1$, $r_1 = 1$, $r_2 = 20$, and cooperativity $\omega = 10$ at effector concentration $c = 25\,\mu M$. Intersections of the curves denote stable (filled nodes at low concentration $A_{low}$ and high concentration $A_{high}$) and unstable (unfilled node $A_{unstable}$) fixed points. The vectors denote a phase portrait representing the direction and magnitude of change in activator concentration as a function of activator concentration itself.

We can now qualitatively visualize how the dynamics of auto-activation transform at different effector concentrations. Specifically, as we will see explicitly in the next section, for each effector concentration c we generate a phase portrait analogous to that shown in Fig. 8. We then determine the number of stable and unstable fixed points and their corresponding activator concentrations. Performing this analysis as we tune effector concentration yields the bifurcation curve shown in Fig. 9. At a low effector concentration, activators are more likely to be found in their active configurations, enhancing gene expression such that the system always stabilizes to a state with a high concentration of activator. The magnitude of this concentration approaches a maximal value defined by the rate r2 for activated protein production. As the effector concentration increases, the system becomes bistable, allowing a bimodal distribution in protein concentrations for an ensemble of cells [50].

Ultimately, at sufficiently high effector concentration, activators are sequestered in their inactive configuration such that the system can only stabilize to a state with low activator concentration. The magnitude of activator expressed is then largely defined by the basal rate of production without bound activator, i.e., r0. Tracking the system’s corresponding production and degradation curves through a series of snapshots in Fig. 9, we observe that these shifts between bistable and monostable dynamics emerge because the increasing effector concentration shifts the system production curve toward higher activator concentration. Expressed differently, as effector concentration increases, a higher activator concentration is necessary to achieve a given production rate.

Note that the threshold at which the system switches from one stable state to another differs when increasing and decreasing the effector concentration. If the system initially contains a high concentration of activator before tuning, a higher concentration of effector is necessary to switch to the low activator state than is required when decreasing effector concentration for a system with initially low activator concentration. Considering Fig. 9 again, the higher threshold corresponds to the maximum effector concentration at which the system is bistable, and the lower threshold to the minimum concentration at which the system is bistable. Fig. 10 illustrates this phenomenon of hysteresis more explicitly, overlaying the previously discussed bifurcation diagram, and showing how the threshold at which the blue stable equilibrium trajectory switches from high to low activator concentration differs from the threshold for the orange trajectory tracing the switch in reverse.
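This hysteresis can be reproduced with a quasi-static sweep: integrate the dimensionless dynamics of Eqn. 9 to steady state at each effector concentration, seeding each relaxation with the previous steady state. The sketch below is our own (Python, forward Euler; the concentration grid, step size, and iteration counts are illustrative assumptions, with parameters from Fig. 9):

```python
import numpy as np

def p_act(c, K_A=140.0, K_I=0.530, beta_eps=4.5):
    """Eqn. 4: probability that the activator is in its active state."""
    active = (1.0 + c / K_A) ** 2
    return active / (active + np.exp(-beta_eps) * (1.0 + c / K_I) ** 2)

def relax(A, c, r0=0.1, r1=1.0, r2=20.0, omega=10.0, dt=0.05, steps=3000):
    """Forward-Euler integration of the dimensionless Eqn. 9 toward steady state."""
    p = p_act(c)                      # effector enters only through p_act
    for _ in range(steps):
        x = p * A
        prod = (r0 + 2.0 * r1 * x + r2 * omega * x * x) \
               / (1.0 + 2.0 * x + omega * x * x)
        A += dt * (prod - A)
    return A

c_grid = np.arange(1.0, 201.0, 1.0)   # effector concentrations in uM

# Upward sweep: start on the high-activator branch at low effector concentration.
branch_up, A = [], relax(20.0, c_grid[0])
for c in c_grid:
    A = relax(A, c)                   # previous steady state seeds the next
    branch_up.append(A)

# Downward sweep: start on the low-activator branch at high effector concentration.
branch_down, A = [], relax(0.1, c_grid[-1])
for c in c_grid[::-1]:
    A = relax(A, c)
    branch_down.append(A)
branch_down = branch_down[::-1]
```

Inside the bistable window the two sweeps disagree: the upward sweep holds the high state while the downward sweep holds the low state, and each snaps to the other branch at a different threshold concentration.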

Figure 10:

Plot of stable state evolution, exhibiting hysteresis under different trends for effector concentration. The blue curve plots how the stable state evolves with increasing effector concentration from an initial high concentration of activator. The orange dashed curve plots the change in stable activator concentration starting from an initially low level as effector concentration decreases. Comparing to the bifurcation diagram (grey) previously shown in Fig. 9, the history of steady-state evolution determines the threshold concentration of effector at which the system switches state, highlighting hysteretic behavior.

We are particularly interested in characterizing the conditions for bistability to occur at effector concentrations c, as well as how the bistable regime (if it exists) responds to changes in parameters such as production rates and cooperativity. While previous studies have analyzed bistability in auto-activation circuits using thermodynamic models without effectors [42, 61], our aim is to extend this to include effector dependence explicitly and systematically explore how the bistable region evolves across a broader parameter space.

1. Regimes of bistability

We now analyze the ways in which bistability emerges in the context of effector-mediated genes. One could begin simply by fixing a set of parameters and asking whether the system is bistable. In other words, can the concentration of gene product A settle at either a high or low steady state depending on the initial condition, thereby functioning as a binary switch?

The question becomes more nuanced, however, when taking the role of effector molecules into account. Some parameters are more intrinsic and less readily tunable than others. For instance, molecular constants such as binding affinities or cooperativity are typically encoded in the system’s molecular architecture. In contrast, cells can regulate the concentration of effector molecules relatively easily and rapidly, either through controlled expression, import/export mechanisms, or degradation pathways. Given this, the most relevant questions to ask are (i) under what constraints on the system’s intrinsic parameters can bistability be achieved for at least some range of effector concentration c, and (ii) how wide is that range? Indeed, even if the effector concentration is a more accessible control knob, the cell is unlikely to operate at a single precise value, if only due to molecular noise and environmental fluctuations. It is therefore biologically relevant to assess not just whether bistability is possible, but whether it is robust to such fluctuations in effector levels.

We observe that even under the idealized assumption that the cell could freely choose its effector concentration, bistable behavior arises only within a restricted range of parameter values. This is shown numerically in the red regions of Fig. 11. To investigate the conditions under which multiple steady states are possible, we also derive analytical bounds in parameter space in Appendix B1. By setting dA/dt=0 and rewriting the resulting expression in standard polynomial form, we can infer the number of positive real roots using classical results (i.e., Descartes’ rule of signs) that relate sign changes in the coefficients of a polynomial to the number of positive roots. This leads to a necessary condition on the parameters of the system: for bistability to be possible for at least one value of the effector concentration, the system must satisfy the inequalities

ωr2/2 > 1 + e^(-βε), (10)
2r1 < 1 + e^(-βε)(K/c)^2, (11)
ωr2 > 4r1. (12)
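The root-counting argument behind these bounds can be made concrete in a few lines of code. The sketch below is illustrative only (the polynomial shown is a generic example, not the specific steady-state polynomial obtained from Eqn. 9): it counts sign changes in a coefficient list, which by Descartes' rule of signs bounds the number of positive real roots.

```python
# Descartes' rule of signs: the number of positive real roots of a
# polynomial is at most the number of sign changes in its coefficient
# sequence (and differs from that count by an even number).
def descartes_bound(coeffs):
    """Upper bound on positive real roots from the coefficient list
    (highest degree first), ignoring zero coefficients."""
    nonzero = [c for c in coeffs if c != 0]
    return sum(1 for a, b in zip(nonzero, nonzero[1:]) if a * b < 0)

# Example: x^3 - 6x^2 + 11x - 6 = (x-1)(x-2)(x-3) has three sign
# changes, consistent with its three positive roots.
print(descartes_bound([1, -6, 11, -6]))  # -> 3
# A cubic with a single sign change can have at most one positive root:
print(descartes_bound([1, 2, 3, -6]))    # -> 1
```

A bistable switch needs three positive steady states (two stable, one unstable), so the coefficient sequence of the steady-state polynomial must exhibit at least three sign changes; demanding this yields parameter inequalities of the kind given above.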
Figure 11:

Range of effector concentrations for which the system exhibits bistability. The red shaded region indicates bistability. Outside this region, the system is monostable. Unless otherwise specified, parameters are fixed at: ω=7.5, r0=0.1, r1=1, and r2=20. (A) Bistable concentration range as a function of cooperativity ω, varied over the interval ω ∈ [10^-6, 10^6], corresponding to interaction energies between the two activators from approximately +14 kBT to -14 kBT, since ω = e^(-βεint). The dotted line shows the analytical lower bound on the minimal cooperativity required for bistability. (B) Bistable concentration range as a function of r0, sampled over the interval r0 ∈ [10^-6, r1]. (C) Bistable concentration range as a function of r1, sampled over the interval r1 ∈ [r0, r2]. (D) Bistable concentration range as a function of r2, sampled over the interval r2 ∈ [r1, 10^6]. The rate parameters are varied under the constraint of the auto-activation condition r0 ≤ r1 ≤ r2, which ensures that the production rate increases with the number of bound activators.

Notably, these conditions do not depend on r0.

In Fig. 11, we systematically modulate the four parameters of the auto-activation system: the strength of cooperative binding ω, the protein production rate r0 in the absence of activator binding, the protein production rate r1 when one activator is bound to the DNA, and the protein production rate r2 when two activators are bound. Fig. 11(A) shows the consequences of varying the cooperativity parameter ω, sampling values from 10^-6 to 10^6 to represent systems with positive cooperativity (ω>1), systems that are non-cooperative (ω=1), and systems with negative cooperativity (ω<1). Recall that cooperativity describes the energy of interaction between two bound activators, εint, and therefore can be written as ω = e^(-βεint). The range of cooperativity shown in the figure thus corresponds to interaction energies ranging from εint ≈ -14 kBT to εint ≈ +14 kBT, encompassing a broad and biologically relevant spectrum of interaction strengths [46, 47, 49].

Fig. 11(B), (C), and (D) explore the effects of varying the rates r0, r1, and r2, respectively. To do so, we impose constraints on these parameters to ensure that the system remains within the auto-activation regime, as defined in Appendix E. Specifically, we require that the production term in Eqn. 9 remains a monotonically increasing function of A, such that A consistently acts as an activator across all concentrations. This condition imposes the inequality r0 ≤ r1 ≤ r2, which we enforce throughout our analysis when varying r0, r1, and r2.
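As an illustration of how such a sweep can be carried out, the sketch below counts the fixed points of the dimensionless auto-activation dynamics on a grid in A. For simplicity we collapse the activation probability pact(c) into a single number p between 0 and 1 rather than modeling its dependence on c explicitly; the value p = 0.05 is a hypothetical choice used only for illustration.

```python
# Fixed points of the auto-activation dynamics dA/dt = f(A), with the
# production term from the states-and-weights model. The activation
# probability pact(c) is collapsed into one number p in (0, 1].
def f(A, p, w=7.5, r0=0.1, r1=1.0, r2=20.0):
    x = p * A  # dimensionless concentration of active activator
    return -A + (r0 + 2 * r1 * x + r2 * w * x**2) / (1 + 2 * x + w * x**2)

def count_fixed_points(p, A_max=30.0, n=30001):
    """Count sign changes of f on a uniform grid in A."""
    grid = [A_max * i / (n - 1) for i in range(n)]
    vals = [f(A, p) for A in grid]
    return sum(1 for a, b in zip(vals, vals[1:]) if a * b < 0)

print(count_fixed_points(0.05))  # three fixed points: bistable
print(count_fixed_points(1.0))   # one fixed point: monostable
```

Scanning p (or, equivalently, the effector concentration that sets it) while recording where the count jumps from one to three traces out the red regions of Fig. 11.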

We note that the necessary criteria for bistability derived in Eqns. 10–12 do not impose any effective constraint when tuning either r0, r1, or r2 for the set of parameters chosen in Fig. 11. Specifically, the predicted lower bound on r1 exceeds the upper limit on r1 allowed by the auto-activation constraint (i.e., r1 ≤ r2). Similarly, the threshold for r2 above which bistability is possible lies below r1, meaning the system no longer behaves as a strictly auto-activating unit. Indeed, we observe that for parameter values consistent with the auto-activation regime, when either r1 or r2 is varied individually in Fig. 11(C) and (D), the system fails to exhibit bistability only in a narrow region where r1 approaches r2. Note, however, that this behavior is not universal. For different parameter values, the relative positioning of these thresholds may change, and the necessary conditions specified by Eqns. 10–12 could become more explicitly informative. The conclusions drawn here are therefore specific to the parameter set used in this analysis.

We show that the parameters r2 and ω play similar roles in shaping the system’s ability to exhibit bistability. As seen in Fig. 11(A), there exists a critical threshold of cooperativity ω below which the system is strictly monostable, indicating that a minimal level of nonlinearity is required for bistability. The analytical bounds derived in Eqns. 10 and 12 accurately capture this threshold. Likewise, Fig. 11(D) demonstrates that r2 must exceed a minimum value to support bistability; below this threshold, the system remains monostable for all effector concentrations.

Increasing r2 strengthens the contrast between the high and low activator steady-state concentrations in a positively cooperative system (ω>1), and as a result, the system becomes bistable over a broader range of effector concentrations. Therefore, as either ω or r2 increases, the region of effector concentrations that supports bistability broadens. This leads to a wider hysteresis zone and expands the range over which the system can toggle between high and low steady states under a fixed set of parameters. The region of bistability in effector concentration space is displaced toward higher concentrations, where the activation probability pact(c) approaches the leakiness limit. This reflects the fact that the effector destabilizes the activator by decreasing its effective DNA binding affinity. In this sense, increasing effector concentration counteracts the effect of high ω and r2, which tend to promote high expression levels of A. These two opposing effects—activation-driven amplification and effector-driven destabilization—create a balance that enables bistability.

Due to the system’s leakiness, complete inactivation of the activator is never achieved, even at high effector concentrations. As a result, for sufficiently large values of ω and r2, bistability can occur for all concentrations above a finite lower bound cmin(ω, r0, r1, r2). This behavior corresponds to a limited region in parameter space where the system remains bistable at large c, as illustrated by the blue and green dotted lines in Fig. 11(A) and (C). In Appendices B2 and D, we derive analytical bounds that predict the onset and breakdown of this upper-unbounded bistable regime.

Beyond a certain point, at fixed effector concentration, further increases in either parameter have the opposite effect. When ω becomes too large, activators bind excessively tightly to the DNA, effectively locking the system into a high-expression state. Similarly, if r2 becomes too large, the system favors high levels of gene expression, and bistability is again lost (we analyze this transition using two-dimensional numerical sweeps and supporting analytical arguments in Appendix D). This behavior is a direct consequence of how the effector enters the model. If, instead of varying the effector concentration, we varied an effective dissociation constant Kd,eff, the bistable region would always remain bounded within a finite range of values.

This behavior contrasts with the effect of increasing r0 or r1. As shown in Fig. 11(B) and (C), higher values of either parameter reduce the width of the bistable region, and beyond a critical threshold, bistability disappears entirely. We can therefore infer that, in a positively cooperative system (ω>1), elevated values of r0 and r1 undermine the system’s ability to function as a bistable switch.

Interestingly, we observe bistability for values of the cooperativity parameter ω that are less than one, as seen in Fig. 11(A). This can be reconciled in two complementary ways. First, we can define an effective cooperativity for the system, given by ωeff=ωr2/2. From the necessary conditions for bistability derived in Eqns. 10 and 12 we find that ωeff>1 is required for bistability. While this condition is necessary but not sufficient, it suggests that ωeff captures the functional cooperativity of the system more accurately than ω alone, as it incorporates both the interaction between bound activators and the maximal rate of activator production. We can also reconcile our bistable results containing values of ω less than one by examining the effective Hill coefficient of the steady-state input–output function. As shown in Appendix G, bistability is observed only when the effective Hill coefficient exceeds one. This reinforces the idea that the system can exhibit bistability even when ω<1, provided the overall nonlinearity—quantified either by ωeff or the effective Hill coefficient—is sufficiently strong.

2. Comparing Hill function and thermodynamic formulations

Thus far our paper has employed a thermodynamic formulation of the auto-activation switch, rooted in the principles of statistical mechanics. In previous discussions of gene circuits, however, a phenomenological Hill function is the predominant method to model these dynamics (e.g., [29, 30]). We note the fascinating origins of the Hill function in the work of Archibald Hill. More than 100 years ago, Hill wrote down a description of the binding between oxygen and hemoglobin that we now know as the Hill function, which he wrote as

pbound(x) = (x/K)^n / (1 + (x/K)^n), (13)

where x is the concentration of O2 and K is its associated dissociation constant.

As Hill himself tells us, this functional form provides a summary of the occupancy of hemoglobin (the example he used, though it has been applied much more broadly). If we think of the huge topic of input-output functions in biology, then the kind of characteristics embodied in the Hill approach include a representation of leakiness (the amount of output even in the absence of input, pbound(0)), dynamic range, EC50 (the input concentration at which the output reaches half its maximum, EC50=K) and the sensitivity as measured by the slope of the input-output curve (usually in logarithmic variables) at the midpoint. It is instructive to hear Hill commenting on his thinking: “My object was rather to see whether an equation of this type can satisfy all the observations, than to base any direct physical meaning on n and K [62].” He goes further in his 1913 paper noting [63] “In point of fact n does not turn out to be a whole number, but this is due simply to the fact that aggregation is not into one particular type of molecule, but rather into a whole series of different molecules: so that equation (1) (our Eqn. 13) is a rough mathematical expression for the sum of several similar quantities with n equal to 1, 2, 3, 4 and possibly higher integers.” We think it important to remember that the Hill function is a phenomenological description of equilibrium binding that assumes certain states of occupancy are irrelevant, or alternatively, that bunches all of the states of occupancy into one effective non-integer state of occupancy.
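These summary statistics are easy to check numerically. The short sketch below evaluates Eqn. 13 and confirms that the output is half-maximal at x = K and that the logarithmic sensitivity at that midpoint, the slope of log pbound versus log x, equals n/2 (the slope of the log-odds, log[pbound/(1 - pbound)], would instead equal n everywhere).

```python
import math

def p_bound(x, K, n):
    """Hill function of Eqn. 13: occupancy as a function of ligand x."""
    u = (x / K) ** n
    return u / (1 + u)

K, n = 2.0, 4
# EC50: occupancy is exactly 1/2 when x = K.
print(p_bound(K, K, n))  # -> 0.5

# Logarithmic sensitivity d(log p)/d(log x) at the midpoint, by a
# central difference in log-space; analytically this equals n/2.
h = 1e-6
slope = (math.log(p_bound(K * math.exp(h), K, n))
         - math.log(p_bound(K * math.exp(-h), K, n))) / (2 * h)
print(round(slope, 3))  # -> 2.0  (= n/2)
```

The values of K and n here are arbitrary illustrative choices; the two identities hold for any positive K and n.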

In comparing and contrasting the two treatments of transcription factor binding, we find that they can yield different results. Hill functions approximate away some thermodynamic details (such as cooperativity) and describe the dynamics as

dA/dt = -A + r0 + r2 (pact(c)A)^n / (1 + (pact(c)A)^n), (14)

where n is the Hill coefficient. We observe that the functional form of the production term is similar to that of the thermodynamic model. In Appendix F, we further show that the probabilities of each state of transcription factor binding are similar between the Hill function and the thermodynamic model. Nevertheless, the steady states predicted by Hill functions surprisingly differ from the thermodynamic prediction in two principal ways.

First, Hill functions can contradict the thermodynamic model in their prediction of bistability, as shown in Fig. 12(A). We compute bifurcation curves—reporting the fixed points of the system—using both Hill and thermodynamic formulations under identical baseline parameters r0=0.1 and r2=2. For the thermodynamic model, we additionally explore a wide range of intermediate activation strengths r1 ∈ [0.1, 2] (ensuring r0 ≤ r1 ≤ r2) and cooperativity values ω ∈ [10^-4, 10^4]. The Hill function model with Hill coefficient n=2 predicts a bistable regime at low c, highlighted in light orange. In contrast, across all combinations of r1 and ω, the thermodynamic model exhibits no bistability in this same region, demonstrating that the Hill formulation can introduce bistability not permitted by physical thermodynamic constraints.
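The discrepancy can be reproduced with a minimal numerical comparison. In the sketch below we again collapse pact(c) into a single constant p (p = 0.95 is a hypothetical value standing in for a low effector concentration) and count fixed points on a grid: with the shared parameters r0 = 0.1 and r2 = 2, the Hill model with n = 2 yields three fixed points, while one representative thermodynamic parameter choice (r1 = 1, ω = 100) yields only one.

```python
# Fixed-point counts for the Hill (Eqn. 14) and thermodynamic (Eqn. 9)
# production terms, sharing r0 = 0.1 and r2 = 2. The activation
# probability pact(c) is treated as a constant p (illustrative value).
R0, R2, P = 0.1, 2.0, 0.95

def f_hill(A, n=2):
    u = (P * A) ** n
    return -A + R0 + R2 * u / (1 + u)

def f_thermo(A, r1=1.0, w=100.0):
    x = P * A
    return -A + (R0 + 2 * r1 * x + R2 * w * x**2) / (1 + 2 * x + w * x**2)

def count_roots(f, A_max=5.0, n=50001):
    grid = [A_max * i / (n - 1) for i in range(n)]
    vals = [f(A) for A in grid]
    return sum(1 for a, b in zip(vals, vals[1:]) if a * b < 0)

print(count_roots(f_hill))    # three fixed points: the Hill model is bistable
print(count_roots(f_thermo))  # one fixed point: this thermodynamic choice is not
```

This single parameter choice is of course weaker than the exhaustive sweep reported in the text, but it captures the qualitative point of Fig. 12(A).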

Figure 12:

Comparison between Hill function and thermodynamic treatments of bistability in the auto-activation switch. (A) The Hill function model predicts bistability even when it is forbidden in the thermodynamic formulation by the Descartes bounds. Shared parameters are r0=0.1, r2=2. Two illustrative thermodynamic curves are shown here in blue as the two remaining parameters, r1 and ω, are varied. Regardless of the choices of r1 and ω, the thermodynamic curves exhibit no bistability in the bistable region of the Hill function. One thermodynamic curve shown here has parameters r1=0.1, ω=2; the other has parameters r1=1, ω=100. (B) The Hill function fails to capture the tunability of the EC50 in the thermodynamic formulation. By changing the parameters r1 and ω, the switch from high to low concentrations of A can be tuned to occur at different effector concentrations c. Shared parameters are r0=0.1, r2=10. The left thermodynamic curve shown here has parameters r1=1, ω=1; the right thermodynamic curve has parameters r1=1, ω=50. Nevertheless, all Hill function curves switch at the same effector concentration, regardless of Hill coefficient n, with the evidence shown in Fig. 31 in Appendix F. The Hill functions shown in (A) and (B) have n=2.

Second, the Hill function model does not capture the variability in the effector concentration at which the switch is flipped—that is, the critical c value at which the system transitions from a high to a low steady-state level of A. In Fig. 12(B), we again compute the bifurcation curves with r0=0.1 and r2=10 for both the thermodynamic and the Hill model. The thermodynamic model predicts that this threshold depends sensitively on both r1 and ω, allowing the switch-like transition to occur over a broad range of effector concentrations. In contrast, all Hill function curves, regardless of Hill coefficient, undergo the transition at approximately the same c value, shown by the orange circle in Fig. 12(B). We show Hill function bifurcation curves with other Hill coefficients in Appendix F to demonstrate this fact. From this point of view, the thermodynamic model predicts a more flexible switch than the Hill function model. Given the relatively light computational requirements for the model systems discussed here, as opposed to more complex and high-dimensional gene-interaction networks [64], we will proceed with the thermodynamic formulation for all other gene circuits considered in the paper.

3. Timescale for stabilization

In addition to the question of steady states, it is interesting to examine the timescale for an inducible genetic circuit such as the auto-activation switch to reach steady state. In doing this analysis we remind the reader that our treatment assumes separation of time scales between the dynamics of effector binding and allosteric transitions, and the dynamics of the relaxation to steady state. As a result, the binding of effector and the allosteric state of the activator rapidly equilibrate as the activator concentration changes dynamically.

As shown in Fig. 13(A), the relaxation timescale depends strongly on the initial activator concentration A0. In particular, as seen in the figure, the relaxation timescale to steady state increases as the initial value of activator concentration, A0, gets closer to the unstable fixed point. The increased time to reach steady state near the unstable point reflects the typical behavior of positive feedback systems near bifurcations, where the system lingers near the threshold before switching [65]. However, for large initial concentrations of A, the dimensionless relaxation timescale approaches 1, as shown by the line at t=1 in Fig. 13(A). This corresponds to the protein degradation timescale 1/γ, which serves as the unit of time in our nondimensionalization. This also corresponds to the relaxation timescale of a simply regulated gene, activated solely by an upstream transcription factor. We find that auto-activation consistently delays the response compared to simple regulation, although the two converge in the high-A limit. Within the Hill-function framework, positive autoregulation has likewise been shown to slow gene circuit response times [66], in contrast to the accelerating effect of negative feedback [67].

Figure 13:

Dynamics of the auto-activation switch. The parameters of the system are fixed at: ω=7.5, r0=0.1, r1=1, r2=20, and c = 2×10^-5 M. (A) Time evolution of the activator concentration for various initial conditions. Each black curve represents a trajectory A(t) starting from a different initial condition A0. Dashed horizontal lines indicate the stable (Ahigh and Alow) and unstable (Aunstable) fixed points. (B) Relaxation timescale obtained from exponential fits to the trajectory A(t) as a function of the initial concentration A0. Vertical lines indicate the positions of the steady states, while horizontal dashed lines mark the timescales associated with small perturbations around each fixed point, computed from the inverse slope |f′(A)|^-1 of the function f defined in Eqn. 16. The solid horizontal black line corresponds to the reference timescale t=1.

To further interpret these trends, we apply linear stability analysis to Eqn. 9, as is commonly done in the study of dynamical systems [68]. Linearizing around a point Ai with A(t) = Ai + δA(t) and expanding the right-hand side, which we denote f(A), to first order yields

f(A(t)) ≈ f(Ai) + f′(Ai) δA(t) (15)

for sufficiently small δA(t). The function f is given by

f(A) = -A + (r0 + 2r1 pact(c)A + r2 ω (pact(c)A)^2) / (1 + 2 pact(c)A + ω (pact(c)A)^2). (16)

Therefore, the linearized equation governing the evolution of a small perturbation δA(t) around the point Ai becomes

dδAdt=fAi+fAiδA. (17)

If Ai is a steady state, then f(Ai)=0, and the equation reduces to exponential relaxation with dimensionless timescale |f′(Ai)|^-1. Stable fixed points, denoted Alow and Ahigh, satisfy f′(Alow/high) < 0, so perturbations decay over time. For unstable fixed points, where f′(Aunstable) > 0, small deviations grow exponentially and drive the system away from the fixed point.

However, if Ai is not a fixed point, then f(Ai) ≠ 0 and the solution to Eqn. 17 does not represent convergence to a steady state. Instead, it predicts that δA asymptotes to a finite offset -f(Ai)/f′(Ai), breaking the assumption of linearity (δA(t) → 0). In this case, the derivative f′(Ai) still encodes the local response to small perturbations, but only describes the dynamics while deviations remain small. Therefore, the timescale |f′(A)|^-1 is most meaningful in the vicinity of fixed points, even if it can still provide qualitative insights elsewhere.

These insights are reflected in Fig. 13(B), where we numerically compute the relaxation timescale as a function of the initial condition A0. For initial values near the stable fixed points, the timescale closely follows the linear prediction |f′(Alow/high)|^-1. In contrast, when the initial condition lies near the unstable fixed point, the system initially diverges exponentially and only later relaxes nonlinearly to a stable state. This leads to an overall increase in the time to reach steady state, consistent with the trajectories displayed in Fig. 13(A) (see Appendix H for the precise numerical procedure used to compute these timescales). Together, these results reveal that the stabilization timescale in auto-activation circuits is not fixed but depends sensitively on initial conditions, particularly near unstable fixed points—highlighting the importance of considering nonlinear transient dynamics when predicting response times in bistable gene networks.

B. The mutual repression regulatory motif

Many regulatory circuits in prokaryotes and eukaryotes alike are mediated by the interaction between two genes that repress each other as shown in Fig. 14 [54, 55, 69]. Indeed, one of the signature achievements of the early days of synthetic biology was the consideration of a mutual repression switch in bacteria whose behavior was reported by the use of fluorescent proteins and controlled by a small molecule inducer [29]. As the name suggests, two genes R1 and R2 mutually repress each other. To simplify our analysis we assume that the two genes share the same degradation rate γ and production rate r.

Figure 14:

The mutual repression regulatory circuit. (A) Schematic of the operation of the circuit. When the gene for repressor 1 is expressed, the resulting protein downregulates the expression of the gene for repressor 2. Repressor 2, in turn, downregulates the expression of the gene for repressor 1. (B) Thermodynamic states, weights, and rates for expression of repressor 1. In our model, a repressor can bind non-exclusively at one of two possible sites within the target promoter region to suppress gene transcription. The parameter ω2 denotes the cooperative strength between two bound repressors R2. (C) Thermodynamic states, weights, and rates for expression of repressor 2. The states and weights for the regulation of the promoter responsible for the production of repressor 2 are analogous to those shown for repressor 1. However, the dissociation constant of repressor 1 in this case is given by K1, and the cooperativity term for the interaction of two repressor 1 molecules bound to the DNA is ω1.

By inspecting the states and weights diagrams of Fig. 14, we can express the dynamics of repressor 1 expression as

dR1/dt = -γR1 + r / (1 + 2 pact(c2) R2/K2 + ω2 (pact(c2) R2/K2)^2), (18)

where we have defined an effector with concentration c2 that regulates the activity of repressor 2. Similarly, the dynamics for R2 expression are described by an equation analogous to Eqn. 18 but swapping R1 and R2, given by

dR2/dt = -γR2 + r / (1 + 2 pact(c1) R1/K1 + ω1 (pact(c1) R1/K1)^2), (19)

where we have also defined an effector with concentration c1 that regulates the activity of repressor 1. The production term attributes a rate r to the state with no bound repressors as shown in Fig. 14. As in auto-activation, this analysis only accounts for the presence of RNA polymerase implicitly, which we discuss further in Appendix A2.

We can write dimensionless forms of these kinetic equations by rescaling t → γt, Ri → Ri/K2, and r → r/(γK2), and obtain

dR1/dt = -R1 + r / (1 + 2 pact(c2) R2 + ω2 (pact(c2) R2)^2), (20)
dR2/dt = -R2 + r / (1 + 2 pact(c1) R1/K + ω1 (pact(c1) R1/K)^2). (21)

As a reminder, ω1 and ω2 are the cooperativities for R1 and R2, respectively, c1 and c2 are the inducer concentrations for each repressor, and K=K1/K2 is the ratio of dissociation constants. Note that the equations above assume that each repressor responds to induction with the same inducer-protein binding chemistry, obeying the same activation probability function pact, but responding to potentially different inducer concentrations c1 and c2. In the most general case, however, the two probability functions could differ.

This formulation now provides a two-dimensional knob for tuning the concentrations of the inducers corresponding to R1 and R2. The system may then be tuned to follow arbitrary trajectories in the two-dimensional parameter space spanned by c1 and c2. Frequently, this tuning generates non-trivial bifurcation curves.

To build intuition about this system, we first consider a scenario in which only one of the repressors may be induced. Fig. 15(A) plots the resulting bifurcation diagrams for R1 and R2 steady-state expression. At low inducer concentrations, the system is bistable and can either evolve to favor R1 (red) or R2 (blue) expression. Fig. 15(B) shows an example phase portrait within this low induction regime that depicts vector field flows through expression space toward these fixed points. At a sufficiently high inducer concentration, R1 expression can no longer be maintained, and the system transitions to a monostable regime in which only the state favoring R2 expression survives. A higher inducer concentration amplifies the expression of R2 up to its production limit. Fig. 15(C) depicts a phase portrait in this regime.
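The bistability in the low-induction regime can be demonstrated with a forward-Euler integration of Eqns. 20 and 21. As an illustration we set the activation probabilities pact(c1) = pact(c2) = 1 (a hypothetical low-inducer limit), with K = 1, r = 2, and ω1 = ω2 = 7.5 as in Fig. 15; two initial conditions on opposite sides of the separatrix then relax to different stable states.

```python
# Forward-Euler integration of the dimensionless mutual repression
# dynamics (Eqns. 20-21) with K = 1 and equal production rate r.
def simulate(R1, R2, p1=1.0, p2=1.0, r=2.0, w1=7.5, w2=7.5,
             dt=0.01, T=50.0):
    for _ in range(int(T / dt)):
        dR1 = -R1 + r / (1 + 2 * p2 * R2 + w2 * (p2 * R2) ** 2)
        dR2 = -R2 + r / (1 + 2 * p1 * R1 + w1 * (p1 * R1) ** 2)
        R1, R2 = R1 + dt * dR1, R2 + dt * dR2
    return R1, R2

# Two initial conditions on opposite sides of the diagonal:
state_a = simulate(2.0, 0.01)   # relaxes to the R1-dominant state
state_b = simulate(0.01, 2.0)   # relaxes to the R2-dominant state
print(state_a)  # R1 high, R2 low
print(state_b)  # R1 low, R2 high
```

Because K = 1 and the two pact values are equal, the two stable states are mirror images of each other across the diagonal R1 = R2.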

Figure 15:

Bifurcation diagrams and phase portraits for mutual repression in the presence of a single inducer regulating the activity of repressor R1 (with fixed parameters K=K1/K2=1, r=2 and ω1=ω2=7.5). (A) Bifurcation diagrams tracking steady-state R1 and R2 expression as inducer concentration increases. The red curves track expression in the steady state for which R1>R2, the blue curves track expression in the steady state for which R2>R1, and the unfilled nodes denote unstable saddle points. (B) Phase portrait at a low inducer concentration, demonstrating bistable dynamics between two possible stable states favoring either R1 (red) or R2 (blue). Steady states occur at the intersections of the nullclines as shown. (C) Phase portrait at a high inducer concentration, for which the system is monostable, favoring R2.

We now turn to a two-dimensional setting for allosteric regulation, with distinct inducers downregulating the activity of each repressor. Fig. 16(A) plots the phase diagram for the dynamics of mutual repression at different combinations of inducer concentrations. The dark green region denotes the region of parameter space with bistable dynamics, with monostable behavior elsewhere. Note the symmetry of this phase diagram with respect to the diagonal c1=c2. This is unsurprising given the condition K=1 which amounts to saying that the two different repressors bind with the same dissociation constant. This condition will be discussed further in the following subsection.
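A crude version of this phase diagram can be computed by classifying each point as bistable or monostable according to whether two mirrored initial conditions relax to distinct states. In the sketch below we scan the activation probabilities p1 = pact(c1) and p2 = pact(c2) directly (rather than modeling pact(c) explicitly), with the same illustrative parameters K = 1, r = 2, and ω1 = ω2 = 7.5.

```python
# Classify (p1, p2) pairs as bistable or monostable by integrating the
# mutual repression dynamics (Eqns. 20-21) from two mirrored initial
# conditions and comparing the final states.
def simulate(R1, R2, p1, p2, r=2.0, w=7.5, dt=0.01, T=50.0):
    for _ in range(int(T / dt)):
        dR1 = -R1 + r / (1 + 2 * p2 * R2 + w * (p2 * R2) ** 2)
        dR2 = -R2 + r / (1 + 2 * p1 * R1 + w * (p1 * R1) ** 2)
        R1, R2 = R1 + dt * dR1, R2 + dt * dR2
    return R1, R2

def is_bistable(p1, p2, tol=1e-3):
    a = simulate(2.0, 0.01, p1, p2)
    b = simulate(0.01, 2.0, p1, p2)
    return abs(a[0] - b[0]) > tol  # distinct endpoints: two stable states

print(is_bistable(1.0, 1.0))  # True: both repressors fully active
print(is_bistable(0.1, 1.0))  # False: repressor 1 largely inactivated
```

Evaluating is_bistable on a grid of (p1, p2) pairs, and mapping each pair back to inducer concentrations through the chosen pact(c), yields the green bistable region of Fig. 16(A).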

Figure 16:

Phase diagram and bifurcation diagrams for mutual repression when each repressor’s activity is regulated by an inducer (with fixed parameters K=K1/K2=1, r=2 and ω1=ω2=7.5.) (A) Phase diagram in which the dark green region denotes the inducer concentration regime for which the system has bistable dynamics. Trajectories (B) and (C) follow the shifts in dynamics moving through phase space along different paths, and are shown in the corresponding panels. (B) Bifurcation diagrams for R1 and R2 steady-state expression as the inducer concentrations controlling each repressor increase at the same rate. (C) Bifurcation diagrams as inducer concentrations increase at different rates.

We can now consider how dynamics evolve as inducer concentrations change at different rates. Fig. 16(A) considers different “protocols” for simultaneously varying the concentration of each inducer. For example, in Fig. 16(B), we examine a protocol in which the inducer concentration for each gene increases at the same constant rate (corresponding to the red trajectory shown in Fig. 16(A)). We then plot the corresponding bifurcation diagram. As inducer concentration increases, the scope of the bistable switch shrinks in expression space, with the stable states continuously approaching each other. At c1 = c2 ≈ 3.2×10^-6 M, the system then undergoes a pitchfork bifurcation to monostable expression, stabilizing at increasingly high concentrations of both R1 and R2.

We could follow an alternative trajectory (denoted by the purple arrow in Fig. 16(A)) through parameter space such that the inducer concentrations evolve at different rates, in this case with c1 increasing more rapidly than c2. This purple trajectory then passes in and out of the green bistable region several times. Fig. 16(C) plots the corresponding bifurcation diagram tracking stable and unstable steady states as the inducer concentrations increase, and demonstrates the switches between bistability and monostability. Note that while the intermediate monostable regime favors R1 expression, the monostable regime at later times favors R2 instead, reflecting the swap in the dominant inducer concentration that occurs between these time periods. Thus, by modulating the induction dynamics of each repressor we can access a broad range of dynamical responses in repressor concentrations.

1. Conditions for bistability

We now study how the size, shape, and symmetry of the bistable region observed in the phase diagram of Fig. 16(A) varies with system parameters. Specifically, in Fig. 17 we first identify three distinct geometries for the bistable region in the (c1,c2) plane, each reflecting different limiting behaviors of the inducers.

Figure 17:

Bistability regimes in mutual repression as a function of the relative DNA-binding affinities of repressors R1 and R2. Colored regions denote distinct bistable phase space geometries, defined by whether bistability occurs for a finite window of c1, a finite window of c2, or the presence of both at sufficiently small concentrations. Otherwise, the system is never bistable for any concentration (c1, c2) in the interval [10^-7 M, 10^-4 M] considered. These geometries evolve as the binding affinity ratio K=K1/K2 is varied, with ω1=ω2=7.5 and r=2 held constant.

The first geometry (marked orange in the legend of Fig. 17) corresponds to a situation where bistability is present only when c1 lies within a finite interval [c1,min, c1,max] for a given c2. In this case, both bounds of the interval are strictly positive and finite, and c2 must be smaller than a certain threshold. The limiting factor is therefore c1, which must be finely tuned to enable bistability, while c2 simply needs to remain below a critical level. Nevertheless, decreasing c2 broadens the bistable interval in c1, showing that lower c2 expands the range of c1 values supporting bistability.

A second geometry (marked red in the legend of Fig. 17) mirrors the first, but with the roles of c1 and c2 reversed. Here, bistability is present only when c2 lies within a finite interval, while c1 must remain below a threshold. In this case, c2 becomes the more constrained parameter to tune. In contrast, a third geometry (marked blue in the legend of Fig. 17) arises when bistability is supported broadly for small enough values of both c1 and c2, with no lower bound required for either parameter. Although the bistable region remains upper-bounded, neither inducer is particularly limiting, with a broad range of concentrations allowing for bistability as long as neither becomes too large.

Fig. 17 illustrates how these phase space geometries for bistability depend on system parameters. We vary the relative DNA-binding affinity of the repressors by tuning K=K1/K2, while keeping the basal production r fixed, and the cooperativities fixed and equal (ω1=ω2). We observe that the most permissive bistable region—broad in both c1 and c2, for the range of concentrations studied—occurs when K ≈ 1, corresponding to a symmetric system where both repressors bind with comparable affinity. As K decreases (i.e., R1 binds more tightly than R2), the phase space becomes increasingly constrained in c1. If c1 is too low, R1 remains fully bound and strongly represses R2, suppressing bistability. Conversely, if c1 is too high, R1 becomes fully unbound, leaving R2 unrepressed and again eliminating bistability. Only an intermediate range of c1 supports bistability in this regime, while c2 simply needs to be small enough. A similar scenario occurs when K becomes large (i.e., R2 binds more tightly than R1), but with c1 and c2 effectively reversed. Eventually, c2 is no longer sufficient to counteract the tight binding of R2, and above a critical value of K, bistability disappears entirely from the parameter space. Tuning the cooperativity parameter ω2 produces similar qualitative changes in the bistability phase space as varying the relative binding affinity K.

In both cases, we observe the same sequence of transitions in the structure of the bistable region, as shown in Appendix I3. High values of ω2 reflect strong cooperative binding of repressor R2 to the DNA, meaning that binding becomes more favorable when two repressors are present. This effect mirrors what happens when increasing K: if K>1, then K1>K2, implying that higher concentrations of R1 are required for effective DNA binding compared to R2. As a result, increasing K effectively enhances the influence of R2, analogous to increasing ω2. Conversely, tuning ω1 affects the system similarly, but with the roles of c1 and c2 reversed. The effects of cooperativity are examined in greater detail in Appendix I3.

From Fig. 17, we note that extreme (low or high) values of K tend to suppress bistability, as they strongly favor one repressor over the other across all concentrations. In contrast, high values of ω1 or ω2 amplify repression mostly when the corresponding repressor is present at high concentration. As a result, the system requires finely tuned inducer concentrations to counteract this cooperative imbalance and sustain bistability, effectively constraining the range of inducer concentrations for which the system can be bistable—as shown in Appendix I3.

We next explore how the interplay between cooperativity and production rate controls the presence and extent of bistability in mutual repression systems, focusing on the symmetric case K=1 shown in Fig. 18. We note that, in contrast to parameters like ω1, ω2, or K, tuning the rate parameter r does not break the symmetry between the two genes, as it controls the production rate of both repressors equally.

Figure 18:

Figure 18:

Bistability regions in mutual repression as a function of cooperativity and production rate. In both panels, the two repressors have equal DNA-binding affinities, with K=1. We determine the boundary in (ω1,ω2) space that separates regions where bistability is possible for some (c1,c2) from those where it is impossible. (A) Example for r¯=1. (B) Rather than showing the full bistability map, we report only this boundary curve for increasing r¯.

In Fig. 18(A), we classify parameter combinations in the (ω1,ω2) plane for r=1 according to whether the system exhibits bistability for any inducer concentrations c1,c2. Fig. 18(B) shows how the boundary between monostable and bistable regimes shifts with r, separating regions where bistability is either inaccessible or achievable for at least some inducer pairs. We observe that the production rate, when coupled to cooperativity, plays a critical role in enabling bistability—much like in the auto-activation system, where the product ωr² must exceed a threshold to generate bistability. We quantify this cooperative relationship between ωi and r further in Appendix I1 by deriving a necessary condition for bistability where

$$\bar{r} > \frac{1}{p_{\rm act}^{\rm max} - p_{\rm act}^{\rm min} + \min\left(\omega_2, \omega_1\right) p_{\rm act}^{\rm max}/2}, \qquad (22)$$

with pactmax and pactmin defined in Eqns. 6 and 7.

Across the range of r swept in Fig. 18, increasing r systematically expands the range of cooperativity values that can support bistability. We observe from the figure that at low production rates, bistability only arises when both repressors exhibit stabilizing cooperative binding to DNA (ωi>1). As r increases, this constraint relaxes: bistability becomes possible even without cooperativity (ω1=ω2=1), and for sufficiently high r, bistability can occur even in the presence of destabilizing interactions between the two repressors (ωi<1). Intermediate production rates typically require at least one positively cooperative repressor.

In auto-activation systems, the effective Hill coefficient can vary above or below one, but bistability only occurs when it exceeds one. In contrast, for mutual repression, the effective Hill coefficients for R1 and R2 vary with parameters but remain strictly greater than one when ω1,ω2>0 (Appendix I1), making the Hill coefficient less informative about the existence of bistability. Even in models using empirical Hill functions to describe the production terms, Hill coefficients above one are not always sufficient for bistability, and computational studies show that extended network interactions can yield bistability even with coefficients below one [70].

2. Timescale for stabilization

Beyond identifying the final steady state of the mutual repression system, it is important to characterize the time required for the system to reach steady state starting from different initial conditions. We define the relaxation time τ as the maximum of the times τ1 and τ2 taken by the two trajectories, R1(t) and R2(t), to reach 95% of their respective steady states. Fig. 19(A) illustrates the influence of the initial condition on the final steady state in a symmetric system, where K=1, ω1=ω2=7.5, and c1=c2=10⁻⁶ M. The figure reveals that the phase space is divided into two basins of attraction, each leading to one of the two stable steady states. The separatrix, plotted in green, denotes the boundary between these basins, and is derived analytically in Appendix J.
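To make the definition of τ concrete, the sketch below integrates a generic symmetric mutual repression (toggle switch) model (a stand-in with Hill-type production rather than the thermodynamic production functions used in the text, with parameter values chosen purely for illustration) and extracts τ = max(τ1, τ2) as the last time either trajectory lies outside 5% of its steady-state value.

```python
import math

def rhs(R1, R2, r=3.0):
    """Toy symmetric toggle switch: each repressor shuts off the other.
    The Hill-type production here is an illustrative stand-in only."""
    return (-R1 + r / (1.0 + R2**2),
            -R2 + r / (1.0 + R1**2))

def relax_time(R1, R2, h=1e-3, T=40.0):
    """Euler-integrate to time T and return (tau, steady state), with
    tau_i the last time R_i lies outside 5% of its final value."""
    traj = [(0.0, R1, R2)]
    while traj[-1][0] < T:
        t, a, b = traj[-1]
        d1, d2 = rhs(a, b)
        traj.append((t + h, a + h*d1, b + h*d2))
    R1s, R2s = traj[-1][1], traj[-1][2]
    tau1 = max((t for t, a, b in traj if abs(a - R1s) > 0.05*abs(R1s)), default=0.0)
    tau2 = max((t for t, a, b in traj if abs(b - R2s) > 0.05*abs(R2s)), default=0.0)
    return max(tau1, tau2), (R1s, R2s)

tau, (R1s, R2s) = relax_time(2.0, 0.1)    # deep inside the high-R1 basin
tau_near, _ = relax_time(1.25, 1.2)       # close to the diagonal separatrix
print(tau, tau_near)
```

Consistent with the behavior described for Fig. 19, the start near the diagonal (the separatrix of the symmetric system) relaxes far more slowly than the start deep inside a basin.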

Figure 19:

Figure 19:

Dynamics and relaxation time in a mutual repression system. (A) Influence of the initial condition on the final steady state for a symmetric case (K=1, ω1=ω2=7.5, c1=c2=10⁻⁶ M). Two stable steady states and one unstable steady state are shown. The colored regions correspond to sets of initial conditions that converge to each respective stable steady state. The green curve separating these regions is the separatrix. Dimensionless relaxation times τ1 and τ2 are defined as the time required for the system to approach within thresholds ϵ1 and ϵ2 of the stable fixed points, where these thresholds correspond to 95% of the respective steady-state values. The global relaxation time of the system, denoted τ, is then defined as the maximum of the two: τ = max(τ1, τ2). (B-C) Dimensionless relaxation time as a function of initial concentrations of repressors R1 and R2. Panel (B) corresponds to the symmetric case, whereas panel (C) plots an asymmetric case (K=1, ω1=50, ω2=7.5, c1=5×10⁻⁶ M, c2=10⁻⁶ M).

Fig. 19(B) and (C) quantify the relaxation time τ across the phase space (R10,R20) for both symmetric and asymmetric parameter regimes. In both cases, the relaxation time is shorter when the initial condition lies far from the separatrix and near the final stable steady state. In contrast, initial conditions close to the separatrix result in significantly longer relaxation times, as the system evolves slowly near the unstable fixed point before diverging toward a stable state. Exactly on the separatrix, the relaxation to steady state is more difficult to quantify because the system may remain indefinitely near an unstable manifold without converging to either stable fixed point. In practice, however, even minimal noise in a real system will eventually drive the system away from this unstable region toward a stable state. For this reason, we disregard the white line—signifying artificially short relaxation times—observed exactly on the separatrix in Fig. 19(B). This feature is absent in Fig. 19(C), as the separatrix has a more complex shape and the numerical sweep over initial conditions does not sample it precisely.

In the asymmetric case (Fig. 19(C)), with ω1=50, ω2=7.5, and distinct inducer concentrations (c1=5×10⁻⁶ M, c2=10⁻⁶ M), the phase space becomes skewed. The separatrix, corresponding to the ridge of maximal relaxation time in the grayscale colormap, delineates the boundary between the two basins of attraction. We do not overlay it explicitly, as doing so would interfere with the visualization of the relaxation times, which are particularly sensitive near this boundary. Compared to Fig. 19(B), where the separatrix coincides with the diagonal R10=R20 due to the symmetry of the system, we observe that the separatrix is now deformed, and the relative sizes of the basins of attraction have shifted. Despite these geometric changes, the maximal relaxation times across both regimes remain comparable. This indicates that, although asymmetry reshapes the phase space and can affect which attractor is reached, it does not substantially alter the overall timescale required for the system to stabilize. The main contribution to long relaxation times remains the proximity of the initial condition to the separatrix, regardless of symmetry.

This whole section has had as its primary ambition to carefully consider the famed mutual repression genetic switch from the new perspective in which the two repressors are controlled separately by different effector molecules. We have seen that the steady states and the dynamics in this case are extremely rich, making it clear that there is much freedom in the biological context to exploit different kinds of behavior.

IV. KINETICS AND TIME DELAYS IN FEED-FORWARD LOOPS

In this section, we consider gene circuits whose functionality appears in their dynamics rather than in their steady state responses. Specifically, we focus on a three-gene circuit called the feed-forward loop shown schematically in Fig. 20(A) [51]. Here, we denote the input genes as X and Y, and the output gene as Z. In a feed-forward loop, X regulates Y, and X and Y together regulate Z. X thus controls expression of output Z through both direct and indirect regulatory paths. This network typically features a sign-sensitive delayed or accelerated response (depending on architecture) to a step-wise change in the effector concentration for protein X [51]. That is, while the qualitative nature of the response (delay or acceleration) remains fixed, its magnitude depends on the sign (an increase or a decrease) of the input change. This delayed or accelerated response has been hypothesized to have important biological consequences, particularly in information-processing systems that filter noisy inputs [51]. Beyond their role in shaping temporal responses, coherent feed-forward loops have been shown to attenuate input noise, thereby enhancing the reliability of gene expression [71–73].

Figure 20:

Figure 20:

The coherent feed-forward loop. (A) Schematic representation of the coherent feed-forward loop. Expression of output protein Z is controlled by expression of protein X, either by direct activation or indirectly, first activating expression of Y which in turn activates Z. The regulatory circuit is coherent because both pathways have the same activating effect on Z. (B) Thermodynamic states, weights, and rates for expression of activator Y and output protein Z. Note that the model assumes both X and Y can bind together to activate expression at their respective binding sites, interacting with cooperativity ω.

Interestingly, there are various architectures of the feed-forward loop depending on whether X and Y work together or at cross purposes. We will largely focus on the particular architecture where X activates Y and Z, and Y also activates Z, the so-called type I coherent feed-forward loop, with the word “coherent” attached to this architecture since X and Y alter the expression of Z in a coherent manner. We consider this particular motif primarily because at the time of the most recent census of regulatory architectures in E. coli, this version of the feed-forward loop appeared the most frequently [74, 75]. The logic of our analysis can be applied to any of the other feed-forward architectures as well.

Previous literature explores feed-forward loops from a dynamical systems perspective using Hill functions to model transcription factor-DNA interactions and considering the effector concentration for X as a Boolean variable that is either fully on or fully off [51]. We also build upon that earlier analysis by systematically searching for network parameters that give rise to various functions. Our goal is to expand the theoretical understanding of the feed-forward loop architecture by incorporating the thermodynamic model to describe transcription factor binding to the DNA and to explicitly include effector function. Specifically, we explore what gives rise to the dynamical features of the feed-forward loop, the robustness of such features, and the effect of continuously tuning the effector concentration.

As usual when writing the dynamical equations, we begin with the states, weights and rates for the regulatory architecture of interest. Fig. 20(B) provides the states, weights and rates for the coherent feed-forward loop architecture, where we assume one binding site per transcription factor. In light of the states and weights, we can write the time-evolution equations for the coherent feed-forward loop as

$$\frac{dY}{dt} = -\gamma Y + \frac{r_0^Y + r_1^Y\, p_{\rm act}^X(c_X)\, \frac{X}{K_{XY}}}{1 + p_{\rm act}^X(c_X)\, \frac{X}{K_{XY}}} \qquad (23)$$

for the regulation of Y by X and

$$\frac{dZ}{dt} = -\gamma Z + \frac{r_0^Z + r_1^Z \left( p_{\rm act}^X(c_X) \frac{X}{K_{XZ}} + p_{\rm act}^Y(c_Y) \frac{Y}{K_{YZ}} \right) + r_2^Z\, \omega\, p_{\rm act}^X(c_X) \frac{X}{K_{XZ}}\, p_{\rm act}^Y(c_Y) \frac{Y}{K_{YZ}}}{1 + p_{\rm act}^X(c_X) \frac{X}{K_{XZ}} + p_{\rm act}^Y(c_Y) \frac{Y}{K_{YZ}} + \omega\, p_{\rm act}^X(c_X) \frac{X}{K_{XZ}}\, p_{\rm act}^Y(c_Y) \frac{Y}{K_{YZ}}}, \qquad (24)$$

for the regulation of Z by both X and Y. We assume here that the two proteins Y and Z have the same degradation rate γ. The production rates and dissociation constants are assumed to be different in general for each thermodynamic state. Specifically, riG denotes the production rate of gene G when i transcription factors are bound. Further, KG1G2 denotes the dissociation constant of transcription factor G1 binding to gene G2. ω is the cooperativity, which takes into account the extra interaction energy between X and Y when bound to the DNA. Finally, the probabilities pactX(cX) and pactY(cY) scale the activity of transcription factors X and Y. Note also that we consider different effectors cX and cY for the two genes that can be varied independently. These functions may be distinct in principle, and the analytic discussions here make no assumption regarding their nature. For numerical results, however, we assume these probability functions, and thus effector activity functions, to be the same regardless of target transcription factor for simplicity.

We non-dimensionalize the equations by using 1/γ as our unit of time and KXY as our measure of concentration. In light of these conventions, we arrive at

$$\frac{d\bar{Y}}{d\bar{t}} = -\bar{Y} + \frac{\bar{r}_0^Y + \bar{r}_1^Y\, p_{\rm act}^X(c_X)\, \bar{X}}{1 + p_{\rm act}^X(c_X)\, \bar{X}}, \qquad (25)$$

and

$$\frac{d\bar{Z}}{d\bar{t}} = -\bar{Z} + \frac{\bar{r}_0^Z + \bar{r}_1^Z\,(\mathcal{X} + \mathcal{Y}) + \omega\, \bar{r}_2^Z\, \mathcal{X}\mathcal{Y}}{1 + \mathcal{X} + \mathcal{Y} + \omega\, \mathcal{X}\mathcal{Y}}. \qquad (26)$$

To make subsequent analysis less cumbersome, we have introduced 𝒳 = pactX(cX) X̄/K̄XZ and 𝒴 = pactY(cY) Ȳ/K̄YZ as simpler notation for the effective regulatory contributions of X and Y to Z expression. The bar indicates quantities where time is measured in units of 1/γ, and where concentrations and dissociation constants are measured in units of KXY. Specifically, we define dimensionless dissociation constants K̄XZ = KXZ/KXY and K̄YZ = KYZ/KXY. The rates are then in units of γKXY. Note that this model fixes the concentration of X for simplicity, such that its activity regulating Y and Z depends entirely on effector concentration cX.

A. Characterizing delay responses in coherent feed-forward loops

Previous work has shown that coherent feed-forward loops can delay a system’s response to an input signal [51]. Here, we demonstrate from our thermodynamic modeling perspective the analytic origins of this delay. In particular, we rigorously define how the introduction of an indirect but coherent path for regulation of output Z affects its response.

Suppose that we keep the effector concentration cY fixed, and that at time t=0 the effector concentration cX changes sharply from an initial concentration cXi to a final concentration cXf such that input X activity either increases (an “ON” step) or decreases (an “OFF” step). This then means that

$$\mathcal{X}(t) = \begin{cases} \mathcal{X}_i = p_{\rm act}^X(c_X^i)\, \bar{X}/\bar{K}_{XZ} & \text{if } t \le 0, \\ \mathcal{X}_f = p_{\rm act}^X(c_X^f)\, \bar{X}/\bar{K}_{XZ} & \text{if } t > 0. \end{cases} \qquad (27)$$

In the coherent feed-forward loop, since all the regulatory relations are activation, both Y and Z increase in response to an ON step, and decrease in response to an OFF step.

To understand intuitively how exactly the feed-forward loop regulatory structure responds to such a switch in input signal, let us first consider a simpler scheme in which X no longer regulates Y, leaving X and Y to regulate Z independently with Ȳ(t) fixed at a constant value. Since we also keep the effector concentration cY fixed, 𝒴 is then constant as well. We refer to this setting as “simple regulation.”

Before the switch in cX, the simple regulation system is at steady state, with initial output expression Zsi where subscript s denotes simple regulation. After the switch, the system relaxes to a new steady state with final expression Zsf. For t>0, the production term in Eqn. 26 is simply a constant, yielding the following differential equation for Z.

$$\frac{d\bar{Z}}{d\bar{t}} = -\bar{Z} + \frac{\bar{r}_0^Z + \bar{r}_1^Z\,(\mathcal{X}_f + \mathcal{Y}) + \omega\, \bar{r}_2^Z\, \mathcal{X}_f \mathcal{Y}}{1 + \mathcal{X}_f + \mathcal{Y} + \omega\, \mathcal{X}_f \mathcal{Y}} \equiv -\bar{Z} + \bar{Z}_s^f. \qquad (28)$$

Integrating Eqn. 28, we thus determine that under simple regulation, output Z evolves after the switch in input X signal by a standard exponential behavior defined as

$$\bar{Z}_s(\bar{t}) = \bar{Z}_s^i\, e^{-\bar{t}} + \bar{Z}_s^f \left(1 - e^{-\bar{t}}\right). \qquad (29)$$
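As a quick numerical sanity check (with arbitrary, illustrative values for the two steady states), forward-Euler integration of Eqn. 28 reproduces the exponential solution of Eqn. 29:

```python
import math

Zs_i, Zs_f = 0.3, 1.7   # illustrative initial and final steady states

# Integrate dZ/dt = -Z + Zs_f (Eqn. 28, constant production term) and
# track the worst deviation from the closed-form solution of Eqn. 29
h, t, Z, err = 1e-4, 0.0, Zs_i, 0.0
while t < 8.0:
    Z += h * (-Z + Zs_f)
    t += h
    exact = Zs_i * math.exp(-t) + Zs_f * (1.0 - math.exp(-t))
    err = max(err, abs(Z - exact))
print(err)
```

The deviation stays at the level of the integrator's step-size error, confirming the standard exponential relaxation.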

By contrast, in the coherent feed-forward loop, the concentration of Y directly depends on X as seen in Eqn. 25. As a result, additional time is needed for Y to evolve from its initial steady-state value Yi to a new value Yf following a change in X. Expressed mathematically, since Y itself is a function of time, the solution of Z to Eqn. 26 is not strictly an exponential relaxation to steady state. More formally, output expression for the coherent feed-forward loop evolves by a function of the form

$$\bar{Z}(\bar{t}) = \bar{Z}_i\, e^{-\bar{t}} + \bar{Z}_f \left(1 - e^{-\bar{t}}\right) + \Theta(\bar{t}), \qquad (30)$$

where Zi is the initial steady state in the feed-forward setting before the change in cX, and Zf is the final steady state after the change that Z(t) relaxes to eventually. We derive Eqn. 30 explicitly from Eqns. 25 and 26 in Appendix K. Note that the sum of the first two terms in Eqn. 30 describes behavior of the same form as simple regulation in Eqn. 29. Therefore, by rescaling Eqn. 29 we can treat the exponential portion of Eqn. 30 as an equivalent Zsimple with the same relaxation dynamics as observed for simple regulation. We can also then express output response for the feed-forward loop as

$$\bar{Z}(\bar{t}) = \bar{Z}_{\rm simple}(\bar{t}) + \Theta(\bar{t}). \qquad (31)$$

We thus observe that the feed-forward loop’s output response differs from behavior in simple regulation by a function Θ(t). Analytically solving Eqns. 25 and 26 results in

$$\Theta(\bar{t}) = -\frac{\Phi\, \Delta\mathcal{Y}}{S^2}\, e^{-\bar{t}}\, \log\!\left[\frac{S\, e^{\bar{t}} - \Delta\mathcal{Y}\,(1 + \omega\mathcal{X}_f)}{S - \Delta\mathcal{Y}\,(1 + \omega\mathcal{X}_f)}\right]. \qquad (32)$$

Here, Θ(t) depends on the three quantities Δ𝒴, Φ, and S. First, the quantity

$$\Delta\mathcal{Y} = \mathcal{Y}_f - \mathcal{Y}_i = \frac{p_{\rm act}^Y(c_Y)}{\bar{K}_{YZ}} \left(\bar{Y}_f - \bar{Y}_i\right) \qquad (33)$$

denotes the total change in quantity 𝒴 in response to the change in input effector concentration cX. Δ𝒴 thus contains implicit information about regulation of Y by X from its evolution as defined in Eqn. 25. Θ(t) also depends on the coefficient

$$\Phi = \omega\mathcal{X}_f^2 \left(\bar{r}_2^Z - \bar{r}_1^Z\right) + \omega\mathcal{X}_f \left(\bar{r}_2^Z - \bar{r}_0^Z\right) + \left(\bar{r}_1^Z - \bar{r}_0^Z\right), \qquad (34)$$

which encodes how the rates and cooperativity regulate output Z expression as a function of input signal 𝒳f. Interestingly, we note that all types of feed-forward loops have the same solution as Eqn. 32, except with a potentially different Φ. Appendix K4 discusses this in more detail. Finally, the quantity

$$S = 1 + \mathcal{X}_f + \mathcal{Y}_f + \omega\, \mathcal{X}_f \mathcal{Y}_f \qquad (35)$$

is the sum of the dimensionless weights for all possible regulatory states with zero, one, or two transcription factors bound, at final concentrations of active X and Y.
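With Δ𝒴, Φ, and S in hand, Eqn. 30 can be verified numerically. The sketch below integrates Eqn. 26 at fixed 𝒳f with 𝒴(t) relaxing exponentially between its steady states (as implied by Eqn. 25), and compares the trajectory against the closed form Z̄ie^(−t̄) + Z̄f(1−e^(−t̄)) + Θ(t̄); all parameter values are arbitrary illustrations, not taken from any figure.

```python
import math

# Hypothetical dimensionless parameters, for illustration only
r0, r1, r2 = 0.0, 2.0, 10.0      # production rates of Z
omega, Xf  = 1.0, 3.0            # cooperativity and post-step input
Yi, Yf     = 0.2, 2.5            # initial and final values of script-Y
Zi         = 0.5                 # Z at t = 0 (Theta is independent of it)

def g(Y):
    """Production term of Eqn. 26 at fixed post-step input X_f."""
    return (r0 + r1*(Xf + Y) + omega*r2*Xf*Y) / (1 + Xf + Y + omega*Xf*Y)

Ycal = lambda t: Yf + (Yi - Yf)*math.exp(-t)   # exponential relaxation of Y

dY  = Yf - Yi                                                  # Eqn. 33
Phi = omega*Xf**2*(r2 - r1) + omega*Xf*(r2 - r0) + (r1 - r0)   # Eqn. 34
S   = 1 + Xf + Yf + omega*Xf*Yf                                # Eqn. 35
c   = dY*(1 + omega*Xf)
Zf  = g(Yf)                                                    # final steady state

def theta(t):
    """Theta(t) of Eqn. 32."""
    return -(Phi*dY/S**2)*math.exp(-t)*math.log((S*math.exp(t) - c)/(S - c))

def f(t, Z):                      # Eqn. 26 with time-dependent Y(t)
    return -Z + g(Ycal(t))

# RK4 integration versus the closed-form solution of Eqn. 30
h, t, Z, err = 1e-3, 0.0, Zi, 0.0
while t < 10.0:
    k1 = f(t, Z); k2 = f(t + h/2, Z + h*k1/2)
    k3 = f(t + h/2, Z + h*k2/2); k4 = f(t + h, Z + h*k3)
    Z += h*(k1 + 2*k2 + 2*k3 + k4)/6
    t += h
    closed = Zi*math.exp(-t) + Zf*(1 - math.exp(-t)) + theta(t)
    err = max(err, abs(Z - closed))
print(err)
```

The two agree to integrator accuracy, and Θ(0)=0, so the trajectory starts exactly at Z̄i as required.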

Fig. 21(A) highlights the coherent feed-forward loop’s delayed response to changes in input signal for a specific set of parameters. In our simulations, we set the rates r0G to zero, indicating that this system requires activator(s) to be bound to express output Z. In the top plot of Fig. 21(A), the effector concentration cY is fixed such that pactYcY is always at high activity, and the effector concentration cX jumps such that pactXcX reaches high activity as a step function (ON step), and then back to low activity (OFF step). The bottom plot of Fig. 21(A) shows how the output Z evolves over time in response to a step function input X activity, evolving and stabilizing to a high Z value before the OFF step in pactXcX causes the output concentration to decay back down to steady state value zero. We observe that, compared to simple regulation, the change in output concentration Z is slower when responding to both an increase and a decrease in input activity, matching our expectation.

Figure 21:

Figure 21:

Delay in coherent feed-forward loop response compared to simple regulation. (A) The input signal is applied by tuning cX as a step function: from high (10⁻⁴ M) to low (10⁻⁷ M) at t=6, and back to high at t=18. The effector concentration cY is held constant at a low value (10⁻⁷ M) throughout. These inputs determine the activation probabilities pactX and pactY, shown in red and purple, respectively. The second panel plots the time evolution of the dimensionless output concentration Z(t) under feed-forward and simple regulation schemes, with r0Y=r0Z=0, r1Y=r1Z=2, r2Z=10, KXZ=KYZ=1, and ω=1. (B) Schematic demonstrating the two ways to equivalently quantify the delayed response of the feed-forward loop compared to simple regulation, captured by the shaded area between the two curves. One can either integrate over individual time delay measurements Δt(Z) as a function of Z, or equivalently integrate the difference in responses Θ(t) as a function of t.

Fig. 21(B) visualizes how to quantify this delay in the time it takes the feed-forward system to reach a given output concentration as it responds to an input pulse. The diagram on the left illustrates the response to the ON step, in which the output starts to increase from Zi to Zf at (dimensionless) time t=0. If we choose a value of Z in this time frame, we observe that it takes longer to reach this value on the way to steady state Zf in the feed-forward loop setting than in simple regulation. We highlight one such horizontal distance between the two curves as the time delay Δt(Z). Explicitly, we define Δt(Z) to be the difference between the time it takes for simple regulation to reach a given Z and that for the feed-forward loop. Δt(Z)<0 signifies a delay and Δt(Z)>0 signifies acceleration. We can then compute the average time difference observed between the two curves by integrating Δt(Z) over the range of output Z, and normalizing by this range. Therefore, for a given step function change in input X activity, the average delay during the system’s evolution toward its new final steady state is

$$\langle \Delta t \rangle = \frac{1}{\left|\bar{Z}_f - \bar{Z}_i\right|} \int_{\bar{Z}_i}^{\bar{Z}_f} \Delta t(\bar{Z})\, d\bar{Z}. \qquad (36)$$

Note, however, that it is not straightforward to derive an expression for Δt(Z). Instead, since this integral geometrically captures the area between the two curves, we can equivalently evaluate this area as shown in the second diagram of Fig. 21(B) by integrating vertical slices through this shaded region. At a given time t, the vertical dotted line corresponds to the difference in output response between the two curves, defined by the function Θ(t) previously derived in Eqn. 32.

From this description, we can then also derive the average time delay from the offset Θ(t), and thus arrive at the equivalent definition

$$\langle \Delta t \rangle = \frac{1}{\bar{Z}_f - \bar{Z}_i} \int_0^{\infty} \Theta(\bar{t})\, d\bar{t}. \qquad (37)$$

Notice that the absolute value on Zf-Zi is dropped to match the sign of Δt in Eqn. 36. Substituting Θ(t) from Eqn. 32 and evaluating the integral, the average time delay becomes

$$\langle \Delta t \rangle = \frac{\Phi}{\bar{Z}_f - \bar{Z}_i}\, \frac{1}{S \left(1 + \omega\mathcal{X}_f\right)}\, \log\!\left[\frac{1 + \mathcal{X}_f + \mathcal{Y}_i + \omega\, \mathcal{X}_f \mathcal{Y}_i}{S}\right]. \qquad (38)$$

This result highlights that Δt depends on both the initial state and the final state of 𝒳 and 𝒴. In fact, it is this dual dependence that causes the delays in response to ON and OFF steps to differ in Fig. 21(A). Switching from ON to OFF and vice versa simply swaps the initial and final expression states of 𝒳 and 𝒴. Applying the transformations 𝒳i ↔ 𝒳f and 𝒴i ↔ 𝒴f in Eqn. 38 shows that the magnitude of the average delay Δt is in general not conserved under the exchange of initial and final states. It is therefore this asymmetric dependence on initial and final states that directly leads to differences in output responses to ON and OFF steps in the coherent feed-forward loop.

From Eqn. 38, we can analytically deduce whether a feed-forward loop delays or accelerates output response from the sign of Δt. In Appendix K2, we prove that in general Δt ≤ 0 for the coherent feed-forward loop, leading to delay for both the ON and OFF steps as seen in the example of Fig. 21(A).
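The equivalence of Eqns. 37 and 38 can be checked by direct quadrature of Θ(t̄): numerically integrating Eqn. 32 and comparing against the closed form of Eqn. 38 multiplied by (Z̄f − Z̄i). Parameter values below are hypothetical illustrations.

```python
import math

# Hypothetical illustrative parameters (not taken from any figure)
r0, r1, r2 = 0.0, 2.0, 10.0
omega, Xf  = 1.0, 3.0
Yi, Yf     = 0.2, 2.5

dY  = Yf - Yi                                                  # Eqn. 33
Phi = omega*Xf**2*(r2 - r1) + omega*Xf*(r2 - r0) + (r1 - r0)   # Eqn. 34
S   = 1 + Xf + Yf + omega*Xf*Yf                                # Eqn. 35
c   = dY*(1 + omega*Xf)

def theta(t):
    """Theta(t) of Eqn. 32."""
    return -(Phi*dY/S**2)*math.exp(-t)*math.log((S*math.exp(t) - c)/(S - c))

# Trapezoidal quadrature of Eqn. 37's integral; the tail beyond t = 40
# is exponentially small and can be neglected
h, N = 1e-3, 40000
integral = h*(theta(0.0)/2 + sum(theta(i*h) for i in range(1, N)) + theta(N*h)/2)

# Closed form: Eqn. 38 times (Zf - Zi), i.e. the bare integral of Theta
closed = (Phi/(S*(1 + omega*Xf)))*math.log((1 + Xf + Yi + omega*Xf*Yi)/S)
print(integral, closed)
```

For this ON step (Δ𝒴 > 0) the integral comes out negative, so that with Z̄f > Z̄i the average Δt is negative, i.e. a delay, consistent with the result proven in Appendix K2.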

B. Robustness of delayed response for different coherent logic gates

While Fig. 21(A) demonstrates delays in both the ON and OFF step of the feed-forward loop for an arbitrary choice of the dimensionless rates, dissociation constants, and cooperativity, distinct choices for this set of parameters lead to different magnitudes of delay. Returning to Fig. 20, the thermodynamic states and weights listed here are defined generally such that all of the states can contribute to transcription factor expression, and this is reflected in the example feed-forward loop shown in Fig. 21(A). However, certain alternative choices for parameters can carry physical significance because they restrict the regulatory states allowing expression to only a subset of those depicted in Fig. 20. How would the feed-forward loop’s behavior differ, for example, if expression could only be enhanced when both activators are bound? We use the framework of logic gates to define such unique categories for regulatory conditions.

Specifically, we highlight three commonly-encountered logic gates—the AND, XOR, and OR gates. We will assume here that all gates can express output Z at a basal level, as defined by rate r0Z for the state. Each logic gate is then characterized by a different set of parameters for the states in which one or both activators can be bound, and these parameters determine whether a given state’s expression is enhanced or remains unaffected at the basal level.

In the AND gate, Z expression is enhanced only when both X and Y are bound. From the description in Fig. 20, this corresponds to systems in which cooperativity ω is nonzero and r2Z > r1Z = r0Z, such that X and Y have no activating effect on basal expression unless simultaneously bound. In the XOR gate, X and Y cannot be bound at the same time (ω=0), and single-activator bound states enhance Z expression (r1Z > r0Z). Finally, the OR gate allows enhanced expression when either X, Y, or both are bound, and broadly applies to systems for which r2Z, r1Z > r0Z. Note that the expression rates when one or both transcription factors bind can differ.
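These gate definitions can be made concrete with the production term of Eqn. 26. The sketch below (rate and cooperativity values are arbitrary choices satisfying each gate's stated constraints, not values from the text) evaluates that term with neither, one, or both activators effectively bound:

```python
def z_production(X, Y, r0, r1, r2, omega):
    """Production term of Eqn. 26; X and Y here are the effective
    dimensionless activator contributions (script-X, script-Y)."""
    return (r0 + r1*(X + Y) + omega*r2*X*Y) / (1.0 + X + Y + omega*X*Y)

# Illustrative parameter sets satisfying each gate's constraints
gates = {
    "AND": dict(r0=0.1, r1=0.1, r2=10.0, omega=10.0),  # r2 > r1 = r0, omega > 0
    "XOR": dict(r0=0.1, r1=10.0, r2=0.0, omega=0.0),   # omega = 0, r1 > r0
    "OR":  dict(r0=0.1, r1=5.0,  r2=10.0, omega=1.0),  # r2 >= r1 > r0
}

lo, hi = 0.0, 50.0   # effectively unbound vs. strongly bound
for name, p in gates.items():
    print(name,
          round(z_production(lo, lo, **p), 2),   # neither bound
          round(z_production(hi, lo, **p), 2),   # only X
          round(z_production(hi, hi, **p), 2))   # both bound
```

Only the AND gate stays at basal output when a single activator is present; the XOR and OR gates already respond to one activator alone.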

In previous work, XOR and AND gates have been reported to exhibit sign-sensitive delays in response to a signal change in cX: the XOR gate feed-forward loop delays the OFF step but not the ON step, while the AND gate delays the ON step but not the OFF step [51]. While these descriptions hold for certain parameter choices, it remains unclear how robust these patterns are across parameter space as the conditions governing regulatory interactions change. We will now examine the conditions under which the feed-forward loop response delays both ON and OFF steps, delays only one, or delays neither as we compare the different types of logic gates.

To evaluate the robustness of time delays across different types of logic gates, we sweep across parameter space and find regions with high average delay, Δt. For now, we choose to fix the rate and cooperativity parameters and sweep across the two-dimensional space (KXZ, KYZ). The motivation is to find suitable dissociation constants for a given logic gate, since the production rates and cooperativity must already satisfy that gate's requirements. For example, in the XOR gate, ω=0 is fixed and is not a tunable parameter.

In Fig. 22, we show a parameter sweep for three sets of rates and cooperativity parameters that correspond to three different logic gates. For each set of parameters, both ON and OFF steps are studied. These calculations inspire several observations. First, for a given step undergone by pactX(cX), we can computationally identify a maximum Δt with respect to all other parameters. Further, this maximum is different for ON and OFF steps. For example, in Fig. 22(i), specific to the input step chosen here, we observe that in all logic gates the maximal Δt achievable is about 1 for the ON step and 4.5 for the OFF step. To tie this back to units of time, we consider E. coli. Here, proteins tend to be stable over the timescale of a cell cycle and hence the dilution resulting from cell division becomes the effective degradation rate. Taking a rate of degradation of order γ = 10⁻² min⁻¹, we then get a maximum delay of 500 min, which is more than 8 hours! The asymmetry between the maximal Δt for the ON and OFF steps resonates with the analytic discussion in the previous section. We show computational evidence in Appendix L that sweeps across all other parameters indicate the existence of a global maximum Δt. Note that the exact value and existence of this maximum is a result specific to the function pact(cX) and the allosteric parameters we chose to describe effector activity in Fig. 5.

Figure 22:

Figure 22:

Magnitude of average time delay observed across parameter space in the ON and OFF steps of different coherent feed-forward logic gates. Each colorplot shows Δt as a function of KXZ and KYZ. ON (row (i)) and OFF (row (ii)) steps are defined by the same cX step function as in Fig. 21. Each panel represents a different logic gate — (A) the XOR gate, (B) the AND gate, and (C) the OR gate. For each gate, we select a set of (KXZ, KYZ) that exhibits unexpected behaviors, and in row (iii) plot the corresponding feed-forward and simple regulation trajectories observed from numerical integration. The cX and cY signals for these trajectories are the same as in Fig. 21. The XOR gate parameters are r0Y=r0Z=r1Z=0, r1Y=r2Z=2, and ω=10. The AND gate parameters are r0Y=r0Z=0, r1Y=r1Z=2, and ω=10. The OR gate parameters are r0Y=r0Z=0, r1Y=r1Z=2, r2Z=10, and ω=1, which are the same as in Fig. 21.

Second, both the XOR and AND gates exhibit the expected sign-sensitive delay across a substantial portion of the (KXZ,KYZ) parameter space, though not uniformly. In particular, Fig. 22(A)(i,ii) shows that for the XOR gate, the lower half of this space yields negligible ON-step delay but a pronounced OFF-step delay. Conversely, Fig. 22(B)(i,ii) reveals that for the AND gate, the upper half of the space yields negligible OFF-step delay and a pronounced ON-step delay. Nevertheless, in specific regions of the (KXZ,KYZ) parameter space—namely, the upper left quadrant for the XOR gate and the lower right quadrant for the AND gate—neither the ON nor the OFF step exhibits any appreciable delay. More intriguingly, certain extreme choices of dissociation constants contradict the expected behavior: the XOR gate can show delay on both ON and OFF steps, and the AND gate can delay only the OFF step. Example trajectories corresponding to these atypical regimes are shown in Fig. 22(A,B)(iii).

Finally, the OR gate can produce sizable delays, but only for more extreme values of the dissociation constants. Near the region where KXZ ≈ KYZ ≈ 1—which corresponds to comparable binding strengths of X and Y to their DNA targets—the delays for both ON and OFF steps are minimal. This suggests that in some biologically relevant regimes, where dissociation constants are typically of the same order of magnitude, the OR gate is the least effective at generating a temporal delay.

We observe that the average delay Δt in coherent feed-forward loops depends strongly on the dissociation constant KYZ, which sets the binding affinity of Y to the promoter of Z—one of the interactions that distinguishes feed-forward loops from simple regulation. A clear trend emerges: ON steps (Fig. 22(A–C)(i)) exhibit stronger delays when Y binding is weak (large KYZ), whereas OFF steps (Fig. 22(A–C)(ii)) show stronger delays when Y binding is strong (small KYZ).

Since the feed-forward loop ultimately aims to control Z activation, another relevant feature is the output amplitude ΔZ=Zf-Zi for a given change in input cX. A small ΔZ would imply that Z remains nearly constant, making the response uninformative. However, we show in Appendix L that ΔZ scales linearly with the production rates. Although the dependence of ΔZ on other parameters is non-trivial, globally the effect of the rates dominates. We thus reserve a detailed discussion of the optimization of ΔZ values for Appendix L and remain focused here on the average delay Δt.

C. Existence of pulse in incoherent feed-forward loop

We now turn our attention to the incoherent feed-forward loop depicted in Fig. 23. This architecture is a commonly observed motif in E. coli, where X activates both Y and Z, while Y represses Z [74, 75]. This motif gives rise to a qualitatively distinct dynamical behavior from the coherent feed-forward loop: a pulse in the output Z. A pulse is conventionally defined as a transient trajectory of Z(t) such that there exists a time t where Z(t)>Zf if the response of Z(t) is increasing, or Z(t)<Zf if the response of Z(t) is decreasing. In incoherent feed-forward loops, an ON step signal where pactX(cX) increases does not necessarily produce an increasing response in Z(t), as activation induced by increasing X competes with repression induced by increasing Y. This is precisely the incoherence in the name of such circuits.

Figure 23:


The incoherent feed-forward loop. (A) Schematic representation of the incoherent feed-forward loop. Expression of output protein Z is controlled by expression of protein X, either by direct activation or indirectly, first activating expression of Y which then represses Z. The regulatory circuit is incoherent because the pathways have opposing effects on Z. (B) Thermodynamic states, weights, and rates for expression of repressor Y and output protein Z. X and Y interact with cooperativity ω, but bound repressor suppresses expression regardless of activator presence.

Referring to the states, weights, and rates shown in Fig. 23, the non-dimensional dynamics of the system are described by

$$\frac{d\bar{Y}}{d\bar{t}} = -\bar{Y} + \bar{r}_0^{\,Y} + \bar{r}_1^{\,Y}\,\frac{p_{\mathrm{act}}^{X}\bar{X}}{1+p_{\mathrm{act}}^{X}\bar{X}} \qquad (39)$$

and

$$\frac{d\bar{Z}}{d\bar{t}} = -\bar{Z} + \bar{r}_0^{\,Z} + \bar{r}_1^{\,Z}\,\frac{\mathcal{X}}{1+\mathcal{X}+\mathcal{Y}+\omega\,\mathcal{X}\mathcal{Y}}, \qquad (40)$$

with $\mathcal{X} = p_{\mathrm{act}}^{X}(c_X)\,\bar{X}/\bar{K}_{XZ}$ and $\mathcal{Y} = p_{\mathrm{act}}^{Y}(c_Y)\,\bar{Y}/\bar{K}_{YZ}$. Again, the bar indicates time in units of $1/\gamma$, concentrations and dissociation constants in units of $K_{XY}$, and rates in units of $\gamma K_{XY}$.

Intuitively, pulses emerge because of the delay in the repression exerted by Y. Following an ON step in cX, Y increases exponentially to its final value. In the early phase of the response, X already activates Z strongly, but Y has not yet accumulated enough to exert repression. As a result, Z temporarily overshoots its final steady state. This behavior is depicted in Fig. 24(D), where the blue curve exhibits a pronounced pulse, in contrast to the monotonic exponential relaxation of the simple regulation output (orange).
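This overshoot can be checked directly by integrating Eqns. 39 and 40. The Python sketch below uses the parameter values quoted in the caption of Fig. 24 for panel (D); holding X and both pact factors at 1 after the ON step is a simplifying assumption made here for illustration.

```python
# Parameters from the Fig. 24 caption, panel (D): basal rates zero,
# r1 = 2, omega = 0, (K_XZ, K_YZ) = (1, 0.1). Pinning X and the p_act
# factors to 1 after the ON step is an illustrative simplification.
r0Y, r1Y, r0Z, r1Z = 0.0, 2.0, 0.0, 2.0
K_XZ, K_YZ, omega = 1.0, 0.1, 0.0
X = pactX = pactY = 1.0

dt, T = 1e-3, 20.0
y = z = 0.0
z_traj = []
for _ in range(int(T / dt)):
    # Eqn. 39: Y is activated by X alone
    dy = -y + r0Y + r1Y * pactX * X / (1.0 + pactX * X)
    # Eqn. 40: Z is activated by X and repressed by Y
    sX = pactX * X / K_XZ   # script-X
    sY = pactY * y / K_YZ   # script-Y
    dz = -z + r0Z + r1Z * sX / (1.0 + sX + sY + omega * sX * sY)
    y += dt * dy
    z += dt * dz
    z_traj.append(z)

z_final, z_max = z_traj[-1], max(z_traj)
print(z_max > z_final)  # → True: Z transiently overshoots its steady state
```

With these values the final steady state is Zf = 2/(2 + 10) = 1/6, while the trajectory peaks well above it before Y accumulates, reproducing the pulse of Fig. 24(D).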

Figure 24:


Existence of a pulse and acceleration in the incoherent feed-forward loop. Parameters used here are r0Y=r0Z=0, r1Y=r1Z=2, and ω=0. (A) Average acceleration |Δt| during the ON step across the phase space (KYZ, KXZ), computed only for parameter sets where no strong pulse is observed. For the parameters we chose, the ON step in pactX(cX) coincides with Z(t) having an increasing response. (B) Pulse amplitude Zmax−Zf across the same phase space, quantifying the transient overshoot above steady state. (C) Example trajectories corresponding to (KXZ, KYZ) = (1, 2). No strong pulse is observed, but the feed-forward loop response is accelerated compared to the simple regulation response. The green curve is the trajectory with the largest Δt without a pulse. The blue and orange curves are the feed-forward loop and simple regulation trajectories, respectively. (D) Example trajectories corresponding to (KXZ, KYZ) = (1, 0.1). A strong pulse is observed.

The accelerating nature of the incoherent feed-forward loop can also be captured analytically by repeating the calculation leading to Eqn. 32 but adapted to this new setting. The modified prefactor

$$\Phi = -\left(\bar{r}_0^{\,Z} + \bar{r}_1^{\,Z}\,\frac{\mathcal{X}_f}{1+\omega\,\mathcal{X}_f}\right) \qquad (41)$$

is always negative. As a result, the average time difference Δt as defined in Eqn. 37 is always positive, both for ON and OFF steps, meaning that the incoherent feed-forward loop accelerates the response for both transitions. In Appendix M, we remark that the definition and interpretation of Δt is subtle when Z exhibits a pulse. We therefore only consider Δt when no pulse exists and, for pulsed trajectories, focus instead on the difference between the peak of the pulse and the final steady state of Z.

To further quantify this acceleration, we again compute the average time difference Δt between the feed-forward loop and simple regulation trajectories with Eqn. 37. Fig. 24(A) shows Δt across the phase space defined by (KXZ, KYZ) when the output does not present a strong pulse. The definition of “strong” is described in Appendix M. We observe that acceleration is limited to a maximum value of 1. This upper bound arises because the most accelerated trajectory would consist of an instantaneous jump to the final steady state (green curves in Fig. 24(C) and (D)), corresponding to Δt=1 as shown in Appendix M. An example of a feed-forward trajectory that does not exhibit a strong pulse is shown in Fig. 24(C). The region of highest acceleration lies near the boundary separating the strongly pulsed and not strongly pulsed regimes. Interestingly, acceleration is only substantial in a restricted region of parameter space. For example, we see in Fig. 24(A) that high values of KYZ—corresponding to weak binding of Y—lead to negligible acceleration.

In addition to acceleration, the presence and magnitude of a pulse are another hallmark of this network. In Fig. 24(B), we map the pulse height, defined as the maximum deviation of Z(t) above its steady-state value, across the (KXZ, KYZ) space. Pulses are observed only in a portion of this space—specifically, for small enough values of KXZ, where X binds strongly. The highest pulses occur when both KXZ and KYZ are small, indicating that strong binding of both regulators enhances the transient overshoot in this case.
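As a rough numerical counterpart to this map, the sketch below records the overshoot Zmax − Zf at two points of the (KXZ, KYZ) plane; setting X and the pact factors to 1 after the ON step is an illustrative assumption, as in the previous sketch.

```python
def pulse_height(K_XZ, K_YZ, r1Y=2.0, r1Z=2.0, omega=0.0, dt=1e-3, T=25.0):
    """Integrate Eqns. 39-40 after an ON step and return Zmax - Zf.
    Basal rates are zero, as in the Fig. 24 caption."""
    X = pactX = pactY = 1.0   # illustrative: full activation after the step
    y = z = z_max = 0.0
    for _ in range(int(T / dt)):
        dy = -y + r1Y * pactX * X / (1.0 + pactX * X)
        sX = pactX * X / K_XZ
        sY = pactY * y / K_YZ
        dz = -z + r1Z * sX / (1.0 + sX + sY + omega * sX * sY)
        y += dt * dy
        z += dt * dz
        z_max = max(z_max, z)
    return z_max - z   # z has essentially relaxed to Zf by time T

# Strong binding of both regulators (small K_XZ and K_YZ) gives a
# taller overshoot than weak X binding at the same K_YZ.
print(pulse_height(0.1, 0.1) > pulse_height(10.0, 0.1))  # → True
```

Scanning such calls over a grid of dissociation constants reproduces the qualitative structure of Fig. 24(B).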

D. Continuous signal

To conclude our discussion of feed-forward loops, we now consider the system’s response to a continuously tuned effector concentration rather than an abrupt step change. When the timescale of effector concentration variation, denoted tc, is much shorter than the system’s relaxation timescale, the dynamics resemble those observed under step-function inputs. Conversely, when tc is much longer than the relaxation time, the system remains quasi-stationary while the input evolves, effectively tracking the steady-state value of the output Z at each point in time.

To analyze this quantitatively, we compare tc to the intrinsic relaxation timescales of both the simple regulation case and the coherent feed-forward loop. In coherent feed-forward loops, we have shown that the output response is consistently delayed relative to simple regulation. However, the precise relaxation timescale of this architecture is not straightforward and depends strongly on the biochemical parameters. As a result, we use the relaxation timescale of the simple regulation system—which is 1 in units of 1/γ—as a reference estimate for the order of magnitude of the feed-forward loop’s relaxation time. We can thus define two limiting regimes: a fast tuning regime where tc ≪ 1, and a slow tuning regime where tc ≫ 1.

Before proceeding to the analysis of the delay, we must first clarify the definition of simple regulation in the case of a continuous signal. We adopt the same definition as in the step-function case, fixing the concentration Y=1. Further, in our previous analysis, when cX(t) is a step function, the choice of Y does not affect the relaxation dynamics of the simple regulation output. However, when cX(t) is a continuous function, the choice of Y changes these dynamics slightly. Fortunately, the effect of Y is small and does not qualitatively change our observations in the rest of the section, allowing us to continue with the comparison between the feed-forward loop and simple regulation (for a more detailed discussion, see Appendix N).

We numerically integrate the dynamical equations under a continuously changing cX(t) and illustrate the result in Fig. 25, where we analyze the response of a coherent feed-forward loop operating as an XOR gate. In this setting, we numerically define the timescale of effector concentration variation tc as the time it takes for pactX(cX) to change from 0.2 to 0.8 or vice versa. tc thus indicates the time it takes for the switch to be flipped on or off, as regulated by the effector concentration cX. When the effector concentration cX(t) is tuned rapidly, as shown in Fig. 25(A), with tc ≈ 0.24, we observe a large delay in the output Z(t) on the OFF step and almost no delay on the ON step, consistent with the step-function dynamics discussed previously. In contrast, when cX(t) varies slowly, as depicted in Fig. 25(C), with tc ≈ 5.18, both the feed-forward loop and the simple regulation trajectories become dominated by their respective steady states. They simply track the steady states dictated by cX(t). As a consequence, the responses to ON and OFF steps must be symmetric, as there is exactly one steady state corresponding to a given cX. Interestingly, the OFF delay is preserved, while for the ON step the feed-forward loop response is accelerated compared to simple regulation. The magnitude of the ON-step acceleration is similar to that of the OFF-step delay, as required by symmetry. We note that the magnitude of the acceleration/delay depends on the choice of Y in simple regulation.
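The numerical definition of tc can be made concrete with a short script. The MWC-style form of pact below stands in for Eqn. 4, and the constants KA, KI and e−βε are illustrative choices rather than the values used in Fig. 25; only the 0.2-to-0.8 crossing criterion is taken from the text.

```python
import numpy as np

# Assumed MWC-style activity curve standing in for Eqn. 4;
# K_A, K_I and exp_mbe = e^{-beta*eps} are illustrative values.
K_A, K_I, exp_mbe = 1e-5, 1e-7, 1e-2   # K_c = K_A / K_I = 100 > 1

def p_act(c):
    active = (1.0 + c / K_A) ** 2
    inactive = exp_mbe * (1.0 + c / K_I) ** 2
    return active / (active + inactive)

# ON step: ramp c_X log-linearly from 1e-4 M down to 1e-7 M, so that
# p_act rises (p_act decreases with c when K_c > 1).
T_ramp = 3.0
t = np.linspace(0.0, T_ramp, 10_000)
c_of_t = 10.0 ** (-4.0 - 3.0 * t / T_ramp)
p = p_act(c_of_t)

# t_c: time for p_act to pass from 0.2 to 0.8 (the text's definition)
t_c = t[np.argmax(p >= 0.8)] - t[np.argmax(p >= 0.2)]
print(round(t_c, 2))  # → 0.75
```

Stretching or compressing the ramp duration T_ramp rescales tc proportionally, which is how the three regimes of Fig. 25 are generated.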

Figure 25:


Feed-forward loop response to the rate of continuous tuning of effector concentration. From left to right, the rate of tuning the effector concentration slows down while every other parameter is kept constant. Each ON step ramps cX log-linearly from cX = 10−4 M to cX = 10−7 M over some time window; an OFF step is the reverse. From left to right, the timescales of effector concentration variation are tc = 0.24, tc = 1.65, and tc = 5.18. tc, as defined in the main text, is the time it takes for pactX(cX(t)) to increase from 0.2 to 0.8. Y is set to 1 at all times for simple regulation. Parameters used are the XOR gate parameters in Fig. 22.

Nevertheless, for all choices of Y, the feed-forward loop accelerates the ON step and delays the OFF step when the concentration of effector cX is slowly tuned. Between the two limits, the feed-forward loop response smoothly transitions as the tuning rate of cX decreases. An example in this regime is shown in Fig. 25(B), with tc1.65, where we begin to observe the ON step acceleration, despite its magnitude being smaller than in Fig. 25(C). These results demonstrate that the dynamics of feed-forward loops under continuously varying effector concentrations are governed by the timescale of input change: rapid tuning reproduces the asymmetric ON- and OFF-step delays seen with step inputs, whereas slow tuning yields symmetric responses where ON-step acceleration balances OFF-step delay.

V. DISCUSSION

The history of modern molecular biology has been a dazzlingly successful exploration of the way in which genes dictate the function and dynamics of the cells making up organisms of all kinds. One of the greatest success stories of that history is the development of our understanding of how genes are connected together in genetic circuits [14], giving rise to an array of stereotyped regulatory motifs such as switches, oscillators, double negative networks and feed-forward networks, to name but a few examples [74]. We note that despite all of this progress, there remain gaping holes in our knowledge of how most genes are regulated. Even in our best understood organisms such as E. coli, we lack any knowledge of how more than 60% of its genes are regulated [76]. As a result, we expect that with the advent of the high-throughput era in biology, ever more genetic circuits like those we discussed here will be discovered.

In addition to our ignorance of the genetic circuits themselves, our understanding of the proteins that mediate those circuits is also very limited. In particular, we often don’t know how effector molecules alter the activity of the transcription factors that control these genes [77]. The key point here is that the action of proteins such as transcription factors is often altered through the binding of effector molecules, which induce allosteric conformational changes that in turn change the state of activity of those transcription factors. However, for many genes, we still remain ignorant of which effector molecules effect those changes. The central thesis of this paper is that, in fact, most gene circuits have their activity tuned by precisely these kinds of effector molecules and as a result, we need to revisit the theoretical analysis of such circuits to account for the effect of allosteric induction.

In parallel with the impressive progress in molecular biology and the dissection of the rules of regulation, huge progress was made in the study of the behavior of dynamical systems in contexts of all kinds [68], and with special reference to genetic circuits themselves [42]. However, when theorists have used dynamical systems frameworks to explore the behavior of such circuits, they have largely adopted an approach in which those circuits are tuned in abstract terms using model parameters such as degradation rates γ, mRNA production rates r and binding constants Kd. As was so importantly discovered in the 1960s, typically these parameters are in fact “tuned” in the context of living cells through the action of allosteric transitions of transcription factors between active and inactive conformations as a result of the binding of effector molecules [33–40]. The history of dynamical systems in these problems largely leaves the all-important effector molecules out of this story, only considering them implicitly. In this paper we have undertaken a systematic analysis of the role of such effectors in governing the function and dynamics of a variety of fundamental genetic regulatory motifs.

The overarching theme of the work described here is that whereas typical dynamical systems approaches to genetic networks feature the number of transcription factors such as A(t) for activator concentrations and R(t) for repressor concentrations, the variable that the cell actually “cares about” is the active number of activators and repressors. There are a variety of well-defined statistical mechanical approaches that allow us to compute this active fraction by multiplying the total number of transcription factors by the function pact(c) as dictated by the Monod-Wyman-Changeux model, for example, and highlighted in Eqn. 4. The power of this approach is that now the parameters governing properties such as bistability in genetic circuits will be tuned by experimentally and biologically accessible parameters such as the concentrations of effector molecules.

Throughout the paper, we have shown how the tuning variable of effector concentration makes it possible for the dynamical systems describing genetic circuits to range across their phase portraits. We began with perhaps the simplest of such circuits, the auto-activation motif, and showed how tuning effector concentration narrows the range of possible behaviors relative to those found in an unconstrained dynamical system perspective. We also availed ourselves of the opportunity to compare and contrast the conventional Hill function approach to transcription factor-DNA binding and the more mechanistically detailed thermodynamic models that we systematically explore throughout this work.

One interesting conclusion of this comparison between models based on the full set of states and weights demanded by thermodynamic models and the more phenomenological Hill function is that, for certain parameter regimes, each approach will display dramatically different circuit dynamics (i.e., monostability vs. bistability). This insight emphasizes the need to carefully dissect the quantitative parameters underlying the description of gene regulatory architectures in order to justify whether a Hill function description, which is a limiting case of the thermodynamic description, is warranted.

We also used the auto-activation motif as an opportunity for a careful analysis of the temporal relaxation of these genetic circuits to their terminal steady state. That analysis revealed that for initial conditions that are not “far” from the stable fixed points, the relaxation to steady state is exponential with a time constant dictated by the derivative of the nonlinear protein degradation/production function evaluated at the fixed point. For initial conditions that start near to the unstable fixed point, the dynamics are richer.

With the description of the induced auto-activation circuit in hand, we turned to the very important mutual repression motif which is ubiquitous in prokaryotes and eukaryotes alike. Here, again, the capacity to independently tune the effector concentration for each repressor revealed a large flexibility in how cells and synthetic biologists alike can decide to tune the dynamical behavior of this genetic circuit from dictating its steady state behavior to the dynamics of the repressors as they converge to those steady state values.

Finally, we undertook a dissection of the ubiquitous feed-forward loop. Our analysis shows that the dynamic behavior typically associated with feed-forward loops in response to input effector signals is more nuanced and parameter-dependent than previously appreciated [52]. For the coherent feed-forward loop, we analytically confirm the presence of delay in output response compared to simple regulation. However, we show that both the magnitude and the sign-sensitivity of these delays depend on system parameters such as dissociation constants, production rates and cooperativity. This rich range of qualitatively distinct behavior remains true even within the different categories of logic gates that can emerge from special combinations of these parameters. Conversely, incoherent feed-forward loops accelerate output responses compared to simple regulation and can generate transient pulses—again only in certain regions of parameter space.

Within this analysis of feed-forward loops, we demonstrate the crucial roles of effector concentration in our models. The leakiness of pact(c), for example, influences key metrics such as delay time. We also highlight how the sigmoidal shape of pact(c) enables a continuous change in effector concentration to be translated to a sharp signal when tuning the probability of transcription factors being active. Overall, while the dynamical behaviors of feed-forward loops can be rich, they are not always guaranteed. This emphasizes how behavior emerges from a delicate interplay of biochemical parameters rather than rigid circuit logic alone, and underscores the need for further experimental and theoretical efforts toward understanding the functions and dynamics of feed-forward loops.

All told, our efforts demonstrate that there is great flexibility inherent in the endogenous signaling modalities adopted by living cells to be contrasted with the way in which model parameters are artificially tuned in many dynamical systems approaches to these same problems. We are excited for experimental efforts to make a substantial push to solve the huge puzzle of the allosterome, opening the door to more realistic analyses of genetic circuits from a dynamical systems perspective.

ACKNOWLEDGMENTS

We have benefited enormously from conversations with Leonid Mirny, Ned Wingreen, Julie Theriot, Marc Kirschner, Jean Pierre Changeux, Jane Kondev, all of whom have helped us better understand the twin subjects brought together here, namely, genetic circuits and allosteric regulation of the macromolecules of the cell. We are grateful to the NIH for support through award numbers DP1OD000217 (Director’s Pioneer Award) and NIH MIRA 1R35 GM118043–01. This research is funded in part by the Gordon and Betty Moore Foundation GBMF12214 (doi.org/10.37807/GBMF12214) to RP and RJR as part of the grant on the “Listening to Molecules” project which has as its central mission to better understand the underlying mechanisms of allosteric regulation. RP is deeply grateful to the CZI Theory Institute Without Walls. HGG was supported by NIH R01 Awards R01GM139913 and R01GM152815, by the Koret-UC Berkeley-Tel Aviv University Initiative in Computational Biology and Bioinformatics, by a Winkler Scholar Faculty Award, and by the Chan Zuckerberg Initiative Grant CZIF2024–010479. H.G.G. is also a Chan Zuckerberg Biohub Investigator (Biohub – San Francisco).

Appendix A: Thermodynamic model equivalence to description with polymerase

In this section, we demonstrate the equivalence of the thermodynamic models used throughout the paper in which we essentially ignored the presence of RNA polymerase (RNAP), to those that explicitly incorporate regulatory interaction with the RNA polymerase. To illustrate the comparison between those that explicitly treat polymerase and those that do not, we begin by examining the auto-activation switch with which the paper opened.

1. Coarse-graining the auto-activation model

The statistical mechanical model for auto-activation depicted in Fig. 2 implicitly accounts for interaction between the activator and polymerase. Fig. 26 provides the complete accounting of the thermodynamic states, weights and rates, now explicitly including all of the possible polymerase-bound states and denoting the interaction energy between polymerase and activator as εap. In light of this complete set of states, weights and rates, we can write the dynamical equation for the number of activators as

$$\frac{dA}{dt} = -\gamma A + \frac{P}{K_P}\left[r_0 + 2r_1 e^{-\beta\varepsilon_{ap}}\,\frac{p_{\mathrm{act}}(c)A}{K_d} + r_2 e^{-2\beta\varepsilon_{ap}}\,\omega\left(\frac{p_{\mathrm{act}}(c)A}{K_d}\right)^{2}\right]\frac{1}{Z}, \qquad (A1)$$

where P is the number of copies of polymerase present and KP is the dissociation constant for P. Z is the partition function obtained by summing the statistical weights of all of the states in Fig. 26, which we define as

$$\begin{aligned}
Z &= \frac{P}{K_P}\left[1 + 2e^{-\beta\varepsilon_{ap}}\,\frac{p_{\mathrm{act}}(c)A}{K_d} + e^{-2\beta\varepsilon_{ap}}\,\omega\left(\frac{p_{\mathrm{act}}(c)A}{K_d}\right)^{2}\right] + 1 + 2\,\frac{p_{\mathrm{act}}(c)A}{K_d} + \omega\left(\frac{p_{\mathrm{act}}(c)A}{K_d}\right)^{2} \\[4pt]
&= 1 + \frac{P}{K_P} + 2\,\frac{p_{\mathrm{act}}(c)A}{K_d}\left(1+\frac{P}{K_P}e^{-\beta\varepsilon_{ap}}\right) + \omega\left(\frac{p_{\mathrm{act}}(c)A}{K_d}\right)^{2}\left(1+\frac{P}{K_P}e^{-2\beta\varepsilon_{ap}}\right) \\[4pt]
&= \left(1+\frac{P}{K_P}\right)\left[1 + 2\,\frac{p_{\mathrm{act}}(c)A}{K_d}\,\frac{1+\frac{P}{K_P}e^{-\beta\varepsilon_{ap}}}{1+\frac{P}{K_P}} + \omega\left(\frac{p_{\mathrm{act}}(c)A}{K_d}\right)^{2}\frac{1+\frac{P}{K_P}e^{-2\beta\varepsilon_{ap}}}{1+\frac{P}{K_P}}\right] \\[4pt]
&\equiv \left(1+\frac{P}{K_P}\right) Z_0. \qquad (A2)
\end{aligned}$$

Figure 26:


The auto-activation regulatory circuit. (A) A schematic of the circuit operation. Polymerase binding at the promoter (blue) transcribes the gene (encoded in the green region), producing a protein that can activate its own expression at a sufficient concentration. In our model, an activator can bind at two possible sites to enhance gene transcription. (B) The thermodynamic states, weights, and rates for the auto-activation motif including polymerase binding explicitly. The parameter ω denotes the binding cooperativity between two activators.

Note that our goal at this point is to see if by defining the various “constants” that appear in Eqn. A1 we can show that it is equivalent to Eqn. 8 in which we ignored polymerase altogether. In particular, we need to find effective versions of the parameters Kd, ω, r0, r1 and r2 that have all the polymerase dependence hidden within them. To re-express Z0 as a sum of states with implicit dependence on polymerase, we note that if we define

$$K_d^{\mathrm{eff}} = \frac{1+\frac{P}{K_P}}{1+\frac{P}{K_P}e^{-\beta\varepsilon_{ap}}}\,K_d, \qquad (A3)$$

and

$$\omega^{\mathrm{eff}} = \frac{\left(1+\frac{P}{K_P}\right)\left(1+\frac{P}{K_P}e^{-2\beta\varepsilon_{ap}}\right)}{\left(1+\frac{P}{K_P}e^{-\beta\varepsilon_{ap}}\right)^{2}}\,\omega, \qquad (A4)$$

then the denominator will have the same form as the denominator of Eqn. 8. Next, we see that if we redefine the rate parameters as

$$r_0^{\mathrm{eff}} = \frac{P/K_P}{1+P/K_P}\,r_0, \qquad (A5)$$
$$r_1^{\mathrm{eff}} = \frac{(P/K_P)\,e^{-\beta\varepsilon_{ap}}}{1+(P/K_P)\,e^{-\beta\varepsilon_{ap}}}\,r_1, \qquad (A6)$$
$$r_2^{\mathrm{eff}} = \frac{(P/K_P)\,e^{-2\beta\varepsilon_{ap}}}{1+(P/K_P)\,e^{-2\beta\varepsilon_{ap}}}\,r_2, \qquad (A7)$$

we recover an equation that is equivalent to the dynamical equation described in Eqn. 8. Note that, for convenience, in Eqn. 8 we have everywhere dropped the superscript “eff”, since the notation is too cumbersome to carry throughout the paper. The key point is that the two approaches are formally equivalent.
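The equivalence can also be verified numerically. The Python sketch below compares the production term of Eqn. A1 with the polymerase-free form of Eqn. 8 evaluated with the effective parameters of Eqns. A3–A7; the parameter values are arbitrary illustrative choices.

```python
import numpy as np

# Arbitrary illustrative values; the identity holds for any positive choices.
rho = 5.0                # P / K_P
e_ap = np.exp(3.0)       # e^{-beta * eps_ap}, a favorable activator-RNAP contact
r0, r1, r2 = 0.1, 2.0, 4.0
omega, Kd, pact = 10.0, 50.0, 0.7

def prod_full(A):
    """Production term of Eqn. A1 with polymerase treated explicitly."""
    x = pact * A / Kd
    num = rho * (r0 + 2.0 * r1 * e_ap * x + r2 * e_ap**2 * omega * x**2)
    Z = rho * (1.0 + 2.0 * e_ap * x + e_ap**2 * omega * x**2) \
        + (1.0 + 2.0 * x + omega * x**2)
    return num / Z

# Effective parameters of Eqns. A3-A7
Kd_eff = Kd * (1.0 + rho) / (1.0 + rho * e_ap)
omega_eff = omega * (1.0 + rho) * (1.0 + rho * e_ap**2) / (1.0 + rho * e_ap) ** 2
r0_eff = r0 * rho / (1.0 + rho)
r1_eff = r1 * rho * e_ap / (1.0 + rho * e_ap)
r2_eff = r2 * rho * e_ap**2 / (1.0 + rho * e_ap**2)

def prod_reduced(A):
    """Polymerase-free production term of Eqn. 8 with effective parameters."""
    x = pact * A / Kd_eff
    num = r0_eff + 2.0 * r1_eff * x + r2_eff * omega_eff * x**2
    return num / (1.0 + 2.0 * x + omega_eff * x**2)

A_vals = np.linspace(0.0, 200.0, 50)
print(np.allclose([prod_full(A) for A in A_vals],
                  [prod_reduced(A) for A in A_vals]))  # → True
```

The agreement is exact up to floating-point error, with no assumption of weak promoters.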

However, it is important to always bear in mind that the rate parameters, cooperativity, and dissociation constant used in the reduced representation of the paper are thus effective parameters that implicitly depend on the concentration of polymerase present (P), the strength of polymerase binding to the DNA (KP), and the strength of interaction between activator and RNAP (εap). In a very real sense, the polymerase serves as a hidden variable in this description of the auto-activation switch. This analysis is extremely interesting because it shows that there is a way to rigorously leave explicit treatment of the polymerase out of the problem.

2. Coarse-graining the mutual repression model

In the main body of the paper, just as we did for the auto-activation motif, we treated the mutual repression motif without making explicit reference to RNA polymerase. We now consider the full set of states, weights and rates illustrated in Fig. 27. The states and weights shown here should be contrasted with those shown in Fig. 14 where all reference to RNA polymerase is absent. We now demonstrate the equivalence of these two descriptions of the mutual repression switch following precisely the same kind of strategy we followed above in the context of the auto-activation switch.

Under this expanded thermodynamic model, the governing equations for the dynamics of R1 and R2 prior to non-dimensionalization can be read off directly from Fig. 27 yielding

$$\frac{dR_1}{dt} = -\gamma_1 R_1 + \frac{r\,\frac{P}{K_P}}{1 + \frac{P}{K_P} + 2\,\dfrac{p_{\mathrm{act}}(c_2)R_2}{K_2} + \omega_2\left(\dfrac{p_{\mathrm{act}}(c_2)R_2}{K_2}\right)^{2}} \qquad (A8)$$

and

$$\frac{dR_2}{dt} = -\gamma_2 R_2 + \frac{r\,\frac{P}{K_P}}{1 + \frac{P}{K_P} + 2\,\dfrac{p_{\mathrm{act}}(c_1)R_1}{K_1} + \omega_1\left(\dfrac{p_{\mathrm{act}}(c_1)R_1}{K_1}\right)^{2}}. \qquad (A9)$$

These equations can be algebraically manipulated by factoring out (1+P/KP) from the denominator, resulting in the forms

$$\frac{dR_1}{dt} = -\gamma_1 R_1 + \frac{r\,\frac{P/K_P}{1+P/K_P}}{1 + \dfrac{2\,p_{\mathrm{act}}(c_2)R_2}{K_2\left(1+\frac{P}{K_P}\right)} + \omega_2\left(1+\frac{P}{K_P}\right)\left(\dfrac{p_{\mathrm{act}}(c_2)R_2}{K_2\left(1+\frac{P}{K_P}\right)}\right)^{2}}, \qquad (A10)$$
$$\frac{dR_2}{dt} = -\gamma_2 R_2 + \frac{r\,\frac{P/K_P}{1+P/K_P}}{1 + \dfrac{2\,p_{\mathrm{act}}(c_1)R_1}{K_1\left(1+\frac{P}{K_P}\right)} + \omega_1\left(1+\frac{P}{K_P}\right)\left(\dfrac{p_{\mathrm{act}}(c_1)R_1}{K_1\left(1+\frac{P}{K_P}\right)}\right)^{2}}. \qquad (A11)$$

This formulation reveals that the original equations given in Eqns. 18 and 19 can be recovered through a simple transformation of parameters in which we once again define effective parameters. The effective mRNA production rate is given by

$$r^{\mathrm{eff}} = \frac{P/K_P}{1+P/K_P}\,r, \qquad (A12)$$

the two Kds for transcription factor-DNA binding are given by

$$K_1^{\mathrm{eff}} = \left(1+\frac{P}{K_P}\right)K_1 \qquad (A13)$$

and

$$K_2^{\mathrm{eff}} = \left(1+\frac{P}{K_P}\right)K_2 \qquad (A14)$$

and the two cooperativities are written in effective form as

$$\omega_1^{\mathrm{eff}} = \left(1+\frac{P}{K_P}\right)\omega_1 \qquad (A15)$$

and

$$\omega_2^{\mathrm{eff}} = \left(1+\frac{P}{K_P}\right)\omega_2. \qquad (A16)$$

This demonstrates that polymerase binding can be absorbed into effective parameters, yielding a reduced model equivalent to the one presented in the main text, with renormalized production rate, dissociation constants, and cooperativities. Once again, the polymerase copy number P and binding strength KP are hidden variables in the context of the bare model, but the results are exact; this is not an approximation valid only in the limit of weak promoters, for example. Note also that, as in the case of the auto-activation switch, in the main body of the paper we do not carry around the cumbersome “eff” notation, electing instead to simply use the parameters r, K1, K2, ω1 and ω2 with the convention that those parameters include the hidden variables associated with polymerase.
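A quick numerical check, analogous to the one for auto-activation, confirms this equivalence; the parameter values below are illustrative.

```python
# Illustrative values; the identity holds for any positive choices.
rho = 3.0                      # P / K_P
r, K2, omega2, pact = 5.0, 40.0, 8.0, 0.6

def prod_full(R2):
    """Production term of Eqn. A8 with polymerase treated explicitly."""
    y = pact * R2 / K2
    return r * rho / (1.0 + rho + 2.0 * y + omega2 * y**2)

# Effective parameters of Eqns. A12, A14 and A16
r_eff = r * rho / (1.0 + rho)
K2_eff = (1.0 + rho) * K2
omega2_eff = (1.0 + rho) * omega2

def prod_reduced(R2):
    """Polymerase-free production term with the renormalized parameters."""
    y = pact * R2 / K2_eff
    return r_eff / (1.0 + 2.0 * y + omega2_eff * y**2)

print(all(abs(prod_full(R) - prod_reduced(R)) < 1e-12
          for R in (0.0, 10.0, 80.0)))  # → True
```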

Appendix B: Minimal bounds on cooperativity and rates for the existence of bistability in auto-activation

The auto-activation system can exhibit bistability, meaning that it can reach different steady states depending on the initial condition. However, this behavior arises only within a restricted range of parameter values, shown in Fig. 11 as the red region. To investigate the conditions for which multiple different steady states are possible, we derive analytic bounds in parameter space. Setting dA/dt=0 and re-expressing in standard polynomial form, A must satisfy

$$\omega\,p_{\mathrm{act}}^{2}A^{3} + p_{\mathrm{act}}\left(2-\omega r_2\,p_{\mathrm{act}}\right)A^{2} + \left(1-2r_1\,p_{\mathrm{act}}\right)A - r_0 = 0. \qquad (B1)$$

If the system exhibits bistability, the corresponding polynomial must have three real roots, as a third-order polynomial cannot have exactly two. Physically, these roots correspond to two stable steady states and one unstable steady state. Further, the presence of only one real root indicates that the system is monostable, as discussed in Appendix C.

Figure 27:


The mutual repression regulatory circuit. (A) Schematic of the operation of the circuit. When the gene for repressor 1 is expressed, the resulting protein downregulates the expression of the gene for repressor 2. Repressor 2, in turn, downregulates the expression of the gene for repressor 1. (B) Thermodynamic states, weights, and rates for expression of repressor 1 including the action of the inducer which tunes the number of active repressors. In our model, a repressor can bind non-exclusively at one of two possible sites within the target promoter region to suppress gene transcription. The parameter ω2 denotes the cooperative strength between two bound repressors R2. (C) Thermodynamic states, weights, and rates for expression of repressor 2 including the action of the inducer which tunes the number of active repressors. The states and weights for the regulation of the promoter responsible for the production of repressor 2 are analogous to those shown for repressor 1. However, the dissociation constant of repressor 1 in this case is given by K1, and the cooperativity term for the interaction of two repressor 1 molecules bound to the DNA is ω1.

To identify conditions for bistability, we search for combinations of ω, r0, r1, r2, and effector concentration c that produce three positive real roots of the polynomial—corresponding to the red region in Fig. 11. We may bound this bistable region of parameter space analytically using Descartes’ rule of signs, which states that for a single-variable polynomial with real coefficients, the number of positive roots of the polynomial is equal to the number of sign changes between consecutive non-zero coefficients minus an even number. In our case, the polynomial in Eqn. B1 must then have either one or three sign changes. Therefore, three sign changes are necessary for the system to allow bistability. Evaluating Eqn. B1, we observe that the coefficient of A3 is strictly positive and the constant term is strictly negative. Thus, three (consecutive) coefficient sign changes are only possible if the second term of Eqn. B1 is negative, and the third term of that same equation is positive. Specifically, the condition on the second term implies that

$$p_{\mathrm{act}}(c)\left(2-\omega r_2\,p_{\mathrm{act}}(c)\right) < 0 \;\Longrightarrow\; p_{\mathrm{act}}(c) > \frac{2}{\omega r_2}, \qquad (B2)$$

while the condition on the third term leads to

$$1-2r_1\,p_{\mathrm{act}}(c) > 0 \;\Longrightarrow\; p_{\mathrm{act}}(c) < \frac{1}{2r_1}. \qquad (B3)$$

Thus, these two conditions can be combined to yield

$$\frac{2}{\omega r_2} < p_{\mathrm{act}}(c) < \frac{1}{2r_1}. \qquad (B4)$$
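The window in Eqn. B4 can be probed numerically by counting the positive real roots of Eqn. B1. The sketch below uses illustrative rates; note that the window is a necessary condition, not a sufficient one, but for these rates a value of pact inside it does yield three steady states.

```python
import numpy as np

def positive_real_roots(pact, r0=0.05, r1=3.0, r2=6.0, omega=20.0):
    """Count positive real steady states of Eqn. B1 for a given p_act."""
    coeffs = [omega * pact**2,
              pact * (2.0 - omega * r2 * pact),
              1.0 - 2.0 * r1 * pact,
              -r0]
    return sum(1 for z in np.roots(coeffs)
               if abs(z.imag) < 1e-9 and z.real > 0.0)

# Eqn. B4 window here: 2/(omega*r2) = 1/60 < p_act < 1/6 = 1/(2*r1).
# A value inside the window gives three steady states (bistability plus
# one unstable state); a value outside it gives only one.
print(positive_real_roots(0.08), positive_real_roots(0.5))  # → 3 1
```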

1. Necessary condition for bistability at some effector concentration c

Note that if the above condition were to be true at all possible effector concentrations c, the system would always be bistable. Rather, we are more specifically interested in the conditions that would allow bistability for at least one value of effector concentration c0. In other words, there exists a concentration such that

$$\frac{2}{\omega r_2} < p_{\mathrm{act}}(c_0) \qquad (B5)$$

and

$$\frac{1}{2r_1} > p_{\mathrm{act}}(c_0). \qquad (B6)$$

If the inequality in Eqn. B5 holds true, then we also know that

$$\frac{2}{\omega r_2} < p_{\mathrm{act}}(c_0) \le \max_{c\in[0,\infty)} p_{\mathrm{act}}(c). \qquad (B7)$$

Conversely, the weaker condition in Eqn. B7 itself guarantees the existence of some effector concentration for which Eqn. B5 is true. We can prove this by considering two possible cases. First, if

$$\frac{2}{\omega r_2} < \min_{c\in[0,\infty)} p_{\mathrm{act}}(c) < \max_{c\in[0,\infty)} p_{\mathrm{act}}(c), \qquad (B8)$$

then we know that Eqn. B5 holds true for all concentrations $c_0 \ge 0$. Otherwise, if

$$\min_{c\in[0,\infty)} p_{\mathrm{act}}(c) < \frac{2}{\omega r_2} < \max_{c\in[0,\infty)} p_{\mathrm{act}}(c), \qquad (B9)$$

then Eqn. B5 is true for all non-negative effector concentrations smaller than a threshold concentration

$$c^{*} = p_{\mathrm{act}}^{-1}\!\left(\frac{2}{\omega r_2}\right), \qquad (B10)$$

(derived from the inverse of Eqn. B5) because pact is a continuous and monotonically decreasing function.
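The threshold c* of Eqn. B10 can be computed by bisection, precisely because pact is continuous and monotonically decreasing. The MWC-style pact below stands in for Eqn. 4, with illustrative constants.

```python
# Assumed MWC-style p_act standing in for Eqn. 4; constants illustrative.
K_A, K_I, exp_mbe = 1e-5, 1e-7, 1e-2   # K_c = K_A / K_I = 100 > 1

def p_act(c):
    active = (1.0 + c / K_A) ** 2
    return active / (active + exp_mbe * (1.0 + c / K_I) ** 2)

def invert_p_act(target, lo=0.0, hi=1.0):
    """Bisection inverse of the monotonically decreasing p_act (Eqn. B10)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if p_act(mid) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

omega, r2 = 20.0, 6.0
c_star = invert_p_act(2.0 / (omega * r2))
# Below c*, Eqn. B5 holds; above it, it fails.
print(p_act(0.5 * c_star) > 2.0 / (omega * r2) > p_act(2.0 * c_star))  # → True
```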

Applying similar logic to the inequality in Eqn. B6, we may thus rewrite the necessary conditions for bistability in Eqns. B5 and B6 as

$$\frac{2}{\omega r_2} < \max_{c\in[0,\infty)} p_{\mathrm{act}}(c) = \frac{1}{1+e^{-\beta\varepsilon}}, \qquad (B11)$$
$$\frac{1}{2r_1} > \min_{c\in[0,\infty)} p_{\mathrm{act}}(c) = \frac{1}{1+e^{-\beta\varepsilon}K_c^{2}}, \qquad (B12)$$

and

$$\omega\,\bar{r}_2 > 4\,\bar{r}_1, \qquad (B13)$$

where we have recalled the saturation (maximum) and leakiness (minimum) of pact(c) defined in Eqns. 6 and 7. Note that we are in a setting where effector binding stabilizes the inactive form of the activator such that $K_c = K_A/K_I > 1$. This fixes the values of saturation and leakiness, which would otherwise be switched if $K_c < 1$. After some algebra, we can re-express Eqns. B11 and B12 such that the necessary conditions for bistability are given by

$$\frac{\omega r_2}{2} > 1+e^{-\beta\varepsilon}, \qquad (B14)$$
$$1+e^{-\beta\varepsilon}K_c^{2} > 2r_1, \qquad (B15)$$

and

$$\omega r_2 > 4 r_1. \qquad (B16)$$

Following a similar procedure as in the previous section, in the next section we derive a necessary condition for bistability that depends on the concentration of effector c. For fixed parameter values, this condition defines a bounded range of effector concentrations outside of which the system is guaranteed to be monostable.

2. Necessary condition for bistability for a fixed concentration of effector c

We consider the case where the activation probability pact(c) is a decreasing function of the effector concentration c, as seen in Fig. 5. This monotonicity condition, which requires the derivative of the probability function to be negative for all possible c, depends on the parameters of the model, particularly the ratio of dissociation constants Kc. The derivative of pact(c) with respect to c is given from Eqn. 4 by

$$\frac{dp_{\mathrm{act}}}{dc} = -\frac{2\left(1+c/K_A\right)e^{\beta\varepsilon}\left(K_c-1\right)\left(1+c/K_I\right)}{K_A\left[e^{\beta\varepsilon}\left(1+c/K_A\right)^{2}+\left(1+c/K_I\right)^{2}\right]^{2}}, \qquad (B17)$$

which is negative for all c>0 if and only if Kc>1.

Recalling the previously-derived necessary condition for bistability,

\frac{2}{\omega r_2} < p_{\rm act}(c) < \frac{1}{2 r_1}, \qquad (B18)

we now investigate what constraint this condition imposes on the effector concentration c, assuming the parameters of the system are fixed. First, the inequality

p_{\rm act}(c) < \frac{1}{2 r_1} \qquad (B19)

can be re-expressed equivalently using the explicit expression of p_{\rm act}(c) in Eqn. 4 as

g(c) = \left(\frac{c}{K_A}\right)^2 \frac{1}{1+e^{-\beta\varepsilon}}\left(\frac{1}{2 r_1} - \frac{1}{1+e^{-\beta\varepsilon} K_c^2}\right) + 2\,\frac{c}{K_A}\,\frac{\frac{1}{2 r_1}\left(1+e^{-\beta\varepsilon} K_c\right) - 1}{\left(1+e^{-\beta\varepsilon}\right)\left(1+e^{-\beta\varepsilon} K_c^2\right)} + \left(\frac{1}{2 r_1} - \frac{1}{1+e^{-\beta\varepsilon}}\right)\frac{1}{1+e^{-\beta\varepsilon} K_c^2} > 0. \qquad (B20)

Similarly, the condition

p_{\rm act}(c) > \frac{2}{\omega r_2} \qquad (B21)

is equivalent to requiring that

h(c) = \left(\frac{c}{K_A}\right)^2 \frac{1}{1+e^{-\beta\varepsilon}}\left(\frac{2}{\omega r_2} - \frac{1}{1+e^{-\beta\varepsilon} K_c^2}\right) + 2\,\frac{c}{K_A}\,\frac{\frac{2}{\omega r_2}\left(1+e^{-\beta\varepsilon} K_c\right) - 1}{\left(1+e^{-\beta\varepsilon}\right)\left(1+e^{-\beta\varepsilon} K_c^2\right)} + \left(\frac{2}{\omega r_2} - \frac{1}{1+e^{-\beta\varepsilon}}\right)\frac{1}{1+e^{-\beta\varepsilon} K_c^2} < 0. \qquad (B22)

We can now apply Descartes’ Rule of Signs to the polynomials g(c) and h(c) to determine when the inequalities are satisfied. Since we are working under the assumption that K_c > 1, we have

p_{\rm act}^{\max} = \frac{1}{1+e^{-\beta\varepsilon}} > \frac{1}{1+e^{-\beta\varepsilon} K_c} > \frac{1}{1+e^{-\beta\varepsilon} K_c^2} = p_{\rm act}^{\min}. \qquad (B23)

For the polynomial g(c), three cases then arise. First, if

\frac{1}{2 r_1} > \frac{1}{1+e^{-\beta\varepsilon}}, \qquad (B24)

then all coefficients of g(c) are positive and g(c) > 0 for all c \geq 0, so Eqn. B19 is always satisfied. Second, if

\frac{1}{2 r_1} < \frac{1}{1+e^{-\beta\varepsilon} K_c^2}, \qquad (B25)

then all coefficients are negative, and the condition is never satisfied for any c. Finally, if the intermediate case

\frac{1}{1+e^{-\beta\varepsilon} K_c} > \frac{1}{2 r_1} > \frac{1}{1+e^{-\beta\varepsilon} K_c^2} \qquad (B26)

holds, then the coefficient of the term proportional to c^2 in g(c) is positive, while those of the remaining terms proportional to c^1 and c^0 are negative. This results in exactly one sign change, so by Descartes’ Rule of Signs, the polynomial g(c) has exactly one positive root. This defines the minimal concentration for bistability, denoted c_{\rm bistab}^{\min}(r_1) and given by

c_{\rm bistab}^{\min}(r_1) = K_A \, \frac{2 r_1 - 1 - e^{-\beta\varepsilon} K_c + \left(K_c - 1\right)\sqrt{e^{-\beta\varepsilon}\left(2 r_1 - 1\right)}}{e^{-\beta\varepsilon} K_c^2 - 2 r_1 + 1}. \qquad (B27)

Under these conditions, the inequality g(c) > 0 holds for all c > c_{\rm bistab}^{\min}(r_1).
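Since $p_{\rm act}$ is monotone, the positive root of g(c) is exactly the concentration at which $p_{\rm act}(c)$ crosses $1/(2r_1)$, which gives a direct numerical check of the closed form in Eqn. B27. A minimal sketch with illustrative (assumed) parameter values:

```python
import math

# Illustrative parameters in the regime K_c > 1; note c/K_I = c*Kc/K_A.
beta_eps, Kc, K_A = 4.5, 2.6e2, 2.6e-3
r1 = 1.0

def p_act(c):
    e = math.exp(beta_eps)
    return e * (1 + c / K_A) ** 2 / (e * (1 + c / K_A) ** 2 + (1 + c * Kc / K_A) ** 2)

def c_min_closed(r1):
    """Eqn. B27: the positive root of g(c), where p_act crosses 1/(2 r1)."""
    eb = math.exp(-beta_eps)
    num = 2 * r1 - 1 - eb * Kc + (Kc - 1) * math.sqrt(eb * (2 * r1 - 1))
    den = eb * Kc ** 2 - 2 * r1 + 1
    return K_A * num / den

# Independent check: bisection on p_act(c) = 1/(2 r1), using monotonicity.
lo, hi = 0.0, 1.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if p_act(mid) > 1 / (2 * r1) else (lo, mid)

print(c_min_closed(r1), 0.5 * (lo + hi))   # the two values agree
```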

We now turn to the polynomial h(c). If

\frac{2}{\omega r_2} > \frac{1}{1+e^{-\beta\varepsilon}}, \qquad (B28)

then all coefficients are positive and the polynomial is strictly positive for all c, meaning that the condition in Eqn. B21 is never satisfied. Conversely, if

\frac{2}{\omega r_2} < \frac{1}{1+e^{-\beta\varepsilon} K_c^2}, \qquad (B29)

then all coefficients are negative and the condition is always satisfied. Lastly, in the intermediate case

\frac{1}{1+e^{-\beta\varepsilon} K_c} > \frac{2}{\omega r_2} > \frac{1}{1+e^{-\beta\varepsilon} K_c^2}, \qquad (B30)

Descartes’ Rule of Signs again implies exactly one positive root of h(c), corresponding to the upper bound of the bistable region. This upper concentration threshold is denoted c_{\rm bistab}^{\max}(\omega r_2) and given by

c_{\rm bistab}^{\max}(\omega r_2) = K_A \, \frac{\frac{\omega r_2}{2} - 1 - e^{-\beta\varepsilon} K_c + \left(K_c - 1\right)\sqrt{e^{-\beta\varepsilon}\left(\frac{\omega r_2}{2} - 1\right)}}{e^{-\beta\varepsilon} K_c^2 - \frac{\omega r_2}{2} + 1}. \qquad (B31)

Under these conditions, the inequality h(c) < 0 holds for all c < c_{\rm bistab}^{\max}(\omega r_2).

Summing up the case-by-case analysis, we derive an effector concentration-dependent necessary condition for bistability. The full set of conditions allowing for bistability in different ranges of effector concentrations is given by

\begin{cases} c_{\rm bistab}^{\max} > c > c_{\rm bistab}^{\min} & \text{if } \frac{1}{1+e^{-\beta\varepsilon}K_c^2} < \frac{2}{\omega r_2} < \frac{1}{2 r_1} < \frac{1}{1+e^{-\beta\varepsilon}}, \\ c_{\rm bistab}^{\max} > c & \text{if } \frac{1}{1+e^{-\beta\varepsilon}K_c^2} < \frac{2}{\omega r_2} < \frac{1}{1+e^{-\beta\varepsilon}} < \frac{1}{2 r_1}, \\ c > c_{\rm bistab}^{\min} & \text{if } \frac{2}{\omega r_2} < \frac{1}{1+e^{-\beta\varepsilon}K_c^2} < \frac{1}{2 r_1} < \frac{1}{1+e^{-\beta\varepsilon}}, \\ c \geq 0 & \text{if } \frac{2}{\omega r_2} < \frac{1}{1+e^{-\beta\varepsilon}K_c^2} < \frac{1}{1+e^{-\beta\varepsilon}} < \frac{1}{2 r_1}, \\ \text{no bistability} & \text{if } \frac{2}{\omega r_2} > \frac{1}{1+e^{-\beta\varepsilon}} \text{ or } \frac{1}{1+e^{-\beta\varepsilon}K_c^2} > \frac{1}{2 r_1}. \end{cases} \qquad (B32)

As noted, the parameter r0 does not enter into these Descartes-based bounds and thus does not influence the existence of bistability in this analysis. From these expressions, we recover the necessary conditions for bistability, stated in the previous section as

\frac{2}{\omega r_2} < \frac{1}{1+e^{-\beta\varepsilon}}, \qquad (B33)
\frac{1}{1+e^{-\beta\varepsilon} K_c^2} < \frac{1}{2 r_1}, \qquad (B34)

and

\frac{2}{\omega r_2} < \frac{1}{2 r_1}, \qquad (B35)

and re-expressed in Eqns. 10–12. These conditions can equivalently be rewritten as

\omega r_2 > \max\!\left(2\left(1+e^{-\beta\varepsilon}\right),\, 4 r_1\right) \qquad (B36)

and

r_1 < \min\!\left(\frac{1+e^{-\beta\varepsilon} K_c^2}{2},\, \frac{\omega r_2}{4}\right). \qquad (B37)

From Eqn. B32, we identify necessary conditions under which the system displays bistability for all effector concentrations above a minimal threshold. This corresponds to being in either the third or fourth case of Eqn. B32. These cases are captured by the inequality

\omega r_2 > 2\left(1+e^{-\beta\varepsilon} K_c^2\right), \qquad (B38)

which implies that, for a sufficiently large product ωr2, the system permits bistability across a semi-infinite range of effector concentrations.

To complement the analytical results summarized in Eqn. B32, we compare the derived necessary conditions for bistability with numerically computed bistability regions across different parameters. As shown in Figure 28, the analytically predicted bounds—represented in orange—are in close agreement with the numerically determined region of bistability—shown in red—near the onset of bistability. For larger values of the cooperativity parameter ω or the activation rate r2, as well as for smaller values of the intermediate rate r1, the analytical bounds significantly overestimate the true bistable region. This discrepancy arises because the analytical bounds are necessary but not sufficient conditions, and therefore do not capture the full behavior of the system. Nevertheless, these bounds offer a valuable predictor of the minimal and maximal effector concentrations that can support bistability under a given set of parameters.
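The case analysis of Eqn. B32 is easy to mechanize. The sketch below, with illustrative (assumed) parameter values, reports which effector-concentration range can support bistability; as in the text, this is a necessary condition only:

```python
import math

def bistable_range_case(omega, r1, r2, beta_eps, Kc):
    """Classify the cases of Eqn. B32 (necessary, not sufficient, for bistability)."""
    p_max = 1 / (1 + math.exp(-beta_eps))             # saturation (Eqn. 6)
    p_min = 1 / (1 + math.exp(-beta_eps) * Kc ** 2)   # leakiness (Eqn. 7)
    lo, hi = 2 / (omega * r2), 1 / (2 * r1)           # bounds of Eqn. B18
    if lo >= hi or lo > p_max or p_min > hi:          # Eqns. B33-B35 violated
        return "no bistability"
    if lo > p_min and hi < p_max:
        return "c_min < c < c_max"
    if lo > p_min:
        return "c < c_max"
    if hi < p_max:
        return "c > c_min"
    return "all c >= 0"

print(bistable_range_case(7.5, 1, 20, 4.5, 2.6e2))    # bounded bistable window
```

Increasing $\omega$ far enough moves the system into the semi-infinite regime, consistent with Eqn. B38.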

Appendix C: Fixed point structure of the auto-activation system as a gradient flow

The auto-activation dynamical system defined in Eqn. 9 derives from a gradient. Indeed, we can write this equation as

\frac{dA}{dt} = -\frac{dV}{dA}, \qquad (C1)

with

-\frac{dV}{dA} = \frac{P(A)}{1 + 2 p_{\rm act}(c) A + \omega p_{\rm act}^2(c) A^2}, \qquad (C2)

and

P(A) = r_0 + \left(2 r_1 p_{\rm act}(c) - 1\right) A + \left(\omega r_2 p_{\rm act}^2(c) - 2 p_{\rm act}(c)\right) A^2 - \omega p_{\rm act}^2(c) A^3 = -\omega p_{\rm act}^2(c)\left(A - A_1\right)\left(A - A_2\right)\left(A - A_3\right), \qquad (C3)

where A_1, A_2, A_3 \in \mathbb{R} if there is bistability.

Given Eqn. C2, our goal now is to determine the landscape V(A) itself. To that end, we need to integrate Eqn. C2. We invoke the strategy of separation of variables, resulting in

-dV = dA \, \frac{1 + 2 p_{\rm act}(c) A + \omega p_{\rm act}^2(c) A^2}{\omega p_{\rm act}^2(c)\left(A - A_1\right)\left(A - A_2\right)\left(A - A_3\right)}. \qquad (C4)

To make progress with this integral, we express the righthand side using partial fraction decomposition. This yields

\frac{1 + 2 p_{\rm act}(c) A + \omega p_{\rm act}^2(c) A^2}{\omega p_{\rm act}^2(c) \prod_{i=1}^{3}\left(A - A_i\right)} = \sum_{i=1}^{3} \frac{C_i}{A - A_i}. \qquad (C5)

We find the coefficients C_1, C_2, and C_3 by multiplying through by the common denominator and evaluating at A = A_i for i \in \{1, 2, 3\}. The resulting expressions are

C_1 = \frac{\frac{1}{\omega p_{\rm act}^2(c)} + \frac{2 A_1}{\omega p_{\rm act}(c)} + A_1^2}{\left(A_1 - A_2\right)\left(A_1 - A_3\right)}, \quad C_2 = \frac{\frac{1}{\omega p_{\rm act}^2(c)} + \frac{2 A_2}{\omega p_{\rm act}(c)} + A_2^2}{\left(A_2 - A_1\right)\left(A_2 - A_3\right)}, \quad C_3 = \frac{\frac{1}{\omega p_{\rm act}^2(c)} + \frac{2 A_3}{\omega p_{\rm act}(c)} + A_3^2}{\left(A_3 - A_1\right)\left(A_3 - A_2\right)}. \qquad (C6)

We can then write the potential function V(A) as

V(A) = C_1 \ln\left|A - A_1\right| + C_2 \ln\left|A - A_2\right| + C_3 \ln\left|A - A_3\right|. \qquad (C7)
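The partial-fraction coefficients of Eqn. C6 can be checked numerically against the left-hand side of Eqn. C5. In the sketch below, the roots and parameters are arbitrary illustrative numbers, with p standing for $p_{\rm act}(c)$ and w for $\omega$:

```python
# Arbitrary illustrative values: p stands for p_act(c), w for omega.
p, w = 0.4, 7.5
A_roots = [0.2, 1.1, 3.7]

def coeff(i):
    """C_i of Eqn. C6: the numerator evaluated at A_i over the product of root gaps."""
    Ai = A_roots[i]
    num = 1 / (p ** 2 * w) + 2 * Ai / (p * w) + Ai ** 2
    den = 1.0
    for j, Aj in enumerate(A_roots):
        if j != i:
            den *= Ai - Aj
    return num / den

A = 2.5                                   # any test point that is not a root
lhs = 1 + 2 * p * A + w * p ** 2 * A ** 2
lhs /= w * p ** 2
for Ai in A_roots:
    lhs /= A - Ai
rhs = sum(coeff(i) / (A - A_roots[i]) for i in range(3))
print(lhs, rhs)                           # identical up to rounding
```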

Since the auto-activation system derives from a gradient, we can apply classical results from one-dimensional gradient dynamics: namely, that the number of stable steady states is equal to the number of unstable steady states plus one [68]. Indeed, let us assume that

\left.\frac{dV(A)}{dA}\right|_{A = A_i} = 0 \qquad (C8)

at finitely many points A_i, i \in \{1, \ldots, n\}, and

\left.\frac{d^2 V(A)}{dA^2}\right|_{A = A_i} \neq 0 \qquad (C9)

at those points (i.e., the fixed points are non-degenerate).

Figure 28:

Minimal and maximal values of effector concentration between which the system is bistable. Given baseline parameter values $\omega = 7.5$, $r_0 = 0.1$, $r_1 = 1$, $r_2 = 20$, each panel varies a different parameter, keeping all others fixed. For each panel, the shaded region in red is the region of effector concentration for which there is bistability. The shaded region in orange denotes the analytically-bounded region of bistability. The dotted line in the first panel is an analytical lower bound for the minimal cooperativity required for bistability. Note that the analytic approach discussed here, summed up in Eqn. B32, invokes a necessary but not sufficient condition for bistability, and thus always encompasses a larger region of parameter space than the system’s observed region in red.

We take $A_1 < \cdots < A_n$. With two minima in V, the function must reach a local maximum between them to transition between these minima. We therefore see that the local minima and maxima of V must alternate. A last key point is why the first and last extrema of V must be minima. If the first extremum of V were a maximum—corresponding to an unstable steady state—a small perturbation toward smaller A would drive the system toward the boundary of the domain, where no minimum of V exists and no steady state is defined. This would render the system ill-posed. A similar reasoning shows that the last extremum also has to be a minimum. Applying this to our system, and imagining the dynamics as motion on a one-dimensional energy landscape, two stable steady state “valleys” must be connected by an unstable steady state “hill.” Therefore, bistability implies that our system has three steady states, two stable and one unstable.

Appendix D: Auto-activation: No bistability at high cooperativity and rate $r_2$

As shown in Fig. 11, for sufficiently large values of ω and r2, the system does not exhibit bistability for any effector concentration c. In this section, we support this observation using bi-dimensional numerical parameter sweeps and provide analytical arguments explaining its origin.

In Fig. 29, we report the maximal cooperativity ω above which the system is monostable for all values of effector concentration. The rate parameters are varied two at a time while keeping the third fixed. For each triplet $(r_0, r_1, r_2)$, we sample all effector concentrations by sweeping over values of $p_{\rm act}$ between leakiness and saturation. We then determine the maximal value of ω for which the system is bistable for at least one value of c. These parameter sweeps reveal a finite—but potentially large—upper bound on cooperativity beyond which bistability is lost. The yellow regions in Fig. 29(A,B) indicate that no upper bound was found within our sampled cooperativity range ($10^1$ to $10^9$); this absence does not imply the bound does not exist, but rather reflects the limits of our numerical exploration, which did not extend beyond $10^9$ due to sampling choices and diminishing biophysical relevance. However, since a finite bound exists in other parts of parameter space, we hypothesize that such a bound also exists in these regions. In the next section, we confirm this analytically. Interestingly, the appearance of yellow regions in Fig. 29(A,B)—where no numerical upper bound on cooperativity is observed—correlates with increasing values of $r_2$, consistent with the fact that raising $r_2$ initially promotes bistability. While these bounds appear only at very high cooperativities (typically $\omega > 10^2$), and may exceed biologically plausible values, they nonetheless depend on the system’s rate parameters and could be lower in other settings.

We explain this behavior analytically. As discussed in Appendix B2, bistability can be assessed by examining the number of non-negative roots of the steady-state polynomial Eqn. B1. To be bistable, the system has to admit more than one steady state, which corresponds to the polynomial having three real non-negative roots. A necessary condition for this is that the polynomial has three real roots, regardless of their sign. While this condition does not guarantee bistability—since some of the roots may be negative—it is nonetheless governed directly by the sign of the polynomial’s discriminant. For a general cubic polynomial

Q(x) = a x^3 + b x^2 + c x + d, \qquad (D1)

the discriminant is given by

\Delta = b^2 c^2 - 4 a c^3 - 4 b^3 d - 27 a^2 d^2 + 18 a b c d. \qquad (D2)

If Δ>0, the polynomial has three distinct real roots; if Δ<0, it has only one real root; and if Δ=0, it has at least one repeated root.
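A quick sanity check of this sign rule on two textbook cubics:

```python
def cubic_discriminant(a, b, c, d):
    """Discriminant of a*x^3 + b*x^2 + c*x + d (Eqn. D2)."""
    return b * b * c * c - 4 * a * c ** 3 - 4 * b ** 3 * d \
        - 27 * a * a * d * d + 18 * a * b * c * d

# (x-1)(x-2)(x-3) = x^3 - 6x^2 + 11x - 6: three distinct real roots
assert cubic_discriminant(1, -6, 11, -6) > 0
# x^3 + x + 1: a single real root
assert cubic_discriminant(1, 0, 1, 1) < 0
print("discriminant sign matches the root count")
```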

Figure 29:

Parameter space exploration of the maximal cooperativity $\omega_{\max}^{\rm bistable}$ above which the system becomes monostable for all effector concentrations. The cooperativity ω is sampled over the interval $[10^1, 10^9]$. Effector concentrations are effectively scanned from 0 to ∞ by sweeping $p_{\rm act}$ between its biologically constrained bounds: the leakiness level $p_{\rm act}^{\min} = \frac{1}{1+e^{-\beta\varepsilon}K_c^2}$ and the saturation level $p_{\rm act}^{\max} = \frac{1}{1+e^{-\beta\varepsilon}}$, with fixed parameters $\beta\varepsilon = 4.5$ and $K_c = 2.6\times10^2$. Three two-dimensional parameter sweeps are performed. In panel (A), $(r_1, r_2)$ are varied in $[r_0, 10^5]\times[r_0, 10^5]$ with $r_0 = 0.1$ held constant. In panel (B), $(r_0, r_2)$ are varied in $[10^{-5}, r_1]\times[r_1, 10^5]$ with $r_1 = 1$ fixed. In panel (C), $(r_0, r_1)$ are varied in $[10^{-5}, r_2]\times[10^{-5}, r_2]$ with $r_2 = 20$ fixed. Regions shaded in gray correspond to parameter combinations that violate the auto-activation condition $r_0 \leq r_1 \leq r_2$, and for which the system no longer functions as an auto-activating unit. In regions where no maximal cooperativity values for bistability are reported, the system remains monostable across the entire range of cooperativity values sampled.

For the auto-activation system, letting $p_{\rm act}(c) = p$, the steady-state polynomial of Eqn. B1 is cubic in A, with coefficients (in the notation of Eqn. D1, where c denotes the linear coefficient rather than the effector concentration) $a = -\omega p^2$, $b = p\left(\omega r_2 p - 2\right)$, $c = 2 r_1 p - 1$, and $d = r_0$. The discriminant of the steady-state polynomial therefore follows from Eqn. D2 by direct substitution,

\Delta = b^2 c^2 - 4 a c^3 - 4 b^3 d - 27 a^2 d^2 + 18 a b c d \,\Big|_{\,a = -\omega p^2,\; b = p\left(\omega r_2 p - 2\right),\; c = 2 r_1 p - 1,\; d = r_0}. \qquad (D3)

We now study the asymptotic behavior of this discriminant in the limits of infinite ω, $r_2$, and $r_0$. We do not consider the limit of infinite $r_1$, since, according to the bounds in Eqns. 10–12, a necessary condition for bistability is that $r_1$ remains below a threshold set by the other system parameters. Therefore, in this limit, the system is necessarily monostable.

In each of the asymptotic limits, we derive the leading-order term of the discriminant and find that the discriminant diverges to $-\infty$. For $\omega \to \infty$,

\Delta \underset{\omega \to \infty}{\sim} -4 p^6 r_0 r_2^3 \omega^3, \qquad \lim_{\omega \to \infty} \Delta = -\infty, \qquad (D4)

for $r_2 \to \infty$,

\Delta \underset{r_2 \to \infty}{\sim} -4 p^6 r_0 r_2^3 \omega^3, \qquad \lim_{r_2 \to \infty} \Delta = -\infty, \qquad (D5)

and for $r_0 \to \infty$,

\Delta \underset{r_0 \to \infty}{\sim} -27 \omega^2 p^4 r_0^2, \qquad \lim_{r_0 \to \infty} \Delta = -\infty. \qquad (D6)

These asymptotic results indicate that bistability becomes impossible in the limit of arbitrarily large ω, r2, or r0.

Since a negative discriminant implies that the polynomial admits only one real root, the system is necessarily monostable in these asymptotic regimes. This analytically supports the existence of upper bounds on ω and $r_2$ observed in Fig. 29 and Fig. 11(A,D), consistent with the scaling behaviors shown in Eqns. D4 and D5. Moreover, the symmetric structure of these leading-order terms highlights the seemingly interchangeable roles of ω and $r_2$ in promoting bistability. The loss of bistability for large $r_0$, as seen in Fig. 11(B), is similarly explained by the negative divergence of the discriminant in Eqn. D6.

Appendix E: Conditions for activation in auto-activation circuit

We define the range of parameters on which we focus in the study of auto-activation. In this framework, the production term in Eqn. 9, which we will refer to as y(A), must be monotonically increasing. In other words, we want $dy/dA \geq 0$ for all $A \geq 0$. To simplify the computation, let $x = p_{\rm act}(c) A$. We then have

\frac{dy}{dx} = \frac{1}{p_{\rm act}(c)} \frac{dy}{dA}, \qquad (E1)

and the condition thus becomes $dy/dx \geq 0$ for all $x \geq 0$.

Writing down the expression of y(x),

y(x) = \frac{r_0 + 2 r_1 x + \omega r_2 x^2}{1 + 2 x + \omega x^2}, \qquad (E2)

we compute the derivative

\frac{dy}{dx} = \frac{2\left[\left(r_1 - r_0\right) + \omega x \left(r_2 - r_0\right) + \omega x^2 \left(r_2 - r_1\right)\right]}{\left(1 + x\left(2 + \omega x\right)\right)^2}. \qquad (E3)

For this expression to be non-negative for all $x \geq 0$, it suffices that each coefficient of the numerator be non-negative. This requires $r_2 \geq r_1$ and $r_1 \geq r_0$, which together imply $r_2 \geq r_0$. This is enough to assert that y(x) is an increasing function of x, as its derivative is then non-negative for all $x \geq 0$.
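This monotonicity is easy to confirm numerically; the rates below are illustrative and obey the auto-activation ordering $r_0 \leq r_1 \leq r_2$:

```python
# Illustrative rates obeying the auto-activation ordering r0 <= r1 <= r2.
r0, r1, r2, w = 0.1, 1.0, 20.0, 7.5

def y(x):
    """Production term of Eqn. E2."""
    return (r0 + 2 * r1 * x + w * r2 * x * x) / (1 + 2 * x + w * x * x)

xs = [i * 0.01 for i in range(2001)]                 # grid over x in [0, 20]
vals = [y(x) for x in xs]
assert all(b >= a for a, b in zip(vals, vals[1:]))   # non-decreasing everywhere
print(y(0.0), y(20.0))                               # runs from r0 toward r2
```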

Appendix F: Auto-activation: Supplement for the comparison between Hill and thermodynamic model

In this section, we provide support for some of the claims made in the discussion comparing thermodynamic models and the Hill function approach in the context of auto-activation in Sec. III A 2.

First, we address the claim that the Hill function formulation produces similar probabilities for the transcription factor binding states. In Fig. 30, we illustrate such probabilities as a function of active activator concentration $A_{\rm act} = p_{\rm act}(c) A$. For example, the probability of no activator bound in the thermodynamic model is given by

\frac{1}{1 + 2 A_{\rm act} + \omega A_{\rm act}^2}. \qquad (F1)

Here, we treat the Hill function as approximating away the state where one activator is bound, allowing only zero or two activators to be bound. We observe that the probability of no activator bound (blue curve) and the probability of two activators bound (green curve) have similar shapes in both models. The two models differ slightly in the region where the green and blue curves intersect, which signifies the transition region of the genetic switch. In the thermodynamic model, the green and blue curves typically intersect at a lower probability due to the existence of the state where one activator is bound, and the $A_{\rm act}$ at which they intersect can be tuned by ω. Nevertheless, it is difficult to intuit from this picture the drastic differences in bifurcation curves between the two models described in the main text, as the qualitative features of the probabilities of states in the two models resemble each other.
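The three binding-state probabilities of the thermodynamic model sum to one by construction, as a short sketch (with an illustrative ω) makes explicit:

```python
# Illustrative cooperativity; A_act is the active-activator concentration.
w = 100.0

def state_probs(A_act):
    """(no activator, one activator, two activators) bound; cf. Eqn. F1."""
    Z = 1 + 2 * A_act + w * A_act * A_act    # normalization over the three states
    return 1 / Z, 2 * A_act / Z, w * A_act * A_act / Z

p0, p1, p2 = state_probs(0.3)
print(p0, p1, p2)                            # probabilities sum to 1
```

Setting the middle weight to zero recovers the Hill-function approximation with only the empty and doubly-bound states.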

Second, in the main text (Fig. 12(B)), we illustrated the behavior of the Hill function formulation using a representative example with Hill coefficient n=2. To support the generality of the observation that the critical effector concentration c—captured by the half-maximal effective concentration EC50—remains approximately fixed across Hill coefficients, we include in Fig. 31 additional bifurcation curves for n=2,5,10. These curves demonstrate that EC50 varies little with n for Hill curves. By contrast, in the thermodynamic model, the location of EC50 is sensitive to parameters such as r1 and ω, as shown in Fig. 12(B).

Appendix G: Auto-activation: Bistability is possible for non-cooperative systems ($\omega = 1$)

1. Definition of the effective Hill coefficient

We first recall how to compute the Hill coefficient of an activating Hill function with constitutive expression, denoted g(x), defined by

g(x) = \frac{r_0 + r_2 x^n}{1 + x^n}. \qquad (G1)

Its log-derivative is

\frac{d \ln g}{d \ln x} = \frac{n x^n \left(r_2 - r_0\right)}{\left(1 + x^n\right)\left(r_0 + r_2 x^n\right)}. \qquad (G2)

Let us define x^* such that

g\left(x^*\right) = \frac{r_2 + r_0}{2}. \qquad (G3)

Figure 30:

Comparing probabilities of activator binding states as a function of active activator concentration Aact in the thermodynamic and Hill model. (A) Thermodynamic model with cooperativity ω=1. (B) Thermodynamic model with ω=100. (C) Hill function model with Hill coefficient n=2. Blue curve corresponds to the state with no activator bound. Orange curve corresponds to the state with one activator bound. Green curve corresponds to the state with two activators bound. Notably, there is no orange curve in (C) as the Hill function approximates away the state with one activator bound.

Figure 31:

Hill functions predict similar EC50 values in their bifurcation curves for all Hill coefficients n. Parameters used here are the same as in Fig. 12(B), with $r_0 = 0.1$ and $r_2 = 10$. Bifurcation curves with different Hill coefficients are colored differently; blue curve: n = 2; orange curve: n = 5; green curve: n = 10. It can be observed that the critical effector concentration c at which the activator concentration transitions from the high to the low steady state is similar for all three bifurcation curves.

This holds for x*=1. Evaluating the derivative at this value then gives

\left.\frac{d \ln g}{d \ln x}\right|_{x = x^*} = \frac{n}{2} \, \frac{r_2 - r_0}{r_2 + r_0}, \qquad (G4)

and solving for n, we obtain

n = 2 \, \frac{r_2 + r_0}{r_2 - r_0} \left.\frac{d \ln g}{d \ln x}\right|_{x = x^*}. \qquad (G5)

We now define a similar expression for a thermodynamic model w(x) given by

w(x) = \frac{r_0 + 2 r_1 p_{\rm act} x + \omega r_2 p_{\rm act}^2 x^2}{1 + 2 p_{\rm act} x + \omega p_{\rm act}^2 x^2}. \qquad (G6)

We observe that w(x) can be written as $\hat{w}\left(p_{\rm act} x\right)$ with

\hat{w}(\hat{x}) = \frac{r_0 + 2 r_1 \hat{x} + \omega r_2 \hat{x}^2}{1 + 2 h \hat{x} + \omega \hat{x}^2}, \qquad (G7)

where h = 1 in our case. Since $d \ln w / d \ln x = d \ln \hat{w} / d \ln \hat{x}$ under the substitution $\hat{x} = p_{\rm act} x$, the effective Hill coefficient does not depend on $p_{\rm act}$. The derivative of $\hat{w}$ is

\frac{d \ln \hat{w}}{d \ln \hat{x}} = \frac{2\left(1 + h \hat{x}\right)}{1 + \hat{x}\left(2 h + \omega \hat{x}\right)} - \frac{2\left(r_0 + r_1 \hat{x}\right)}{r_0 + \hat{x}\left(2 r_1 + \omega r_2 \hat{x}\right)}. \qquad (G8)

Letting $\hat{x}^*$ be defined by $\hat{w}\left(\hat{x}^*\right) = \frac{r_2 + r_0}{2}$ yields

\hat{x}^* = \frac{2 r_1 - h\left(r_0 + r_2\right)}{\left(r_0 - r_2\right)\omega} + S, \qquad (G9)

where

S = \sqrt{\frac{h^2 r_0^2 - 4 h r_0 r_1 + 4 r_1^2 + 2 h^2 r_0 r_2 - 4 h r_1 r_2 + h^2 r_2^2 + \left(r_0 - r_2\right)^2 \omega}{\left(r_0 - r_2\right)^2 \omega^2}}. \qquad (G10)

This then gives the derivative at xˆ* as

\left.\frac{d \ln \hat{w}}{d \ln \hat{x}}\right|_{\hat{x} = \hat{x}^*} = \frac{r_2 - r_0}{r_2 + r_0} \cdot \frac{\left(h\left(r_0 + r_2\right) - 2 r_1\right) u + t}{2\left(h r_2 - r_1\right) u + t}, \qquad (G11)

with

\alpha = \frac{\left(h\left(r_0 + r_2\right) - 2 r_1\right)^2}{\left(r_0 - r_2\right)^2} + 4 \omega, \quad u = h\left(r_0 + r_2\right) - 2 r_1 + \left(r_2 - r_0\right)\sqrt{\alpha}, \quad t = 4\left(r_0 - r_2\right)^2 \omega. \qquad (G12)

We defined wˆ with an extra parameter h to allow a mapping between the thermodynamic model and the Hill function. For the Hill case, we set h=0, r1=0, and ω=1. In our thermodynamic model, h=1 and the expression of Eqn. G11 becomes

\left.\frac{d \ln \hat{w}}{d \ln \hat{x}}\right|_{\hat{x} = \hat{x}^*} = \frac{r_2 - r_0}{r_2 + r_0} \cdot \frac{\left(r_0 + r_2 - 2 r_1\right) u + t}{2\left(r_2 - r_1\right) u + t}, \qquad (G13)

with

\alpha = \frac{\left(r_0 + r_2 - 2 r_1\right)^2}{\left(r_0 - r_2\right)^2} + 4 \omega, \quad u = r_0 + r_2 - 2 r_1 + \left(r_2 - r_0\right)\sqrt{\alpha}, \quad t = 4\left(r_0 - r_2\right)^2 \omega. \qquad (G14)

To ensure consistency with the Hill model case, we therefore define the effective Hill coefficient as

n_{\rm eff} = 2 \, \frac{r_2 + r_0}{r_2 - r_0} \left.\frac{d \ln w}{d \ln x}\right|_{x = x^*} \qquad (G15)

for

w(x) = \frac{r_0 + 2 r_1 p_{\rm act} x + \omega r_2 p_{\rm act}^2 x^2}{1 + 2 p_{\rm act} x + \omega p_{\rm act}^2 x^2} \qquad (G16)

and x^* defined such that

w\left(x^*\right) = \frac{r_2 + r_0}{2}. \qquad (G17)

We note that for the auto-activation system, in which $r_0 \leq r_1 \leq r_2$, having an effective Hill coefficient larger than unity ($n_{\rm eff} > 1$) is equivalent to

\omega > \frac{\left(r_1 - r_0\right)\left(r_2 - r_1\right)}{\left(r_2 - r_0\right)^2}. \qquad (G18)

Therefore, depending on the parameters of the system, the effective Hill coefficient can be larger or smaller than one, despite the activator having two binding sites.

2. Numerical sweep for minimal cooperativity above which there is bistability.

To better understand how the parameters of the auto-activation system constrain the emergence of bistability, we explore the minimal cooperativity $\omega_{\min}^{\rm bistable}$ required to observe bistability across a broad range of rate parameters. Specifically, we perform numerical parameter sweeps over $(r_0, r_1, r_2)$, systematically enforcing the auto-activation condition $r_0 \leq r_1 \leq r_2$. Regions where this condition is violated are shaded in gray. For each valid triplet, we determine the minimal value of ω for which bistability occurs over a finite range of effector concentrations.

Fig. 32(A) illustrates the numerical method used to identify this minimum cooperativity. The resulting values are displayed in Fig. 32(B), where we observe that the required cooperativity varies significantly across parameter space. Notably, bistability can be achieved even when $\omega \leq 1$, particularly where $r_2$ is sufficiently large relative to $r_1$ and $r_0$. This includes cases where $\omega < 1$, which corresponds to anti-cooperative behavior (i.e., where the binding of the first activator decreases the likelihood of a second one binding), and cases where $\omega = 1$, which corresponds to no cooperativity.

While cooperativity in the strict thermodynamic sense may not be required, the system still exhibits an effective nonlinearity sufficient to support bistability. To assess this, we compute the effective Hill coefficient of the production term, derived with the thermodynamic model, shown in Fig. 32(C). When the system is bistable, the effective Hill coefficient always exceeds 1, consistent with theoretical expectations [78]. Furthermore, in Fig. 32(D), we evaluate an effective cooperativity based on the inequality $\omega r_2 / 2 > 1$, which serves as a necessary (though not sufficient) condition for bistability. The consistency of this bound with the numerically determined $\omega_{\min}^{\rm bistable}$ highlights its predictive value.

Together, these analyses reveal that bistability is not strictly dependent on cooperative binding in the classical sense, but rather emerges from the combined effects of system parameters—particularly the balance between production rates. This underscores the importance of kinetic tuning in biological systems and the potential for bistable behavior even in regimes of weak or anti-cooperative interactions.

Appendix H: Relaxation timescale to equilibrium for the auto-activation system

We examine the relaxation timescales to steady state in the auto-activation system as a function of the initial concentration of activator A, denoted A0. To define the timescale, we employ two different methods. The first method, referred to as the threshold approach, involves measuring the time it takes for the system to evolve from the initial condition to a fixed fraction of its steady state. We track the time-dependent trajectory A(t) and define the relaxation timescale as the time t* such that

\frac{A\left(t^*\right) - A_i}{A_f - A_i} = \epsilon, \qquad (H1)

where $\epsilon$ is the chosen threshold. In practice, since the system is simulated numerically over N discrete time points, the relaxation time is computed as the earliest sampled time $t_j$ for which the normalized deviation exceeds $\epsilon$,

t^* = \min_{j \in [1, N]} \left\{ t_j \,:\, \frac{\left|A\left(t_j\right) - A_i\right|}{\left|A_f - A_i\right|} > \epsilon \right\}. \qquad (H2)
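A minimal sketch of this threshold timescale, using a forward-Euler integration of the auto-activation dynamics. The parameters are illustrative, and $p_{\rm act}$ is held at a fixed assumed value rather than computed from an effector concentration; the paper's own simulations may use a different integrator:

```python
# Illustrative parameters; p stands in for a fixed p_act(c).
r0, r1, r2, w, p = 0.1, 1.0, 20.0, 7.5, 0.5

def dAdt(A):
    x = p * A
    return (r0 + 2 * r1 * x + w * r2 * x * x) / (1 + 2 * x + w * x * x) - A

def threshold_time(A0, eps=0.63, dt=1e-3, T=50.0):
    """Earliest sampled time at which A(t) has covered a fraction eps of the
    total change from A0 to its long-time value (cf. Eqn. H2)."""
    A, t, traj = A0, 0.0, []
    while t < T:
        traj.append((t, A))
        A += dt * dAdt(A)                 # forward-Euler step
        t += dt
    A_f = traj[-1][1]                     # long-time value stands in for A_f
    for t_j, A_j in traj:
        if abs(A_j - A0) / abs(A_f - A0) > eps:
            return t_j
    return None

print(threshold_time(A0=5.0))             # a finite crossing time
```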

Figure 32:

Parameter space exploration tracking the minimal cooperativity required for bistability over a range of effector concentrations. The cooperativity ω is sampled over the interval $[1, 10^5]$. The condition $r_0 \leq r_1 \leq r_2$ is imposed to ensure auto-activation behavior; regions where this is not satisfied are shaded in gray. (A) Illustration of the method used to determine the minimal cooperativity required for bistability. (B) Minimal cooperativity values identified using the approach in (A). (C) Corresponding effective Hill coefficient. (D) Effective cooperativity estimated from a necessary condition for bistability, $\omega r_2 / 2 > 1$. In panels (B–D), subpanels (i)–(iii) show different slices of parameter space: (i) $(r_0, r_1) \in [10^{-5}, r_2]^2$ with $r_2 = 20$ fixed; (ii) $(r_1, r_2) \in [r_0, 10^5]^2$ with $r_0 = 0.1$ fixed; (iii) $(r_0, r_2) \in [10^{-5}, r_1] \times [r_1, 10^5]$ with $r_1 = 1$ fixed. In regions where no minimal cooperativity values for bistability are reported, the system remains monostable across the entire range of cooperativity values sampled.

Figure 33:

Relaxation timescales as a function of the initial concentration $A_0$ of activator A. The parameters of the system are fixed at the following values: $\omega = 7.5$, $r_0 = 0.1$, $r_1 = 1$, $r_2 = 20$, and $c = 2\times10^{-5}\,\mathrm{M}$. Timescales are computed using two approaches: a threshold-based method (orange curves) and exponential curve fitting (blue curve). Each orange curve corresponds to a different threshold value $\epsilon$, indicated as percentages in the legend. The threshold timescale $t^*$ is the time required for A(t) to traverse a fraction $\epsilon$ of the total change from initial condition $A_0$ to steady state. The blue curve represents the relaxation time obtained from exponential fits to A(t) trajectories. Vertical lines mark positions of stable (solid lines) and unstable (dashed line) fixed points. The horizontal black line indicates the unity timescale as a reference for comparison.

We report the relaxation timescale obtained using various values of ϵ in Fig. 33. All resulting curves exhibit similar behavior, indicating that the precise value of the threshold does not significantly affect the overall system dynamics.

We compare these threshold-based relaxation timescales with those obtained by fitting an exponential function to the trajectory A(t). If A(t) followed a purely exponential decay—as it does near stable fixed points—then the timescale from the exponential fit would match the threshold-based timescale for $\epsilon = 1 - 1/e \approx 0.63$. As shown in Fig. 33, the curve corresponding to the exponential fit closely matches the threshold-based curve with $\epsilon = 0.63$, even when the initial condition is far from the stable fixed points and the system is not strictly exponential, as illustrated in Fig. 13(A). Nevertheless, near the stable fixed points, we observe that the threshold method reports a longer timescale. This is because the initial condition is very close to that of the stable fixed point, and the system takes longer to cross the relative threshold. Therefore, we rely on the timescale obtained from the exponential fitting, as shown in Fig. 13(B).

Appendix I: Bistability regimes in the mutual repression circuit

1. A necessary condition for bistability

In the mutual repression system at steady state, we set the time derivatives to zero, leading to the equations

R_1 = \frac{r}{1 + 2 p_2 R_2 + \omega_2 p_2^2 R_2^2} \qquad (I1)

and

R_2 = \frac{r}{1 + 2 p_1 R_1 / K + \omega_1 \left(p_1 R_1 / K\right)^2}, \qquad (I2)

where for convenience we define $p_1 = p_{\rm act}\left(c_1\right)$ and $p_2 = p_{\rm act}\left(c_2\right)$. Substituting the expression for $R_1$ of Eqn. I1 into Eqn. I2, the latter becomes

R_2 = \frac{r}{1 + \frac{2 p_1}{K} \frac{r}{1 + 2 p_2 R_2 + \omega_2 p_2^2 R_2^2} + \frac{\omega_1 p_1^2}{K^2} \left(\frac{r}{1 + 2 p_2 R_2 + \omega_2 p_2^2 R_2^2}\right)^2}. \qquad (I3)

We can rewrite Eqn. I3 in standard polynomial form, $M\left(R_2\right) = 0$, where

M\left(R_2\right) = p_2^4 \omega_2^2 R_2^5 + p_2^3 \omega_2 \left(4 - r \omega_2 p_2\right) R_2^4 + 2 p_2^2 \left[2 + \omega_2\left(1 + r\left(p_1/K - 2 p_2\right)\right)\right] R_2^3 + 4 p_2 \left[1 + r\left(p_1/K - p_2 - \omega_2 p_2 / 2\right)\right] R_2^2 + \left[1 + 2 p_1 r / K + \omega_1 p_1^2 r^2 / K^2 - 4 p_2 r\right] R_2 - r. \qquad (I4)

To assess whether the system is monostable, we examine the number of non-negative roots of the polynomial $M\left(R_2\right)$. If the second derivative of $M\left(R_2\right)$ does not change sign, then the polynomial can have at most two real roots. In particular, if the polynomial is convex for all non-negative values of $R_2$, i.e., $M''\left(R_2\right) > 0$, then the system cannot be bistable. The second derivative of $M\left(R_2\right)$ is given by

M''\left(R_2\right) = 20 p_2^4 \omega_2^2 R_2^3 + 12 p_2^3 \omega_2 \left(4 - r \omega_2 p_2\right) R_2^2 + 12 p_2^2 \left[2 + \omega_2\left(1 + r\left(p_1/K - 2 p_2\right)\right)\right] R_2 + 8 p_2 \left[1 + r\left(p_1/K - p_2 - \omega_2 p_2 / 2\right)\right]. \qquad (I5)

To guarantee that $M''\left(R_2\right) > 0$ for all non-negative values of $R_2$, we require that all coefficients in the polynomial expression of $M''\left(R_2\right)$ remain strictly positive. This condition translates into three distinct inequalities. First, positivity of the quadratic term requires that

4 - r \omega_2 p_2 > 0. \qquad (I6)

Next, positivity of the linear term imposes the constraint

2 + \omega_2\left(1 + r\left(p_1/K - 2 p_2\right)\right) > 0. \qquad (I7)

Finally, the positivity of the constant term yields

1 + r\left(p_1/K - p_2 - \omega_2 p_2 / 2\right) > 0. \qquad (I8)

These conditions can be equivalently rewritten in terms of upper bounds on $p_2$ and combinations of $p_1$ and $p_2$, yielding

p_2 < \frac{4}{r \omega_2}, \qquad (I9)
2 p_2 - \frac{p_1}{K} < \frac{1}{r}\left(\frac{2}{\omega_2} + 1\right), \qquad (I10)
\left(1 + \frac{\omega_2}{2}\right) p_2 - \frac{p_1}{K} < \frac{1}{r}. \qquad (I11)
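These three inequalities can be bundled into a quick sufficiency check for monostability. The inputs below are illustrative values, not taken from any figure:

```python
def monostable_sufficient(p1, p2, r, w2, K):
    """True when all coefficient conditions of Eqns. I9-I11 hold, which makes
    M''(R2) > 0 and hence the system monostable (sufficient, not necessary)."""
    c_quad = p2 < 4 / (r * w2)                         # Eqn. I9
    c_lin = 2 * p2 - p1 / K < (1 / r) * (2 / w2 + 1)   # Eqn. I10
    c_const = (1 + w2 / 2) * p2 - p1 / K < 1 / r       # Eqn. I11
    return c_quad and c_lin and c_const

print(monostable_sufficient(p1=0.5, p2=0.5, r=0.5, w2=2.0, K=1.0))   # True
```

Because the check is only sufficient, a `False` result does not by itself imply bistability.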

To ensure that the system remains monostable for all values of effector concentrations $c_1$ and $c_2$, we require that these inequalities hold in the worst case, with $p_2$ at its maximum $p_{\max}$ and $p_1$ at its minimum $p_{\min}$. Thus, we obtain the sufficient conditions

p_{\max} < \frac{4}{r \omega_2}, \qquad (I12)
2 p_{\max} - \frac{p_{\min}}{K} < \frac{1}{r}\left(\frac{2}{\omega_2} + 1\right), \qquad (I13)
\left(1 + \frac{\omega_2}{2}\right) p_{\max} - \frac{p_{\min}}{K} < \frac{1}{r}. \qquad (I14)

Finally, in the special case where K = 1, these sufficient conditions for monostability at all effector concentrations simplify, using Mathematica, to

r < \frac{1}{p_{\max} - p_{\min} + \omega_2 p_{\max}/2}. \qquad (I15)

Taking the contrapositive, we obtain a necessary condition for the system to exhibit bistability at some value of the effector concentration:

r > \frac{1}{p_{\max} - p_{\min} + \omega_2 p_{\max}/2}. \qquad (I16)

The bound stated in Eqn. I16 depends on both $\omega_2$ and r, which, as in the auto-activation system, act together to determine whether bistability can be accessed.

For K=1 the two cooperativities play a symmetric role. Therefore necessary conditions for bistability are

r > \frac{1}{p_{\max} - p_{\min} + \omega_2 p_{\max}/2} \qquad (I17)

and

r > \frac{1}{p_{\max} - p_{\min} + \omega_1 p_{\max}/2}. \qquad (I18)

From Eqns. I17 and I18, a necessary condition for bistability for K = 1 is that

r > \max\!\left(\frac{1}{p_{\max} - p_{\min} + \omega_2 p_{\max}/2},\, \frac{1}{p_{\max} - p_{\min} + \omega_1 p_{\max}/2}\right), \qquad (I19)

which simplifies to

r > \frac{1}{p_{\max} - p_{\min} + \min\left(\omega_1, \omega_2\right) p_{\max}/2}. \qquad (I20)
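The simplification holds because the bound is a decreasing function of the cooperativity, so the larger of the two bounds is the one built from the smaller ω. A one-line numerical check with illustrative saturation, leakiness, and cooperativity values:

```python
# Illustrative saturation/leakiness values and cooperativities.
p_max, p_min = 0.989, 0.0013
w1, w2 = 7.5, 2.0

def r_bound(w):
    """Right-hand side of Eqns. I17-I18 as a function of the cooperativity."""
    return 1 / (p_max - p_min + w * p_max / 2)

assert max(r_bound(w1), r_bound(w2)) == r_bound(min(w1, w2))
print(r_bound(min(w1, w2)))
```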

2. Effective Hill coefficient of the production terms

In the case of auto-activation, analyzing the effective Hill coefficient provided insight into how bistability can arise even in non-cooperative systems—ω>1 is not a necessary condition for bistability, but from our numerical sweeps displayed in Fig. 32, neff>1 is. Motivated by this, we now examine whether a similar criterion might help explain the restriction of bistability to specific zones of parameter space in mutual repression circuits.

We defined (Eqns. G15 and G17) and derived an analytical formula (Eqns. G13 and G14) for the effective Hill coefficient of a general production term

w(x) = \frac{r_0 + 2 r_1 p_{\rm act} x + \omega r_2 p_{\rm act}^2 x^2}{1 + 2 p_{\rm act} x + \omega p_{\rm act}^2 x^2}, \qquad (I21)

in Appendix G1. In the mutual repression system, the production terms of interest are the production of R1 driven by promoter 1 and regulated by R2

f_1\left(R_2\right) = \frac{r}{1 + 2 p_{\rm act}\left(c_2\right) R_2 + \omega_2 p_{\rm act}^2\left(c_2\right) R_2^2} \qquad (I22)

and the production of R2 driven by promoter 2 and regulated by R1

f_2\left(R_1\right) = \frac{r}{1 + 2\left(p_{\rm act}\left(c_1\right) R_1 / K\right) + \omega_1 \left(p_{\rm act}\left(c_1\right) R_1 / K\right)^2}. \qquad (I23)

With $r_0 = r$, $r_1 = 0$, and $r_2 = 0$, and with $p_{\rm act} \to p_{\rm act}\left(c_2\right)$ for $f_1\left(R_2\right)$ and $p_{\rm act} \to p_{\rm act}\left(c_1\right)/K$ for $f_2\left(R_1\right)$, we see that those two production terms fall into the more general form w(x) of Eqn. I21. We can therefore apply the reasoning and algebra derived in Appendix G1. The corresponding effective Hill coefficients are given by

n_1 = 2 - \frac{\sqrt{1 + 4 \omega_2} - 1}{2 \omega_2}, \qquad (I24)

for the production term $f_1\left(R_2\right)$ and

n_2 = 2 - \frac{\sqrt{1 + 4 \omega_1} - 1}{2 \omega_1}, \qquad (I25)

for the production term $f_2\left(R_1\right)$. We see that the two expressions for the effective Hill coefficients have the same functional form, so establishing that $n_1 > 1$ for all $\omega_2 > 0$ immediately implies that $n_2 > 1$ for all $\omega_1 > 0$. This amounts to showing

2 \omega_2 > \sqrt{1 + 4 \omega_2} - 1. \qquad (I26)

Adding 1 to each side and squaring both sides gives

\left(2 \omega_2 + 1\right)^2 > 1 + 4 \omega_2. \qquad (I27)

Expanding the square and canceling identical terms leaves

4 \omega_2^2 > 0, \qquad (I28)

which is true for every ω2>0.
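Numerically, the effective Hill coefficient of Eqn. I24 indeed stays strictly between 1 and 2 for every positive cooperativity:

```python
import math

def n_eff(w2):
    """Effective Hill coefficient of Eqn. I24 as a function of cooperativity."""
    return 2 - (math.sqrt(1 + 4 * w2) - 1) / (2 * w2)

for w2 in (1e-3, 0.5, 1.0, 7.5, 100.0):
    assert 1 < n_eff(w2) < 2
print(n_eff(1.0))        # 2 - (sqrt(5) - 1)/2, about 1.38
```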

We have therefore shown that the effective Hill coefficients of the production terms are always greater than one in this system. As a result, while this observation is consistent with conventional expectations [78], it provides little discriminatory power for identifying regions of bistability, since the condition is satisfied across all of parameter space.

3. Effect of cooperativity on the bistability region

Fig. 34(A)(i) shows how the geometry of the bistable region evolves as the cooperativity $\omega_2$ of repressor $R_2$ is varied, while $\omega_1 = 7.5$, K = 1, and r = 2 are held fixed. This corresponds to a symmetric case where both repressors bind their respective promoters with equal affinity. At low $\omega_2$, the system is monostable for all inducer concentrations, which is consistent with the known requirement for a minimal degree of nonlinearity to enable bistability. As $\omega_2$ increases beyond a threshold, bistability emerges, but initially in a constrained region where the tunability is mostly limited by $c_1$. When $\omega_2$ lies approximately between 2 and 5, a nonzero concentration of $c_1$ is required to inactivate a portion of the repressors $R_1$, thereby reducing their ability to bind DNA efficiently. In this regime, we are still in a setting where $\omega_1 > \omega_2$, meaning that $R_1$ binds more strongly to the DNA than $R_2$—as they have equal binding constants, the difference in binding arises solely from the cooperativity parameters. To support two distinct expression states—one with high $R_1$ and low $R_2$, and another with the reverse—the binding strength of $R_1$ must be reduced. This enables a more balanced competition between the two repressors, making it possible for both stable states to coexist. When $\omega_2$ becomes large (e.g., $\omega_2 \gtrsim 20$), the situation reverses: the bistable region in the $(c_1, c_2)$ phase space becomes constrained along the $c_2$ axis, as higher concentrations of inducer are required to counteract the strong DNA binding of $R_2$.

For intermediate values of cooperativity, approximately between 5 and 20, the (c1, c2) phase space is less constrained. In this regime, the concentrations c1 and c2 need to be small enough to maintain repression by R1 and R2. If either inducer concentration becomes too high, the system is no longer repressed and has only a unique steady state with high concentrations of both repressors.

While increasing ω2 initially expands the bistable region and enhances its robustness, we find that beyond a certain threshold, further increases in cooperativity begin to shrink the bistable domain. This reflects a general principle also observed for the parameter K: pushing the system too far in one direction strongly constrains bistability. In the case of ω2, overly strong cooperativity amplifies the binding of R2 and therefore the repression of R1, so the system commits to one state, thereby reducing the range of inducer concentrations for which multiple steady states coexist.

Fig. 34(A)(ii) explores the impact of tuning ω2 in an asymmetric setting, where K = 0.7, ω1 = 7.5, and r = 2 are fixed. In this case, repressor R1 binds more tightly to its promoter than R2 does, breaking the symmetry observed in Fig. 34(A)(i). At low values of ω2, the system is monostable, consistent with insufficient nonlinearity to support multiple steady states. As ω2 increases, bistability appears, but the geometry of the bistable region is notably skewed. Compared to Fig. 34(A)(i), it is interesting to note that the bistability region at higher cooperativity is larger than in the symmetric case for the range of inducer concentrations considered. Finally, in Fig. 34(B), we tune the cooperativity ω1 instead of ω2 as was done in the previous panel; the roles of c1 and c2 are therefore mirrored.

Appendix J: Separatrix for mutual repression

We recall the differential equations governing the mutual repression system,

dR1/dt = -R1 + r / (1 + 2 pact(c2) R2 + ω2 (pact(c2) R2)^2) = F(R1, R2), (J1)
dR2/dt = -R2 + r / (1 + 2 pact(c1) R1/K + ω1 (pact(c1) R1/K)^2) = G(R1, R2). (J2)

The separatrix is defined as the curve R2(R1) that satisfies the differential equation

dR2/dR1 = G(R1, R2) / F(R1, R2), (J3)

which tracks the trajectory along which the system transitions between the basins of attraction of the two stable steady states.
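As a minimal numerical sketch of Eqns. J1–J2, the code below integrates the mutual-repression dynamics from two initial conditions that lie on opposite sides of the separatrix and verifies that they flow to the two distinct stable fixed points. The active-repressor factors pact(c1) and pact(c2) are folded into the constants a1, a2, and all parameter values here are hypothetical illustrative choices, not values from the paper's figures.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Dimensionless mutual-repression dynamics of Eqns. J1-J2.
# a1, a2 stand in for p_act(c1), p_act(c2); a = 1 means fully active.
r, w1, w2, K, a1, a2 = 2.0, 7.5, 7.5, 1.0, 1.0, 1.0

def rhs(t, y):
    R1, R2 = y
    F = -R1 + r / (1 + 2 * a2 * R2 + w2 * (a2 * R2) ** 2)            # dR1/dt
    G = -R2 + r / (1 + 2 * a1 * R1 / K + w1 * (a1 * R1 / K) ** 2)    # dR2/dt
    return [F, G]

# Two initial conditions on opposite sides of the separatrix flow to
# the two distinct stable fixed points, demonstrating bistability.
hi_R1 = solve_ivp(rhs, (0, 50), [2.0, 0.1]).y[:, -1]   # high-R1, low-R2 state
hi_R2 = solve_ivp(rhs, (0, 50), [0.1, 2.0]).y[:, -1]   # low-R1, high-R2 state
print(hi_R1, hi_R2)
```

The separatrix itself can be traced by integrating Eqn. J3 starting near the unstable (saddle) fixed point, but even the forward-time integration above is enough to confirm which basin a given initial condition belongs to.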

Figure 34:


Bistability regimes in mutual repression as a function of cooperativity. Colored regions denote distinct bistable phase space geometries, defined by whether bistability occurs at very small c1 (c1 = 10^-7 M), very small c2 (c2 = 10^-7 M), both, or neither. (A) Evolution of the geometry of the bistability phase space, sweeping over inducer concentrations (c1, c2), for fixed ω1 = 7.5 and r = 2, with K = 1 in (i) and K = 0.7 in (ii), as the parameter ω2 is varied. (B) Evolution of the geometry of the bistability phase space, sweeping over inducer concentrations (c1, c2), for fixed ω2 = 7.5, r = 2, and K = 0.7, as the parameter ω1 is varied.

Appendix K: Coherent feed-forward loop response to a step function signal

1. Analytical solution for the output Z(t)

We now rewrite the dynamical equations of the coherent feed-forward loop. In particular, we introduce a simplifying notation for the activation terms of the gene products Y and Z. Such a definition is useful because these terms are independent of Y and Z themselves, depending only upon X and the concentration of inducer. To that end, we write the dynamical equations for Y and Z as

dY/dt = -Y + fY(t), (K1)
dZ/dt = -Z + fZ(t), (K2)

with the simplifying notation

fY(t) = (r0Y + r1Y pactX(cX(t)) X) / (1 + pactX(cX(t)) X), (K3)

and

fZ(t) = (r0Z + r1Z [𝒳(t) + 𝒴(t)] + ω r2Z 𝒳(t) 𝒴(t)) / (1 + 𝒳(t) + 𝒴(t) + ω 𝒳(t) 𝒴(t)). (K4)

We recall that the bar indicates quantities for which time is measured in units of 1/γ, and concentrations and dissociation constants are measured in units of KXY. The rates are then in units of γKXY. The notations 𝒳 and 𝒴 are defined as 𝒳(t) = pactX(cX(t)) X / KXZ and 𝒴(t) = pactY(cY) Y(t) / KYZ. We study the response of the coherent feed-forward loop to a step function in effector concentration acting on X, namely,

cX(t) = cXi for t ≤ 0, and cX(t) = cXf for t > 0. (K5)

Because of the step in the active concentration of X, the rescaled concentration 𝒳 is itself subject to a step and can be written as

𝒳(t) = 𝒳i = pactX(cXi) X / KXZ for t ≤ 0, and 𝒳(t) = 𝒳f = pactX(cXf) X / KXZ for t > 0. (K6)

The concentration of effector acting on Y, cY(t), is taken to be constant, cY(t) = cY0. Our goal here is to solve for the feed-forward dynamics analytically and to obtain insights into the system on the basis of such a solution. Such a solution is possible because Eqns. 25 and 26 have a simple form: the time derivative of a variable equals the negative of itself plus a function of time,

dG(t)/dt = -G(t) + f(t). (K7)

Such equations can be solved in their most general form as

G(t) = e^(-t) [ G(0) + ∫0^t e^(t') f(t') dt' ]. (K8)

In the cases of interest here, G(t) is either Y(t) or Z(t), and f(t) is correspondingly either fY(t) or fZ(t), the activation terms for Y and Z in Eqns. K1 and K2, respectively.
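As a quick numerical sanity check of this integrating-factor solution, the sketch below compares Eqn. K8 against direct integration of Eqn. K7 for an arbitrary forcing function; the particular f(t) chosen here is hypothetical and serves only as a test case.

```python
import numpy as np
from scipy.integrate import solve_ivp, quad

# Check the general solution Eqn. K8 for dG/dt = -G + f(t) against
# direct numerical integration, for an arbitrary (hypothetical) f(t).
f = lambda t: 0.5 + np.sin(t) ** 2

def analytic(t, G0=0.3):
    # G(t) = e^{-t} [ G(0) + int_0^t e^{t'} f(t') dt' ]
    integral, _ = quad(lambda s: np.exp(s) * f(s), 0, t)
    return np.exp(-t) * (G0 + integral)

num = solve_ivp(lambda t, y: [-y[0] + f(t)], (0, 5), [0.3],
                rtol=1e-9, atol=1e-12)
err = abs(num.y[0, -1] - analytic(5.0))
print(err)  # agreement to numerical precision
```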

We first solve Eqn. K1, because its dynamics are not coupled to Z(t). Notice that for t > 0, fY(t) is constant. Referring to Eqn. K8, we see that Y evolves from its initial to its final state purely exponentially, according to the time evolution

Y(t) = -ΔY e^(-t) + Yf. (K9)

Here ΔY=Yf-Yi is the difference between the final and initial concentration of Y. We write the explicit expression of those initial and final steady states hereafter,

Yi = (r0Y + r1Y pactX(cXi) X) / (1 + pactX(cXi) X), (K10)
Yf = (r0Y + r1Y pactX(cXf) X) / (1 + pactX(cXf) X). (K11)

As pactY(cY(t)) = pactY(cY0) is constant, the map Y ↦ 𝒴 is a simple proportionality. Therefore, like Y(t), 𝒴(t) also evolves exponentially in time, in the form

𝒴(t) = -Δ𝒴 e^(-t) + 𝒴f, (K12)

with 𝒴f/i = pactY(cY0) Yf/i / KYZ and Δ𝒴 = 𝒴f - 𝒴i. Given this expression for 𝒴(t), we now know the full time dependence of fZ(t) in Eqn. K2. Next, we can solve for the dynamics of Z by substituting fZ(t) into the general solution Eqn. K8. By evaluating the integral, we find that

Z(t) = Zi e^(-t) + e^(-t) ∫0^t e^(t') fZ(t') dt' = Zi e^(-t) + Zf (1 - e^(-t)) + Θ(t) = Zsimple(t) + Θ(t), (K13)

with

Θ(t) = -(Φ Δ𝒴 / S^2) e^(-t) log[ (S e^t - Δ𝒴 (1 + ω𝒳f)) / (S - Δ𝒴 (1 + ω𝒳f)) ],
Φ = ω𝒳f^2 (r2Z - r1Z) + ω𝒳f (r2Z - r0Z) + (r1Z - r0Z),
S = 1 + 𝒳f + 𝒴f + ω𝒳f𝒴f, (K14)

as shown in the main text. The solution of Z(t) cleanly splits into two parts. The first two terms describe the exponential behavior one expects from simple regulation

Zsimple(t) = Zi e^(-t) + Zf (1 - e^(-t)), (K15)

with Zi and Zf the initial and final steady-state concentrations of the output Z, respectively. Their explicit expressions are given by

Zi = (r0Z + r1Z (𝒳i + 𝒴i) + ω r2Z 𝒳i𝒴i) / (1 + 𝒳i + 𝒴i + ω𝒳i𝒴i), (K16)
Zf = (r0Z + r1Z (𝒳f + 𝒴f) + ω r2Z 𝒳f𝒴f) / (1 + 𝒳f + 𝒴f + ω𝒳f𝒴f). (K17)

We see that Θ(t) accounts for the difference between the feed-forward trajectory Z and the simple regulation trajectory Zsimple. As a sanity check, we see that Θ(t) = 0 when t = 0 and as t → ∞, confirming that the feed-forward loop and the simple regulation trajectory have the same initial and final states, as expected.
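The decomposition Z(t) = Zsimple(t) + Θ(t) can be verified numerically by integrating Eqns. K1–K2 directly and comparing against the closed form of Eqns. K13–K15. The sketch below does so for a hypothetical, OR-gate-like parameter set with X and all dissociation constants set to 1 (so that 𝒳 reduces to the active fraction); none of these values are taken from the paper's figures.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Verify Z(t) = Zsimple(t) + Theta(t) (Eqns. K13-K15) for the coherent
# FFL, with hypothetical parameters and X = K's = 1.
r0Y, r1Y = 0.0, 2.0
r0Z, r1Z, r2Z, w = 0.0, 2.0, 10.0, 1.0
Xi, Xf = 0.1, 0.9        # rescaled X before/after an ON step
q = 0.5                  # constant p_actY(cY0)/KYZ prefactor

Yss = lambda x: (r0Y + r1Y * x) / (1 + x)               # steady state of Y
Zss = lambda x, y: (r0Z + r1Z * (x + y) + w * r2Z * x * y) \
                   / (1 + x + y + w * x * y)            # steady state of Z

Yi, Yf = q * Yss(Xi), q * Yss(Xf)    # script-Y initial/final values
dY = Yf - Yi
Zi, Zf = Zss(Xi, Yi), Zss(Xf, Yf)
S = 1 + Xf + Yf + w * Xf * Yf
Phi = w * Xf**2 * (r2Z - r1Z) + w * Xf * (r2Z - r0Z) + (r1Z - r0Z)
A = dY * (1 + w * Xf)

Theta = lambda t: -(Phi * dY / S**2) * np.exp(-t) \
                  * np.log((S * np.exp(t) - A) / (S - A))
Z_exact = lambda t: Zi * np.exp(-t) + Zf * (1 - np.exp(-t)) + Theta(t)

def rhs(t, y):
    Y, Z = y
    # fY is constant after the step; fZ follows script-Y(t) = q * Y(t)
    return [-Y + Yss(Xf), -Z + Zss(Xf, q * Y)]

ts = np.linspace(0, 8, 60)
sol = solve_ivp(rhs, (0, 8), [Yss(Xi), Zi], t_eval=ts,
                rtol=1e-10, atol=1e-12)
err = np.max(np.abs(sol.y[1] - Z_exact(ts)))
print(err)  # agreement to numerical precision
```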

2. Derivation and sign of the average delay Δt

From the analytical expression of Z(t), we can then also derive the average time delay from the offset Θ(t)

Δt = [1/(Zf - Zi)] ∫0^∞ Θ(t) dt. (K18)

With Eqn. K14, we can analytically evaluate the integral and obtain the following expression for the average delay

Δt = Φ / [(Zf - Zi) S (1 + ω𝒳f)] · log[ (1 + 𝒳f + 𝒴i + ω𝒳f𝒴i) / S ]. (K19)

As a reminder, Δt signifies the average time difference between the feed-forward loop response and the simple regulation response. The sign of Δt indicates whether the feed-forward loop delays the response (Δt < 0) or accelerates it (Δt > 0). From Eqn. K19, we can analytically determine whether the feed-forward loop delays or accelerates by treating the contribution from each factor. To begin with, we have S > 0 and 1 + ω𝒳f > 0, as concentrations are strictly non-negative. For the coherent feed-forward loop, we have Φ ≥ 0, since r2Z ≥ r1Z ≥ r0Z because both X and Y activate Z. The term Zf - Zi depends on the direction of the step: for an ON step, Zf - Zi > 0, and for an OFF step, Zf - Zi < 0. Finally, the logarithm also depends on the direction of the step. For an ON step, we have

1 + 𝒳f + 𝒴i + ω𝒳f𝒴i ≤ 1 + 𝒳f + 𝒴f + ω𝒳f𝒴f = S, (K20)
log[ (1 + 𝒳f + 𝒴i + ω𝒳f𝒴i) / S ] ≤ 0, (K21)

since 𝒳f ≥ 𝒳i and 𝒴f ≥ 𝒴i. Combined with the signs of the other factors, we find Δt ≤ 0 for an ON step. Similarly, for the OFF step, we have

1 + 𝒳f + 𝒴i + ω𝒳f𝒴i ≥ 1 + 𝒳f + 𝒴f + ω𝒳f𝒴f = S, (K22)
log[ (1 + 𝒳f + 𝒴i + ω𝒳f𝒴i) / S ] ≥ 0. (K23)

Together with the other factors, we find Δt ≤ 0 for the OFF step as well.
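The sign argument above can be confirmed numerically: the sketch below evaluates Eqn. K19 for both step directions and cross-checks it against direct quadrature of Θ(t) in Eqn. K18. As before, the parameter values are hypothetical illustrative choices.

```python
import numpy as np
from scipy.integrate import quad

# Evaluate the average delay of Eqn. K19 for both step directions and
# cross-check against quadrature of Theta(t) (Eqn. K18).
r0Z, r1Z, r2Z, w = 0.0, 2.0, 10.0, 1.0
q = 0.5                             # constant p_actY prefactor
Yss = lambda x: 2.0 * x / (1 + x)   # Y steady state (r0Y = 0, r1Y = 2)

def delay(Xi, Xf):
    Yi, Yf = q * Yss(Xi), q * Yss(Xf)    # script-Y initial/final
    Zss = lambda x, y: (r0Z + r1Z*(x + y) + w*r2Z*x*y) / (1 + x + y + w*x*y)
    Zi, Zf = Zss(Xi, Yi), Zss(Xf, Yf)
    S = 1 + Xf + Yf + w*Xf*Yf
    Phi = w*Xf**2*(r2Z - r1Z) + w*Xf*(r2Z - r0Z) + (r1Z - r0Z)
    A = (Yf - Yi) * (1 + w*Xf)
    dt_formula = Phi / ((Zf - Zi) * S * (1 + w*Xf)) * np.log((S - A) / S)
    # Numerically stable rewriting of the logarithm in Eqn. K14:
    # log(S e^t - A) = t + log(S - A e^{-t})
    Theta = lambda t: -(Phi*(Yf - Yi)/S**2) * np.exp(-t) \
                      * (t + np.log((S - A*np.exp(-t)) / (S - A)))
    dt_quad = quad(Theta, 0, np.inf)[0] / (Zf - Zi)
    return dt_formula, dt_quad

on = delay(0.1, 0.9)    # ON step
off = delay(0.9, 0.1)   # OFF step
print(on, off)          # both negative: the coherent FFL always delays
```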

We have shown that the average time difference Δt is negative for both the ON and OFF steps. To complete the analysis, we will further demonstrate that the time difference Δt(Z), defined in Sec. IV A, has the same sign for any concentration Z. This amounts to saying that the feed-forward trajectory and the simple regulation trajectory never cross each other. We can show this by proving that Θ(t) has the same sign for any t > 0. In Eqn. K14, we observe that the time dependence of Θ(t) appears in e^(-t) and in S e^t inside the logarithm. Since e^(-t) > 0 for any t, only the logarithm term might change sign as time evolves. However, we observe that

S - Δ𝒴 (1 + ω𝒳f) = 1 + 𝒳f + 𝒴i + ω𝒳f𝒴i > 0. (K24)

In other words, S > Δ𝒴 (1 + ω𝒳f). Since e^t > 1 for any t > 0, it is always true that S e^t > Δ𝒴 (1 + ω𝒳f). Therefore, the sign of the logarithm does not change with time. Thus, we have proven that Δt(Z) has the same sign as Δt for any concentration Z.

3. Logic gates analysis in coherent feed-forward loop

In this appendix section, we will prove that for the XOR gate in the coherent feed-forward loop, the OFF step delay is always greater than the ON step delay. In the XOR limit, ω = 0, and Eqn. 38 simplifies to

Δt = (r1Z - r0Z) / [(Zf - Zi)(1 + 𝒳f + 𝒴f)] · log[ (1 + 𝒳f + 𝒴i) / (1 + 𝒳f + 𝒴f) ]. (K25)

The time delay Δt is different for ON and OFF steps because the 𝒳 and 𝒴 values at t = 0 and t → ∞ differ between the two cases. Specifically, for the ON step, 𝒳f = 𝒳max and 𝒴f = 𝒴max, while for the OFF step, 𝒳f = 𝒳min and 𝒴f = 𝒴min. Let us now compare the time delays for the ON and OFF steps by taking their ratio,

ΔtOFF/ΔtON = -[(1 + 𝒳max + 𝒴max)/(1 + 𝒳min + 𝒴min)] · log[(1 + 𝒳min + 𝒴max)/(1 + 𝒳min + 𝒴min)] / log[(1 + 𝒳max + 𝒴min)/(1 + 𝒳max + 𝒴max)]
= [(1 + 𝒳max + 𝒴max)/(1 + 𝒳min + 𝒴min)] · log[(1 + 𝒳min + 𝒴max)/(1 + 𝒳min + 𝒴min)] / log[(1 + 𝒳max + 𝒴max)/(1 + 𝒳max + 𝒴min)], (K26)

where the overall minus sign in the first equality arises because Zf - Zi changes sign between the two steps, and is absorbed in the second equality by inverting the argument of the last logarithm.

Note that 1 + 𝒳max + 𝒴max ≥ 1 + 𝒳min + 𝒴min, and that

(1 + 𝒳min + 𝒴max)/(1 + 𝒳min + 𝒴min) ≥ (1 + 𝒳max + 𝒴max)/(1 + 𝒳max + 𝒴min), (K27)

because for any fraction a/b with a ≥ b > 0, we have a/b ≥ (a + c)/(b + c) for any c ≥ 0. Here, a = 1 + 𝒳min + 𝒴max, b = 1 + 𝒳min + 𝒴min, and c = 𝒳max - 𝒳min. As the logarithm is an increasing function, this means

log[(1 + 𝒳min + 𝒴max)/(1 + 𝒳min + 𝒴min)] ≥ log[(1 + 𝒳max + 𝒴max)/(1 + 𝒳max + 𝒴min)]. (K28)

Therefore, the ratio satisfies ΔtOFF/ΔtON ≥ 1. We have demonstrated that for the XOR gate, the OFF step delay is larger than the ON step delay. We cannot say much analytically about the magnitude of this ratio; as we see in Fig. 22(A), it can range anywhere from close to 1 to a large number.
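The inequality ΔtOFF/ΔtON ≥ 1 can be spot-checked numerically across random parameter draws, using the XOR-limit delay of Eqn. K25. The rate values and sampling ranges below are hypothetical choices made only for this check.

```python
import numpy as np

# For the XOR gate (omega = 0), check numerically that the OFF-step
# delay always exceeds the ON-step delay (Eqns. K25-K28).
rng = np.random.default_rng(0)
r0Z, r1Z = 0.0, 2.0

def xor_delay(Xi, Xf, Yi, Yf):
    # Eqn. K25 with omega = 0; Zss is the XOR steady state of Z.
    Zss = lambda x, y: (r0Z + r1Z * (x + y)) / (1 + x + y)
    Zi, Zf = Zss(Xi, Yi), Zss(Xf, Yf)
    return (r1Z - r0Z) / ((Zf - Zi) * (1 + Xf + Yf)) \
        * np.log((1 + Xf + Yi) / (1 + Xf + Yf))

ratios = []
for _ in range(200):
    Xmin, Ymin = rng.uniform(0, 1, 2)
    Xmax, Ymax = Xmin + rng.uniform(0.1, 5), Ymin + rng.uniform(0.1, 5)
    on = xor_delay(Xmin, Xmax, Ymin, Ymax)    # ON step: min -> max
    off = xor_delay(Xmax, Xmin, Ymax, Ymin)   # OFF step: max -> min
    ratios.append(off / on)
print(min(ratios))  # always >= 1
```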

From here, we might be tempted to argue that the AND gate also always satisfies ΔtOFF/ΔtON ≥ 1. However, Fig. 22(B)(iii) provides a counterexample; the corresponding statement for the AND gate is thus false.

4. Analytic solutions of other feed-forward loop networks.

In any feed-forward loop, X regulates Y and Z, and Y regulates Z. As there are three regulatory interactions, there exist 2^3 = 8 different regulatory logics. For the dynamical equation of Y, if X activates Y, then

dY/dt = -Y + (r0Y + r1Y pactX(cX) X) / (1 + pactX(cX) X). (K29)

Otherwise, if X represses Y, then

dY/dt = -Y + r0Y / (1 + pactX(cX) X). (K30)

There exist 4 different possibilities for how X and Y regulate Z. In the main text, we showed the case where X and Y both activate Z, and the case where X activates Z but Y represses Z. If X represses Z, but Y activates Z, we have

dZ/dt = -Z + (r0Z + r1Z 𝒴) / (1 + 𝒳 + 𝒴 + ω𝒳𝒴). (K31)

If both X and Y repress Z, then

dZ/dt = -Z + r0Z / (1 + 𝒳 + 𝒴 + ω𝒳𝒴). (K32)

Using the procedures described in the main text, we can similarly find analytic solutions describing all the other regulatory logics of the feed-forward architecture. Note that for a step-function signal in cX and cY, we can always write

𝒴(t) = -Δ𝒴 e^(-t) + 𝒴f. (K33)

Thus, we can compute the analytic expression of Z without worrying about how Y is regulated. We find that the solutions all have the same form, except with a different Φ for each regulatory logic. Specifically, we have

ΦXY = ω𝒳^2 (r2Z - r1Z) + ω𝒳 (r2Z - r0Z) + (r1Z - r0Z), (K34)
ΦX = -(r1Z 𝒳 + r0Z)(1 + ω𝒳), (K35)
ΦY = r1Z (1 + 𝒳) - r0Z (1 + ω𝒳), (K36)
Φ0 = -r0Z (1 + ω𝒳), (K37)

where the subscript on Φ indicates which TF activates Z. Here 𝒳 = 𝒳f is the value of 𝒳 after the step-function jump. Interestingly, we can write these expressions as

ΦXY = ω𝒳 (1 + 𝒳) r2Z + ΦX + ΦY - Φ0, (K38)
ΦX = -r1Z 𝒳 (1 + ω𝒳) + Φ0, (K39)
ΦY = r1Z (1 + 𝒳) + Φ0, (K40)
Φ0 = -r0Z (1 + ω𝒳). (K41)

A way to interpret this is that Φ gains a new term, associated with the statistical weight of a state, whenever that state changes from no expression to expression.
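The relation of Eqn. K38 between the four Φ expressions can be verified symbolically; the short check below expands both sides of K34–K41 and confirms their difference vanishes identically.

```python
import sympy as sp

# Symbolic check of Phi_XY = omega*X*(1+X)*r2 + Phi_X + Phi_Y - Phi_0
# (Eqn. K38), using the expressions of Eqns. K34-K37.
X, w, r0, r1, r2 = sp.symbols('X omega r0 r1 r2')

Phi_XY = w*X**2*(r2 - r1) + w*X*(r2 - r0) + (r1 - r0)   # K34
Phi_X = -(r1*X + r0) * (1 + w*X)                        # K35
Phi_Y = r1*(1 + X) - r0*(1 + w*X)                       # K36
Phi_0 = -r0*(1 + w*X)                                   # K37

residual = sp.expand(Phi_XY - (w*X*(1 + X)*r2 + Phi_X + Phi_Y - Phi_0))
print(residual)  # 0
```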

Appendix L: Functionality condition comparison

In Sec. IV B, we mentioned that besides the time delay, another helpful criterion for a functional feed-forward loop is the existence of a large difference between the maximum and minimum steady-state values of Z. Let us define ΔZ = Zf - Zi. We want this difference to be large so that the dynamical change between low and high concentrations is meaningful. The steady-state concentrations Zf and Zi are given by

Zi = (r0Z + r1Z (𝒳i + 𝒴i) + ω r2Z 𝒳i𝒴i) / (1 + 𝒳i + 𝒴i + ω𝒳i𝒴i), (L1)
Zf = (r0Z + r1Z (𝒳f + 𝒴f) + ω r2Z 𝒳f𝒴f) / (1 + 𝒳f + 𝒴f + ω𝒳f𝒴f). (L2)

We observe that both Zi and Zf scale linearly with the rate parameters riZ for i = 0, 1, 2. For this reason, in a theoretical setting, a large ΔZ can always be obtained by tuning the rate parameters high; we therefore did not include this criterion in the main text. Nevertheless, it is worthwhile to demonstrate the dependence of ΔZ on the dissociation constants KXZ and KYZ. We perform sweeps similar to those of Fig. 22, except that we plot ΔZ for each choice of (KXZ, KYZ). The result is shown in Fig. 35, using the same logic gate parameters as in Fig. 22. We observe that the region of large relative ΔZ tends to have an L-shape. The XOR and AND gates have regions of large ΔZ that extend in opposite directions: while the XOR gate tends to prefer weak binding, the AND gate benefits from strong binding. The OR gate shape is a superposition of the XOR and AND gate shapes, which is perhaps not surprising since the XOR and AND gates are, in a sense, limiting cases of the OR gate. Note that the difference in the magnitude of ΔZ across logic gates is artificial: the absolute magnitude of the OR gate ΔZ is large only because r2Z = 10. This verifies the earlier claim that ΔZ scales linearly with the production rates.
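The linear scaling of ΔZ with the rate parameters follows directly from Eqns. L1–L2, since the rates appear only in the numerators. The sketch below checks this for one hypothetical choice of states and rates: doubling all rates doubles ΔZ.

```python
# Verify that Delta-Z = Zf - Zi (Eqns. L1-L2) scales linearly with the
# rate parameters r0Z, r1Z, r2Z. States and rates here are hypothetical.
def Zss(x, y, r0, r1, r2, w):
    # Steady state of Z for rescaled inputs x (script-X) and y (script-Y)
    return (r0 + r1 * (x + y) + w * r2 * x * y) / (1 + x + y + w * x * y)

def dZ(scale):
    r0, r1, r2, w = 0.1 * scale, 2.0 * scale, 10.0 * scale, 1.0
    return Zss(0.9, 0.5, r0, r1, r2, w) - Zss(0.1, 0.05, r0, r1, r2, w)

print(dZ(2.0) / dZ(1.0))  # doubling all rates doubles Delta-Z
```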

Figure 35:


Systematic sweep to find regions of large ΔZ = Zmax - Zmin. For every logic gate, we sweep across (KXZ, KYZ) ∈ [10^-3, 10^3] × [10^-3, 10^3] and inspect the region of large relative ΔZ. (A) XOR gate. (B) AND gate. (C) OR gate. The XOR gate parameters are r0Y = r0Z = 0, r1Y = r1Z = 2, and ω = 0. The AND gate parameters are r0Y = r0Z = r1Z = 0, r1Y = r2Z = 2, and ω = 10. The OR gate parameters are r0Y = r0Z = 0, r1Y = r1Z = 2, r2Z = 10, and ω = 1. These are the same as in Fig. 22.

Regarding the average time delay Δt, we present computational evidence for the existence of an upper bound on Δt, given a step in pactX(cX) (with the high and low values of pactX fixed). The full set of tunable parameters of the system is riY, rjZ, ω, X, KXZ, and KYZ, where i ∈ {0, 1} and j ∈ {0, 1, 2}. They span a semi-infinite 9-dimensional parameter space, and sweeping across this entire space is computationally prohibitive. As a result, we instead take a few 2-dimensional slices to illustrate the existence of the upper bound on Δt. Specifically, we pairwise tune 5 different parameters: r0 = r0Y = r0Z, r1 = r1Y = r1Z, r2Z, ω, and X. For each parameter combination, we search in the K-subspace (as in Fig. 22, (KXZ, KYZ) ∈ [10^-3, 10^3] × [10^-3, 10^3]), find the combination of (KXZ, KYZ) that generates the maximum Δt, and record the value of the largest time delay as Δtmax. In Fig. 36, we plot Δtmax as a function of 10 different parameter pairs. We find that all parameter combinations yield Δt < 5, and Δt remains finite as any parameter (where possible) is tuned towards infinity.

Finally, we address the dependence of Δt on the leakiness and saturation of pactX(cX). Here, we denote the minimal pactX(cX) in a step as pmin and the maximal pactX(cX) as pmax. After extensive numerical experimentation, we find that the saturation pmax plays a smaller role in determining Δtmax: the limit pmax = 1 yields a similar Δtmax as pmax = 0.95, and decreasing pmax only makes Δtmax smaller. In contrast to pmax, pmin plays a much bigger role. We demonstrate this dependence in Fig. 37. Here, we examine the OFF step delay in the XOR coherent feed-forward loop, as the previous sweeps in Fig. 22 and Fig. 36 show that this tends to be the setting that generates the largest delay. For a given pmin, we sweep across r1Y = r1Z = r1 and X. For each combination of r1 and X, we again perform another sweep over (KXZ, KYZ) ∈ [10^-3, 10^3] × [10^-3, 10^3] to find the largest Δt. We see from Fig. 37 that as pmin decreases, the maximal Δtmax increases. We note, however, that Δtmax still converges computationally for finite pmin. Due to the constraints of the explicit effector function pactX(cX), pmin is finite for any biological parameter choice. For this reason, we demonstrate the maximum delay corresponding to a biologically sensible set of parameters and investigate its significance in the main text.

Figure 36:


Systematically sweeping across all tunable parameters in coherent feed-forward loops to find the largest Δt for the step in pactX(cX) used throughout the feed-forward loop section, where cXmax = 10^-4 and cXmin = 10^-7. We pick a default set of parameters: r0Y = r0Z = r0 = 0, r1Y = r1Z = r1 = 1, r2Z = 5, ω = 5, X = 1. Note that we set r0Y = r0Z and r1Y = r1Z to decrease the number of degrees of freedom in the parameter space without losing too much information. For every colormap, we select a pair of parameters and perform sweeps on a grid of values, while the other parameters remain at their default values. Each value pair is then used to search the space (KXZ, KYZ) ∈ [10^-3, 10^3] × [10^-3, 10^3], and the maximum Δt is recorded.

Appendix M: Technical details in incoherent feed-forward loop

In this appendix, we return to some technical details regarding pulses in the incoherent feed-forward loop, as presented in Sec. IV C. To begin with, we discuss the quantity Δt in incoherent feed-forward loops. In the coherent feed-forward loop, the definitions of Δt in Eqn. 36 and Eqn. 37 are equivalent, allowing us to interpret it as the average time delay across concentrations. Here, when Z does not exhibit a pulse, there is no difference from the coherent case. However, Eqn. 36 becomes ill-defined when Z exhibits a pulse, because the feed-forward loop response can then reach concentrations Z that the simple regulation curve never reaches. A value can still be computed for Δt using Eqn. 37, but it contains information about the relative size of the pulse rather than about acceleration. Because Δt carries units of time even in this case, in the main text we instead report the direct magnitude of the pulse, Zmax - Zf, for an increasing Z response.

Figure 37:


The effect of the leakiness of pactX(cX) on Δt. We examine the dependence of Δt on the minimal pactX(cX) in a step, pmin, in the XOR gate setting. Parameters used are r0Y = r0Z = 0 and ω = 0. The values of r1Y = r1Z = r1 and X are tuned to find the largest Δt in that parameter subspace. The detailed sweep process is identical to that in Fig. 36.

Next, let us derive the maximal average time acceleration Δtmax when a pulse does not exist. The green curve shown in Fig. 24(C) and (D) is the trajectory with the largest acceleration; denote this trajectory Zstep(t). Mathematically, Zstep(t) = Zf for t > 0. The simple regulation curve is again

Zsimple(t) = Zi e^(-t) + Zf (1 - e^(-t)). (M1)

From Eqn. 37, we can then compute the maximal average time acceleration as

Δtmax = [1/(Zf - Zi)] ∫0^∞ dt [Zstep(t) - Zsimple(t)] = [1/(Zf - Zi)] ∫0^∞ dt [Zf - Zi e^(-t) - Zf (1 - e^(-t))] = [1/(Zf - Zi)] ∫0^∞ dt (Zf - Zi) e^(-t) = (Zf - Zi)/(Zf - Zi) = 1. (M2)
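The chain of integrals in Eqn. M2 can be confirmed by direct quadrature; the endpoints Zi and Zf below are arbitrary hypothetical values, since the result is independent of them.

```python
import numpy as np
from scipy.integrate import quad

# Quadrature check of Eqn. M2: the largest average acceleration,
# obtained for an instantaneous step response Z_step(t) = Zf,
# equals 1 (in units of 1/gamma) regardless of Zi and Zf.
Zi, Zf = 0.4, 2.5
integrand = lambda t: (Zf - (Zi * np.exp(-t) + Zf * (1 - np.exp(-t)))) \
                      / (Zf - Zi)
dt_max, _ = quad(integrand, 0, np.inf)
print(dt_max)  # ~1.0
```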

Finally, we discuss the definition of a strong pulse used in Fig. 24. Numerically, we characterize a pulse as strong when the maximum concentration that the transient Z(t) reaches, Zmax, satisfies (Zmax - Zf)/(Zf - Zi) > 0.05 for an increasing Z response. The threshold of 0.05 is an arbitrary choice. Due to this threshold, there exists a region of trajectories that are technically pulses but are not strong enough to qualify. The trajectory in Fig. 24(C) in fact belongs to this region: strictly speaking, it exhibits a pulse, but the magnitude of the pulse is vanishingly small. Functionally, it acts in the same manner as trajectories that do not possess a pulse.

Appendix N: Effect of Y in continuous tuning

In Sec. IV D, we discussed the scenario where the signal cX(t) is no longer a step function but a continuous function of time. We mentioned that the comparison with simple regulation is subtle in this case, as the arbitrary choice of the value Y affects the shape of the simple regulation response. Here, we expand on Fig. 25 by repeating the numerical integration for three distinct values of Y, as shown in Fig. 38. For the fast tuning case, different choices of Y have no effect on the simple regulation trajectory. This is expected, since in the limiting case of a step-function signal in cX, the value of Y strictly has no effect on the shape of the trajectory. As the rate of tuning cX slows down, the effect of Y becomes more and more pronounced. Specifically, as Y increases, the rescaled simple regulation curve “shrinks”. As a result, the magnitude of the apparent delay/acceleration between the feed-forward loop and simple regulation responses increases. Due to the artificial nature of the choice of Y in simple regulation, we cannot make a claim about the magnitude of the delay/acceleration in the slow tuning limit. Nevertheless, our qualitative results stand: the choice of Y does not change the type of response to a step. For example, even though Y affects the magnitude of the delay on the OFF step, the feed-forward loop in this case always delays on the OFF step.

Figure 38:


Demonstrating how the arbitrary choice of Y affects the simple regulation dynamics. We replicate Fig. 25 with different choices of Y. From top to bottom, Y is set to 0.01 in (i), 1 in (ii), which matches the setting in Fig. 25, and 100 in (iii). From left to right, the rate of concentration tuning varies in the same way as in Fig. 25. System parameters are also identical to those in Fig. 25.

Appendix O: Code availability

All Jupyter notebooks used to generate graphs in figures throughout this paper are available [79].

References

  • [1].Kelvin L., Nineteenth Century Clouds over the Dynamical Theory of Heat and Light, Phil. Mag. 2, 1 (1901). [Google Scholar]
  • [2].Gamow G., Thirty Years that Shook Physics: The Story of Quantum Theory (Dover Publications, New York, 1985). [Google Scholar]
  • [3].Segre E., From X-rays to Quarks : Modern Physicists and Their Discoveries (Dover Publications, Mineola, N.Y., 2007). [Google Scholar]
  • [4].Longair M. S., Theoretical Concepts in Physics: An Alternative View of Theoretical Reasoning in Physics (Cambridge University Press, 2020). [Google Scholar]
  • [5].Judson H. F., The Eighth Day of Creation (Cold Spring Harbor Laboratory Press, New York, 1996). [Google Scholar]
  • [6].Novick A. and Weiner M., Enzyme induction as an all-or-none phenomenon, Proc. Natl. Acad. Sci. USA 43, 553 (1957). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [7].Barnett J. A., A history of research on yeasts 7: enzymic adaptation and regulation, Yeast 21, 703 (2004). [DOI] [PubMed] [Google Scholar]
  • [8].Monod J., The Growth of Bacterial Cultures, Ann. Rev. Microbiol. 3, 371 (1949). [Google Scholar]
  • [9].Echols H. and Gross C., Operators and Promoters: The Story of Molecular Biology and Its Creators (University of California Press, Berkeley, 2001). [Google Scholar]
  • [10].Jacob F. and Monod J., Genetic regulatory mechanisms in the synthesis of proteins, J. Mol. Biol. 3, 318 (1961). [DOI] [PubMed] [Google Scholar]
  • [11].Müller-Hill B., The lac Operon: A Short History of a Genetic Paradigm (Walter de Gruyter, Berlin, New York, 1996). [Google Scholar]
  • [12].Englesberg E., Irr J., Power J., and Lee N., Positive control of enzyme synthesis by gene C in the L-arabinose system, J. Bacteriol. 90, 946 (1965). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [13].Schleif R. F., Genetics and Molecular Biology, 2nd ed. (Johns Hopkins University Press, Baltimore, 1993). [Google Scholar]
  • [14].Britten R. J. and Davidson E. H., Gene regulation for higher cells: a theory, Science 165, 349 (1969). [DOI] [PubMed] [Google Scholar]
  • [15].Ptashne M., A Genetic Switch: Phage Lambda Revisited, 3rd ed. (Cold Spring Harbor Laboratory Press, Cold Spring Harbor, N.Y., 2004). [Google Scholar]
  • [16].Ptashne M. and Gann A., Genes and Signals (Cold Spring Harbor Laboratory Press, New York, 2002). [Google Scholar]
  • [17].Zuniga A. and Zeller R., In Turing’s hands—the making of digits, Science 345, 516 (2014). [DOI] [PubMed] [Google Scholar]
  • [18].Corson F. and Siggia E. D., Geometry, epistasis, and developmental patterning, Proc. Nat. Acad. Sci. USA 109, 5568 (2012). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [19].Schindler A. J. and Sherwood D. R., Morphogenesis of the Caenorhabditis elegans vulva, Wiley Interdiscip. Rev. Dev. Biol. 2, 75 (2013). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [20].Liu X., Huang J., Chen T., Wang Y., Xin S., Li J., Pei G., and Kang J., Yamanaka factors critically regulate the developmental signaling network in mouse embryonic stem cells, Cell Res. 18, 1177 (2008). [DOI] [PubMed] [Google Scholar]
  • [21].Ackers G. K., Johnson A. D., and Shea M. A., Quantitative model for gene regulation by lambda phage repressor, Proc. Natl. Acad. Sci. USA 79, 1129 (1982). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [22].Shea M. A. and Ackers G. K., The OR control system of bacteriophage lambda. A physical-chemical model for gene regulation, J. Mol. Biol. 181, 211 (1985). [DOI] [PubMed] [Google Scholar]
  • [23].Buchler N. E., Gerland U., and Hwa T., On schemes of combinatorial transcription logic, Proc. Natl. Acad. Sci. USA 100, 5136 (2003). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [24].Oehler S., Amouyal M., Kolkhof P., von Wilcken-Bergmann B., and Müller-Hill B., Quality and position of the three lac operators of E. coli define efficiency of repression, EMBO J. 13, 3348 (1994). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [25].Müller J., Oehler S., and Müller-Hill B., Repression of lac promoter as a function of distance, phase and quality of an auxiliary lac operator, J. Mol. Biol. 257, 21 (1996). [DOI] [PubMed] [Google Scholar]
  • [26].Vilar J. M. and Leibler S., DNA looping and physical constraints on transcription regulation, J. Mol. Biol. 331, 981 (2003). [DOI] [PubMed] [Google Scholar]
  • [27].Vilar J. M., Guet C. C., and Leibler S., Modeling network dynamics: the lac operon, a case study, J. Cell Biol. 161, 471 (2003). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [28].Kuhlman T., Zhang Z., Saier M. H. Jr., and Hwa T., Combinatorial transcriptional control of the lactose operon of Escherichia coli, Proc. Natl. Acad. Sci. USA 104, 6043 (2007). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [29].Gardner T. S., Cantor C. R., and Collins J. J., Construction of a genetic toggle switch in Escherichia coli, Nature 403, 339 (2000). [DOI] [PubMed] [Google Scholar]
  • [30].Elowitz M. B. and Leibler S., A synthetic oscillatory network of transcriptional regulators, Nature 403, 335 (2000). [DOI] [PubMed] [Google Scholar]
  • [31].Ozbudak E. M., Thattai M., Lim H. N., Shraiman B. I., and Van Oudenaarden A., Multistability in the lactose utilization network of Escherichia coli, Nature 427, 737 (2004). [DOI] [PubMed] [Google Scholar]
  • [32].Ullmann A., In memoriam: Jacques Monod (1910–1976), Genome Biol. Evol. 3, 1025 (2011). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [33].Gerhart J. C. and Pardee A. B., Enzymology of control by feedback inhibition, J. Biol. Chem. 237, 891 (1962). [PubMed] [Google Scholar]
  • [34].Monod J., Changeux J., and Jacob F., Allosteric proteins and cellular control systems, J. Mol. Biol. 6, 306 (1963). [DOI] [PubMed] [Google Scholar]
  • [35].Monod J., Wyman J., and Changeux J. P., On the nature of allosteric transitions: a plausible model, J. Mol. Biol. 12, 88 (1965). [DOI] [PubMed] [Google Scholar]
  • [36].Martins B. M. and Swain P. S., Trade-offs and constraints in allosteric sensing, PLoS Comput. Biol. 7, e1002261 (2011). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [37].Marzen S., Garcia H. G., and Phillips R., Statistical mechanics of Monod-Wyman-Changeux (MWC) models, J. Mol. Biol. 425, 1433 (2013). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [38].Changeux J. P., 50 years of allosteric interactions: the twists and turns of the models, Nat. Rev. Mol. Cell Biol. 14, 819 (2013). [DOI] [PubMed] [Google Scholar]
  • [39].Gerhart J., From feedback inhibition to allostery: the enduring example of aspartate transcarbamoylase, FEBS J. 281, 612 (2014). [DOI] [PubMed] [Google Scholar]
  • [40].Phillips R., The Molecular Switch: Signaling and Allostery (Princeton University Press, 2020). [Google Scholar]
  • [41].Goodwin B. C., Temporal Organization in Cells: A Dynamic Theory of Cellular Control Processes (Academic Press, London, New York, 1963). [Google Scholar]
  • [42].Cherry J. L. and Adler F. R., How to make a biological switch, J. Theor. Biol. 203, 117 (2000). [DOI] [PubMed] [Google Scholar]
  • [43].Alon U., An Introduction to Systems Biology: Design Principles of Biological Circuits, 2nd ed. (CRC Press, Taylor & Francis Group, Boca Raton, 2020). [Google Scholar]
  • [44].Covert M., Fundamentals of Systems Biology: From Synthetic Circuits to Whole-Cell Models (CRC Press, Taylor & Francis Group, Boca Raton, 2015). [Google Scholar]
  • [45].Ferrell J. E., Systems Biology of Cell Signaling: Recurring Themes and Quantitative Models (CRC Press, Boca Raton, FL, 2022). [Google Scholar]
  • [46].Bintu L., Buchler N. E., Garcia H. G., Gerland U., Hwa T., Kondev J., and Phillips R., Transcriptional regulation by the numbers: models, Curr. Opin. Genet. Dev. 15, 116 (2005). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [47].Bintu L., Buchler N. E., Garcia H. G., Gerland U., Hwa T., Kondev J., Kuhlman T., and Phillips R., Transcriptional regulation by the numbers: applications, Curr. Opin. Genet. Dev. 15, 125 (2005). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [48].Sherman M. S. and Cohen B. A., Thermodynamic state ensemble models of cis-regulation, PLoS Comput. Biol. 8, e1002407 (2012). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [49].Phillips R., Kondev J., Theriot J., and Garcia H. G., Physical biology of the cell, 2nd Edition (Garland Science, New York, 2013) (Illustrated by N. Orme). [Google Scholar]
  • [50].Sokolik C., Liu Y. X., Bauer D., McPherson J., Broeker M., Heimberg G., Qi L. S., Sivak D. A., and Thomson M., Transcription Factor Competition Allows Embryonic Stem Cells to Distinguish Authentic Signals from Noise, Cell Systems 1, 117 (2015). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [51].Mangan S. and Alon U., Structure and function of the feed-forward loop network motif, Proc. Nat. Acad. Sci. USA 100, 11980 (2003). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [52].Mangan S., Zaslaver A., and Alon U., The coherent feed-forward loop serves as a sign-sensitive delay element in transcription networks, J. Mol. Biol. 334, 197 (2003). [DOI] [PubMed] [Google Scholar]
  • [53].Mangan S., Itzkovitz S., Zaslaver A., and Alon U., The incoherent feed-forward loop accelerates the response-time of the gal system of Escherichia coli, J. Mol. Biol. 356, 1073 (2006). [DOI] [PubMed] [Google Scholar]
  • [54].Laslo P., Pongubala J. M., Lancki D. W., and Singh H., Gene regulatory networks directing myeloid and lymphoid cell fates within the immune system, in Seminars in immunology, Vol. 20 (Elsevier, 2008) pp. 228–235. [DOI] [PubMed] [Google Scholar]
  • [55].Chute J. P., Ross J. R., and McDonnell D. P., Minire-view: nuclear receptors, hematopoiesis, and stem cells, Molecular Endocrinology 24, 1 (2010). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [56].Carroll S., Grenier J., and Weatherbee S., From DNA to Diversity: Molecular Genetics and the Evolution of Animal Design (Blackwell Science, Malden, MA, 2001). [Google Scholar]
  • [57].Jaeger J., The gap gene network, Cell. Mol. Life Sci. 68, 243 (2011). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [58].Siegal-Gaskins D., Mejia-Guerra M. K., Smith G. D., and Grotewold E., Emergence of switch-like behavior in a large family of simple biochemical networks, PLoS Comput. Biol. 7, e1002039 (2011). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [59].Subsoontorn P., Kim J., and Winfree E., Bistability of an in vitro synthetic autoregulatory switch, arXiv preprint arXiv:1101.0723 (2011). [Google Scholar]
  • [60].Becskei A., Séraphin B., and Serrano L., Positive feedback in eukaryotic gene networks: cell differentiation by graded to binary response conversion, EMBO J. (2001). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [61].Laxhuber K. S., Morrison M. J., Chure G., Belliveau N. M., Strandkvist C., Naughton K. L., and Phillips R., Theoretical investigation of a genetic switch for metabolic adaptation, PloS One 15, e0226453 (2020). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [62].Hill A. V., The possible effects of the aggregations of the molecules of haemoglobin on its dissociation curves, The Journal of Physiology 40, i (1910). [Google Scholar]
  • [63].Hill A. V., The combinations of haemoglobin with oxygen and with carbon monoxide. I, Biochem. J. 7, 471 (1913). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [64].Rousseau R. J. and Phillips R., Bifurcations and multi-stability in an inducible three-gene switch network, APS Joint March and April Meeting, Anaheim, CA, March 21, 2025. [Google Scholar]
  • [65].Tyson J. J., Chen K. C., and Novak B., Sniffers, buzzers, toggles and blinkers: dynamics of regulatory and signaling pathways in the cell, Curr. Opin. Cell Biol. 15, 221 (2003). [DOI] [PubMed] [Google Scholar]
  • [66].Mitrophanov A. Y., Hadley T. J., and Groisman E. A., Positive autoregulation shapes response timing and intensity in two-component signal transduction systems, J. Mol. Biol. 401, 671 (2010). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [67].Rosenfeld N., Elowitz M. B., and Alon U., Negative autoregulation speeds the response times of transcription networks, J. Mol. Biol. 323, 785 (2002). [DOI] [PubMed] [Google Scholar]
  • [68].Strogatz S. H., Nonlinear Dynamics and Chaos : With Applications to Physics, Biology, Chemistry, and Engineering, 2nd ed. (Westview Press, a member of the Perseus Books Group, Boulder, CO, 2015). [Google Scholar]
  • [69].Huang S., Guo Y. P., May G., and Enver T., Bifurcation dynamics in lineage-commitment in bipotent progenitor cells, Dev. Biol. 305, 695 (2007). [DOI] [PubMed] [Google Scholar]
  • [70].Pal M., Ghosh S., and Bose I., Functional characteristics of gene expression motifs with single and dual strategies of regulation, Biomed. Phys. Eng. Express 2, 025009 (2016). [Google Scholar]
  • [71].Chakravarty S. and Csikász-Nagy A., Systematic analysis of noise reduction properties of coupled and isolated feed-forward loops, PLoS Comput. Biol. 17, e1009622 (2021). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [72].Gui R., Liu Q., Yao Y., Deng H., Ma C., Jia Y., and Yi M., Noise decomposition principle in a coherent feed-forward transcriptional regulatory loop, Front. Physiol. 7, 600 (2016). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [73].Ghosh B., Karmakar R., and Bose I., Noise characteristics of feed forward loops, Phys. Biol. 2, 36 (2005). [DOI] [PubMed] [Google Scholar]
  • [74].Milo R., Shen-Orr S., Itzkovitz S., Kashtan N., Chklovskii D., and Alon U., Network motifs: simple building blocks of complex networks, Science 298, 824 (2002). [DOI] [PubMed] [Google Scholar]
  • [75].Shen-Orr S. S., Milo R., Mangan S., and Alon U., Network motifs in the transcriptional regulation network of Escherichia coli, Nat. Genet. 31, 64 (2002). [DOI] [PubMed] [Google Scholar]
  • [76].Santos-Zavaleta A., Salgado H., Gama-Castro S., Sanchez-Perez M., Gomez-Romero L., Ledezma-Tejeida D., Garcia-Sotelo J. S., Alquicira-Hernandez K., Muniz-Rascado L. J., Pena-Loredo P., Ishida-Gutierrez C., Velazquez-Ramirez D. A., Del Moral-Chavez V., Bonavides-Martinez C., Mendez-Cruz C. F., Galagan J., and Collado-Vides J., RegulonDB v 10.5: tackling challenges to unify classic and high throughput knowledge of gene regulation in E. coli K-12, Nucleic Acids Res. 47, D212 (2019). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [77].Lindsley J. E. and Rutter J., Whence cometh the allosterome?, Proc. Nat. Acad. Sci. USA 103, 10533 (2006). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [78].Griffith J. S., Mathematics of cellular control processes I. Negative feedback to one gene, J. Theor. Biol. 20, 202 (1968). [DOI] [PubMed] [Google Scholar]
  • [79].Supporting Python code is available on GitHub at github.com/RPGroup-PBoC/2025_inducers.