Summary
As we begin to design increasingly complex synthetic biomolecular systems, it is essential to develop rational design methodologies that yield predictable circuit performance. Here we apply mathematical tools from the theory of control and dynamical systems to yield practical insights into the architecture and function of a particular class of biological feedback circuit. Specifically, we show that it is possible to analytically characterize both the operating regime and performance tradeoffs of an antithetic integral feedback circuit architecture. Furthermore, we demonstrate how these principles can be applied to inform the design process of a particular synthetic feedback circuit.
Subject Areas: Biological Sciences, Systems Biology, In Silico Biology
Highlights
• Control theory is used to characterize tradeoffs in biomolecular control circuits
• We show how low-level circuit parameters relate to high-level performance metrics
• We provide a set of architectural principles for analyzing synthetic feedback circuits
Introduction
A present challenge in synthetic biology is to design circuits that not only perform a desired function, but also do so robustly. The difficulty in doing this arises in large part from the enormous amount of variability between both intracellular and extracellular environments (Elowitz and Leibler, 2000, Cardinale and Arkin, 2012, Paulsson, 2004, Swain et al., 2002, Lestas et al., 2010). Although it is becoming easier to quickly implement a given circuit architecture (Sun et al., 2013, Weber et al., 2011), ensuring that its performance is robust to parameter variations is still a time-consuming and challenging task (Potvin-Trottier et al., 2016). As synthetic circuits grow in size and complexity, it will be essential to develop a rational design methodology that allows biological engineers to easily identify the important design constraints for a given circuit and determine whether or not their desired behavior is feasible (Del Vecchio and Murray, 2015). We can draw inspiration from the study of natural biological circuits, where cells are frequently confronted with a large amount of variability, yet exhibit robust behavior at the system level (Goentoro and Kirschner, 2009, El-Samad et al., 2005, Cohen-Saidon et al., 2009, Barkai and Leibler, 1997, Yi et al., 2000, Chandra et al., 2011, Paszek et al., 2010).
In the design of electrical and mechanical systems this problem is often solved with the implementation of feedback control (Aström and Murray, 2008, Doyle et al., 2013), where the dynamics of a process are adjusted based on measurements of the system's state with the aim of achieving some performance goals. For example, in a commercial cruise control system an engineer may want the car to be able to rapidly track whatever reference speed the user desires by measuring the car's current velocity, while accelerating at a safe rate and not being too sensitive to small disturbances (e.g., road conditions). Similarly, a biological engineer may want the output concentration of a molecular species in a circuit to track an input signal (e.g., an inducer) with dynamics that are robust to parametric variability in reaction rates and the inherent noisiness of chemical kinetics (Hsiao et al., 2014).
Our primary focus here is a circuit architecture proposed by Briat et al. (2016), known as antithetic integral feedback, that uses an irreversible binding mechanism to implement feedback control in a biomolecular circuit. This circuit immediately had a broad impact on the study of biological feedback systems, as strong binding is both abundant in natural biological contexts (Kampranis et al., 1999, Hamilton and Baulcombe, 1999, Zhou and Gottesman, 1998) and appears to be feasible to implement in synthetic networks (Qian et al., 2018, Qian and Del Vecchio, 2018, Franco et al., 2014, Nevozhay et al., 2009, Lillacci et al., 2018). For example, antithetic integral feedback can be implemented using sense-antisense mRNA pairs (Agrawal et al., 2018), sigma-antisigma factor pairs (Lillacci et al., 2017), or scaffold-antiscaffold pairs (Hsiao et al., 2014).
The purpose of this circuit is to control a process, a simple version of which is composed of the molecular species X1 and X2 with two control species Z1 and Z2 (Figure 1A). The goal is to set an external reference μ and have the concentration of the output species X2 robustly track it (Figure 1B).
Figure 1.
Characterizing Performance in the Antithetic Integral Feedback Circuit
(A) The circuit diagram for a class of antithetic integral feedback circuits, adapted from the model presented in Briat et al. (2016). Here we take X1 and X2 as the process species we are trying to control and Z1 and Z2 as the controller species. One notable addition is that we explicitly model degradation of the control species Z1 and Z2 at the rate γc. θ1 and θ2 represent the interconnection between the process species and the control species, k represents the X1-dependent synthesis rate of X2, and γp is the degradation rate of the process species. Finally, μ acts as an external reference that determines production of Z1 through which we would like to ultimately control the steady-state concentration of X2 and η is the rate at which Z1 and Z2 irreversibly bind to each other.
(B) A representative plot of the type of behavior we expect from the circuit, where the concentration of X2 tracks a changing reference μ. We see that different cells have the same overall behavior, but with slight variations due to noise. This plot highlights the performance characteristics of this particular implementation of the circuit. For example we see that, when tracking the reference, X2 has some overshoot of the target (red), a period of time it takes to respond to changes (blue), random fluctuations due to noise (yellow), and steady-state error (purple). Ideally, we would like to have a rational methodology to tune the circuit-level parameters of (A) to predictably control the system-level characteristics of (B).
The key property of this circuit is that it is able to implement robust perfect adaptation. This means that, at steady state, the concentration of X2 will be proportional to the reference μ (specifically, x2 = μ/θ2 on average). Importantly, the steady-state value of x2 will be independent of every other parameter of the network, implying that its steady-state behavior is robust. We will focus first on studying a deterministic ordinary differential equation model of the circuit:
dx1/dt = θ1 z1 − γp x1 (Equation 1a)
dx2/dt = k x1 − γp x2 (Equation 1b)
dz1/dt = μ − η z1 z2 − γc z1 (Equation 1c)
dz2/dt = θ2 x2 − η z1 z2 − γc z2 (Equation 1d)
Here θ1, θ2, and k are production rates; γp and γc are degradation rates for the process species and controller species, respectively; η is the rate at which Z1 and Z2 sequester each other; and μ is the reference input that sets the synthesis rate of Z1. For simplicity we assume that X1 and X2 share the same degradation rate γp, and likewise for Z1 and Z2 with respect to γc. Depending on the process being modeled, it may be more realistic to have heterogeneous and potentially nonlinear rates for each species; however, the mathematical results are much simpler and easier to interpret in the setting presented here. We analyze a particular circuit with heterogeneous process degradation in Olsman et al. (2018), and a more general discussion of this problem is presented in Baetica et al. (2018).
We will refer to X1 and X2 as process species, and will focus in particular on X2 as the controlled output of the circuit. We can think of Z2 as making measurements of X2, which are then propagated to Z1, which can indirectly affect the production rate of X2 through X1. The use of lower-case letters for the species in (Equation 1a), (Equation 1b), (Equation 1c), (Equation 1d) indicates that we are referring to a deterministic quantity (later in the section Noise and Fragility Are Two Sides of the Same Coin we will use upper-case variables to denote random variables).
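For readers who want to experiment with the model directly, a minimal simulation sketch is given below. It integrates (Equation 1a), (Equation 1b), (Equation 1c), (Equation 1d) with scipy and uses illustrative parameter values (not fitted to any particular experiment) to reproduce the kind of set-point tracking shown in Figure 1B.

from scipy.integrate import solve_ivp

# Illustrative parameters (rates in h^-1, eta in nM^-1 h^-1, mu in nM h^-1)
theta1, theta2, k, gamma_p, gamma_c = 1.0, 1.0, 1.0, 1.0, 0.0
eta, mu = 10.0, 100.0

def antithetic_ode(t, y):
    x1, x2, z1, z2 = y
    dx1 = theta1 * z1 - gamma_p * x1                  # Equation 1a
    dx2 = k * x1 - gamma_p * x2                       # Equation 1b
    dz1 = mu - eta * z1 * z2 - gamma_c * z1           # Equation 1c
    dz2 = theta2 * x2 - eta * z1 * z2 - gamma_c * z2  # Equation 1d
    return [dx1, dx2, dz1, dz2]

sol = solve_ivp(antithetic_ode, (0.0, 50.0), [0.0, 0.0, 0.0, 0.0], max_step=0.01)
print("final x2:", sol.y[1, -1], " set point mu/theta2:", mu / theta2)

With these values the simulated x2 settles at the set point μ/θ2 = 100 nM, and changing μ moves the steady state proportionally, as in Figure 1B.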
The structure and content of this article are somewhat unconventional, in that we present both a non-technical treatment of some of the core results in the companion piece, Olsman et al. (2018), and also stand-alone research results that are not discussed in detail elsewhere. The content of Olsman et al. (2018) focuses on the application of classical tools in control theory to study the mathematical properties of the antithetic integral feedback circuit, whereas the goal of this article is to outline practical guidelines that are accessible to an audience interested in utilizing these theoretical results to inform future experimental work in biological engineering.
In Briat et al. (2016), the authors assume that the controller degradation rate γc = 0 in Equations 1c and 1d, which yields the robust perfect adaptation mentioned earlier. Our analysis of circuit performance and tradeoffs in the sections Circuit Performance Is Robust to Fast Binding and There Is a Performance Tradeoff between Speed and Robustness makes the same assumption, and we also address the case of γc > 0 in the section Controller Degradation Improves Stability but Introduces Steady-State Error. In the section Antithetic Integral Feedback in a Synthetic Bacterial Growth Control Circuit we apply this analysis to study a particular synthetic growth control circuit that makes use of antithetic integral feedback. Finally, in the section Noise and Fragility Are Two Sides of the Same Coin we will investigate the effects of noise on the circuit.
Results
Circuit Performance Is Robust to Fast Binding
The most obvious parameter to investigate in the antithetic integral feedback system is the binding rate η. To simplify our analysis, we will ignore controller species degradation for the time being (γc = 0 in Equation 1c) and analyze the effect of γc in later sections. Because binding is a bimolecular interaction, η is the rate of association and has units of the form nM−1 h−1. As the binding of Z1 and Z2 at rate η in Equations 1c and 1d encapsulates feedback in the system, we know that η cannot be too small. If binding is sufficiently slow, it will be as if there is no feedback in the circuit at all. More generally, a small η corresponds to a slow feedback action that tends to lead to a large overshoot of the desired steady state, as can be seen in Figure 2A.
Figure 2.
The Effects of Binding Rate on Dynamics
(A) Here we show simulations of the circuit in (Equation 1a), (Equation 1b), (Equation 1c), (Equation 1d) with k = θ1 = θ2 = γp = 1 h−1, γc = 0 h−1, and μ = 100 nM h−1. This leads to a value of α²/μ = 10−2 nM−1 h−1. We vary η between 10−3 and 1 nM−1 h−1 and see that for small η the system's response is highly sensitive to the binding rate. Once η is in the regime described by Equation 3, its dynamics are independent of η.
(B) A parametric plot showing this phenomenon, using the overshoot of the desired steady state as a proxy for system performance. The black curve is generated by varying η between 10−3 and 103 nM−1 h−1, demonstrating that the overshoot becomes almost entirely invariant to η in the blue region where η > 10⋅α²/μ. The colored dots correspond to the parameters of the simulations in (A).
The question then becomes, how large should η be? Briat et al. (2016) in their original work on this system observed that some sets of parameters lead to unstable behavior (which we discuss in greater depth in the section Instability Arises from Production Outpacing Degradation), so it is important to investigate whether or not a large η could ever destabilize the circuit. Ideally, we would like to find a regime of parameters where the system's behavior is easy to predict and we do not have to worry about fine-tuning η. It would be possible to get a sense for the behavior of η by simulating a broad parameter sweep and analyzing the resulting dynamics; however, we find that it is possible to gain a precise understanding of the effects of η via theoretical analysis (described in detail in Olsman et al., 2018).
Before we can analyze the effects of the binding rate on the circuit, we first need some notion of which quantity it makes sense to compare η against. If nothing else, this quantity must have the same units as η, which immediately rules out a direct comparison with any other rate parameter in (Equation 1a), (Equation 1b), (Equation 1c), (Equation 1d), as η is the only association rate in the system. As μ is the only other parameter that has units of concentration (specifically nM h−1), it must be involved in the comparison. Dimensional analysis can get us as far as noting that a quantity of the form α²/μ, with α taking units of h−1, would at least have the same units as η. This is as far as dimensional analysis alone can take us, because the other parameters in the system (θ1, θ2, k, and γp) are all rates that have units consistent with α.
In Olsman et al. (2018), we find that α should take the form
α = kθ1θ2/γp² (Equation 2)
Although the proof of this result is somewhat technical, we can derive from it a simple guideline for what it means to have η large enough. The aggregate quantity α describes the steady-state rate at which Z2 molecules are produced relative to the concentration of Z1. Characterizing the circuit in terms of α may yield practical benefits, as it allows us to sidestep the problem of individually measuring each of the four rate parameters in Equation 2. We will see in later sections that many important features of the circuit can be written either in terms of the individual parameters in (Equation 1a), (Equation 1b), (Equation 1c), (Equation 1d), or in terms of α.
Combining these ideas, we can now state precisely both what it means for η to be large and what effect this has on the circuit. We find that, although the behavior of the circuit is highly dependent on the value of all other parameters in the network, it is insensitive to variations in η, so long as η ≫ max(α, γp)·α/μ. As we will discuss later, most of the interesting questions about stability and performance for the system arise when α is at least comparable in scale to γp, so the relevant relationship between η and the rest of the parameters in the system can be simplified somewhat to
η ≫ α²/μ (Equation 3)
This equation characterizes a separation of timescales between the production and degradation dynamics of the system (captured by α and μ) and the antithetic feedback reaction. Intuitively, so long as binding is sufficiently fast, it does not affect the stability and performance of the circuit's output, X2. This is demonstrated in Figures 2A and 2B, where the system's response becomes independent of η when Equation 3 holds. Here we use the relative overshoot of x2 (defined as (max x2 − x2∗)/x2∗, where x2∗ = μ/θ2 is the target steady state) as a proxy for characterizing the circuit's behavior.
By simulating the circuit with α²/μ = 10−2 nM−1 h−1 as η varies logarithmically between 10−3 and 103 nM−1 h−1, we see in Figure 2B that the behavior of x2 is insensitive to η when it is sufficiently large. The vertical line corresponds to η = 10⋅α²/μ. One consequence of Equation 3 is that the steady-state values of z1 and z2 (denoted by ∗) must satisfy z1∗ ≫ z2∗. If measuring and comparing the rates in Equation 3 is not feasible, it is possible to tell if a circuit described by (Equation 1a), (Equation 1b), (Equation 1c), (Equation 1d) is in the regime of strong binding by simply comparing the concentrations of Z1 and Z2 at steady state.
We should emphasize that the circuit can still be functional when Equation 3 does not hold; however, our analytic techniques yield less insight into what should be expected from the circuit in this regime. It is also important to note that Equation 3 is not sufficient to guarantee good performance (a concept into which we will delve more deeply in later sections). It merely implies that, once η is sufficiently large, the qualitative behavior of X2 (good or bad) will not be affected by varying it. We also note that the dynamics of other species in the network, in particular Z1 and Z2, will be affected by varying η even in the strong binding regime. In the next sections, our analysis will focus on the parameter regime where Equation 3 holds and study how the rest of the rates in (Equation 1a), (Equation 1b), (Equation 1c), (Equation 1d) affect the system's dynamics.
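The guideline in Equation 3, and the equivalent steady-state test just described, are simple enough to check in a few lines. The sketch below assumes the model of (Equation 1a), (Equation 1b), (Equation 1c), (Equation 1d) with γc = 0 and uses illustrative parameter values.

theta1, theta2, k, gamma_p = 1.0, 1.0, 1.0, 1.0
mu, eta = 100.0, 10.0

# Aggregate rate alpha (Equation 2) and the strong-binding criterion from Equation 3
alpha = k * theta1 * theta2 / gamma_p**2
print("eta =", eta, " alpha^2/mu =", alpha**2 / mu, " strong binding:", eta > 10 * alpha**2 / mu)

# Equivalent steady-state test (gamma_c = 0): strong binding implies z1* >> z2*
x2_ss = mu / theta2
x1_ss = gamma_p * x2_ss / k
z1_ss = gamma_p * x1_ss / theta1
z2_ss = mu / (eta * z1_ss)
print("z1* =", z1_ss, " z2* =", z2_ss, " z1* >> z2*:", z1_ss > 10 * z2_ss)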
Instability Arises from Production Outpacing Degradation
A central question for all feedback systems is whether or not the closed-loop circuit is stable. In many engineering applications, poorly implemented feedback control can destabilize an otherwise stable process (Aström and Murray, 2008). This effect is quite salient in the antithetic integral feedback circuit, which was shown to have unstable oscillatory dynamics (known as limit cycles) for some parameter values (Briat et al., 2016). Before we can begin to consider how well a given set of parameters performs, we first need a guarantee that the corresponding dynamics have the baseline functionality of being stable. Ideally, we would be able to find system-level constraints that determine a priori when the circuit will be stable or unstable. Although there exist straightforward numerical methods to predict whether or not a system with a given set of parameters is stable, it is substantially more difficult to derive general parametric conditions that characterize stability.
Take, for example, the relatively simple circuit described in (Equation 1a), (Equation 1b), (Equation 1c), (Equation 1d), which has seven free parameters. Although it is straightforward to numerically simulate the circuit and investigate different parameter regimes, it is not at all obvious at first glance how the circuit will perform for a particular set of parameter values. We find that in the limit of strong binding and no degradation of z1 and z2, there exists a simple relationship that determines stability. Using the same notation as in the section Circuit Performance Is Robust to Fast Binding, we find that the system is stable if and only if α < 2γp (as shown in Figure 3). This says that we need the net production rate in the circuit α to be slower than twice the degradation rate γp. We can rewrite this result, using the fact that α = kθ1θ2/γp², in the form
(kθ1θ2)^(1/3) < 2^(1/3)·γp (Equation 4)
Figure 3.
Dynamics Can Be Either Stable or Unstable
Here we demonstrate how Equation 4 affects the dynamics of the circuit in (Equation 1a), (Equation 1b), (Equation 1c), (Equation 1d).
(A) This simulation uses k = θ1 = θ2 = γp = 1 h−1, γc = 0 h−1, μ = 100 nM h−1, and η = 10 nM−1 h−1. We see that x2 shows some transient oscillatory behavior but ultimately adapts to the steady-state value μ/θ2. As α = 1 h−1 and γp = 1 h−1, Equation 4 holds and the system is stable.
(B) Now we run the simulation with the same parameters, except θ1 = 3 h−1. This implies that α = 3 h−1, which tells us that Equation 4 no longer holds and the system will become unstable. In this case, the instability takes the form of indefinite oscillations. This figure is adapted from one presented in Olsman et al. (2018).
We note that the left-hand term is proportional to the geometric mean of all the production rates in the circuit. This is another perspective from which we can observe that production needs to be, on average, slower than degradation.
An interesting property of this equation is that μ and η are conspicuously absent. The lack of dependence on η echoes the results we discussed in the section Circuit Performance Is Robust to Fast Binding, where we showed that the system's performance is independent of the binding rate in the limit of large η. The fact that the system's stability does not depend on μ makes sense because it is the only production rate in the system with units of concentration, so it must set the concentration scale for the circuit. We should expect that simply changing the units of concentration in the model should not affect stability. It then follows that varying μ is analogous to changing the units of concentration, and consequently should not affect stability and performance. We also find that the system is intrinsically stable (stable for any set of positive parameters) when there is only one process species in Figure 1A, which we prove in Olsman et al. (2018). We find that for the case of n > 2 process species (i.e., the system being controlled consists of species X1, …, Xn) we can derive results analogous to Equation 4, implying that there is a qualitative difference in performance between n = 1 species and n ≥ 2 species.
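As a consistency check on the condition α < 2γp, one can compute the eigenvalues of the Jacobian of (Equation 1a), (Equation 1b), (Equation 1c), (Equation 1d) at its equilibrium (here with γc = 0) and compare against the analytical test. The sketch below does this for the two parameter sets used in Figures 3A and 3B; it is only a numerical sanity check, not a substitute for the derivation in Olsman et al. (2018).

import numpy as np

def stability_report(theta1, theta2, k, gamma_p, mu, eta):
    # Analytical test in the strong-binding, gamma_c = 0 limit: stable iff alpha < 2*gamma_p
    alpha = k * theta1 * theta2 / gamma_p**2
    # Equilibrium of Equations 1a-1d with gamma_c = 0
    x2 = mu / theta2
    x1 = gamma_p * x2 / k
    z1 = gamma_p * x1 / theta1
    z2 = mu / (eta * z1)
    # Jacobian of the nonlinear model at that equilibrium, state order (x1, x2, z1, z2)
    J = np.array([[-gamma_p, 0.0, theta1, 0.0],
                  [k, -gamma_p, 0.0, 0.0],
                  [0.0, 0.0, -eta * z2, -eta * z1],
                  [0.0, theta2, -eta * z2, -eta * z1]])
    eigs = np.linalg.eigvals(J)
    print(f"alpha = {alpha:.2f}, 2*gamma_p = {2 * gamma_p:.2f}, max Re(eig) = {eigs.real.max():.3f}")

stability_report(theta1=1.0, theta2=1.0, k=1.0, gamma_p=1.0, mu=100.0, eta=10.0)  # Figure 3A (stable)
stability_report(theta1=3.0, theta2=1.0, k=1.0, gamma_p=1.0, mu=100.0, eta=10.0)  # Figure 3B (unstable)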
To further investigate Equation 4, we will assume that both the process parameters (γp and k) and the desired set point (determined by μ/θ2) are fixed. As we assumed that we are in the regime where η is large enough to not matter, the only remaining parameter to tune is θ1. We can interpret θ1 as quantifying the strength of interaction between the control species z1 and the process species x1. In this sense, Equation 4 tells us that there is a limit on how strong the connection between the controller and the process can be. It is then natural to ask how varying θ1 affects the circuit's performance. To do this, we first need to define the relevant system-level performance metrics.
There Is a Performance Tradeoff between Speed and Robustness
In the previous section we noted that, for a given set of parameters, there is a maximum value that θ1 can take such that the controller remains stable. Picking the rate θ1 to be fast will speed up the response of the circuit; however, if there is variability in θ1 the circuit may inadvertently exceed the limit set by Equation 4 and consequently become unstable. In this context, one way to characterize a circuit's robustness is to determine not only if it is stable but also if there are no small parameter changes that would result in it becoming unstable. A robust circuit is one that is far from instability, whereas a fragile circuit will be very near the stability boundary in Equation 4.
It is important to note that the notions of robustness and sensitivity discussed here equivalently capture both the reference tracking behavior (i.e., the ability of X2 to adapt to μ/θ2) and the rejection of disturbances to model parameters (Aström and Murray, 2008). Although we phrase most of the results here and in Olsman et al. (2018) in terms of reference tracking, all the statements about reference tracking have equivalent interpretations in terms of disturbance rejection. We chose to focus on reference tracking only because it is an easier behavior to visualize and understand intuitively. If the output of a system is capable of tracking a reference, then we can alternatively think of it as being good at minimizing the error between its current state and its desired state. We can also think of this sort of error minimization as the ability to reject disturbances to parameters. In other words, the ability of the output to robustly track some parameters (in this case, μ and θ2) is closely tied to its ability to not track other parameters (i.e., reject disturbances).
These observations yield a tradeoff: in order for the response time of the circuit to be as short as possible θ1 should be fast; however, this necessarily makes the circuit more fragile. This notion of robustness is distinct from robustness of steady state, which is more commonly studied in biological contexts (Barkai and Leibler, 1997, Yi et al., 2000). The steady-state robustness is fairly easy to define, as it is simply the error between the actual steady state of a system and a desired steady state, whereas this notion of fragility requires more sophisticated mathematical tools and is generally difficult to solve for analytically. We discuss these theoretical methods in detail in Olsman et al. (2018), where we show that a good approximation of the fragility of the system has the form
F ≈ 2γp/(2γp − α) (Equation 5)
An equivalent interpretation of F is as quantifying the system's worst-case amplification of disturbances. As a sanity check we can see that, when we have equality in Equation 4 (i.e., α = 2γp), the fragility F = ∞ corresponds to the circuit becoming unstable. When α < 2γp, F increases monotonically as θ1 increases. If θ1 = 0, then F = 1, corresponding to no disturbance amplification (but also no control). We see in Figure 4A that Equation 5 yields a tradeoff curve that concisely captures the relationship between F and θ1.
Figure 4.
A Performance Tradeoff between Response Time and Fragility
(A) Here we show a tradeoff curve demonstrating the relationship between response time (set by θ1) and fragility F (as described in Equation 5). An ideal system would have both a fast response and be minimally fragile (i.e., robust), whereas this curve shows that, given all other parameters, the system only has so much freedom to simultaneously optimize its performance.
(B) These trajectories correspond to parameters associated with the colored dots in (A), showing how fragility and response time relate to the actual dynamics of the circuit. We see that the blue curve rises quickly, but is highly oscillatory and takes a long time to settle. Conversely, the purple curve has a slow rise time, but is quite robust and settles quickly. In each plot we vary θ1 and use k = θ2 = γp = 1 h−1, γc = 0 h−1, μ = 100 nM h−1, and η = 10 nM−1 h−1. This figure is adapted from the one presented in Olsman et al. (2018).
The upshot of this characterization is that we can now precisely quantify the performance tradeoff between speed and robustness. Equation 4 gave us a binary condition for stability, whereas Equation 5 provides a more nuanced measure of the circuit's performance. Figure 4B demonstrates the effects of this tradeoff on the system's dynamics. We see that, as the initial response time of the system decreases (θ1 increases), the system begins to oscillate and takes a much longer time to settle into its steady-state value. These oscillations are indicative of the system approaching instability, a topic we explore in more depth in Olsman et al. (2018).
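To make the tradeoff concrete, the sketch below sweeps θ1 and evaluates the fragility approximation in Equation 5, using 1/θ1 as a crude proxy for response time. It assumes the strong-binding regime with γc = 0 and illustrative parameters, and is only meant to reproduce the qualitative shape of the curve in Figure 4A.

import numpy as np

theta2, k, gamma_p = 1.0, 1.0, 1.0

def fragility(theta1):
    # Approximate worst-case sensitivity (Equation 5); diverges at the stability boundary alpha = 2*gamma_p
    alpha = k * theta1 * theta2 / gamma_p**2
    return np.inf if alpha >= 2 * gamma_p else 2 * gamma_p / (2 * gamma_p - alpha)

for theta1 in [0.2, 0.5, 1.0, 1.5, 1.9, 1.99]:
    print(f"theta1 = {theta1:5.2f}   response-time proxy 1/theta1 = {1 / theta1:5.2f}   "
          f"fragility F = {fragility(theta1):7.2f}")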
Controller Degradation Improves Stability but Introduces Steady-State Error
So far we have neglected the effects of degradation on the control species z1 and z2, assuming that they are only removed from the system by the antithetic binding reaction. Under this assumption, the results of the previous section tell us that the system is, in a sense, fundamentally constrained in its performance. We might hope, then, that nonzero controller degradation (γc > 0) might give us an additional knob to tune in designing antithetic integral feedback circuits.
This turns out to be precisely the case, as is shown in Figure 5. If all other parameters are held constant, the control species degradation rate γc decreases the fragility of the system at the cost of introducing steady-state error. Again assuming the limit of large η, we show in Olsman et al. (2018) that there is a simple expression for this error, which we denote by ɛ:
ɛ = γc/(γc + α) (Equation 6)
We can think of ɛ as capturing the steady-state error of x2 relative to the desired set point we would expect in the absence of controller degradation (μ/θ2). It is useful to think of controller degradation as adding “leakiness” into the feedback mechanism. Imagine a scenario where x2 > μ/θ2. Ideally, this would cause an increase in z2 that precisely compensates for the mismatch. This increase in z2 will reduce the amount of z1 via antithetic feedback. If, however, z1 is also degraded, there will be two mechanisms through which z1 is removed, namely, degradation and antithetic feedback. If we imagine a scenario in which γc is extremely large, then every z1 molecule would likely be degraded before having a chance to be sequestered by a z2 molecule. If this were the case, then x2 could not possibly be using feedback to track the set point μ/θ2, because there is effectively no way for the increase in z2 to affect the concentration of z1. In Equation 6, we see that ɛ = 0 when γc = 0 (no error) and ɛ → 1 when γc ≫ α (maximum error).
Figure 5.
Controller Degradation Introduces Steady-State Error
(A) We see that controller degradation γc > 0 improves stability at the cost of introducing steady-state error. This tradeoff curve is a parametric plot where γc is varied and Equation 6 is compared with a generalization of Equation 5 that incorporates the effects of γc.
(B) Here we show the effects of the tradeoff in (A) on the dynamics of x2. The parameters are chosen such that, if γc = 0, the system would be unstable. The trajectory with small γc (blue) is stabilized and has little steady-state error, but still shows long-term oscillations indicative of fragility. The trajectory with large γc (purple) is extremely robust, but with large steady-state error. We vary γc and use k = θ2 = γp = 1 h−1, θ1 = 2 h−1, μ = 100 nM h−1, and η = 300 nM−1 h−1.
(C) In this tradeoff curve, we hold the error ɛ = 0.1 constant by varying both θ1 and γc as a constant ratio. We now observe that there is a tradeoff between fragility and leakiness, the latter being parameterized by γc. Intuitively, if γc is large, then many copies of z1 and z2 are being degraded without ever being involved in the feedback process.
(D) We see that simulated trajectories display constant steady-state error, with less oscillatory behavior when γc is large. (C and D) use the same parameters as (A and B), with the exception that θ1 is no longer fixed. This figure is adapted from the one presented in Olsman et al. (2018).
It is also possible to describe the fragility F for the case where γc > 0. This expression is much more complex than the one presented in Equation 5, so we refer the reader to our analysis in Olsman et al. (2018) for details. The qualitative behavior of F is shown in Figure 5A, where increasing γc decreases fragility. Combining these observations, we see that varying controller degradation introduces a tradeoff between steady-state error and robustness. Increasing γc reduces F at the cost of increasing ɛ. Figure 5B demonstrates this effect via simulations of (Equation 1a), (Equation 1b), (Equation 1c), (Equation 1d) with increasing values of γc. We see that the trajectory with large controller degradation (purple) is far more stable than the trajectory with small degradation (blue); however, the former differs substantially from the original set point μ/θ2.
If it were the case that this error were simply a constant offset, then it could, in principle, be corrected for after the fact. The issue, however, is that the error is highly parameter dependent, as can be seen from Equation 6. If the goal is to ensure that the steady-state value of X2 is robust to variation in parameters, this error term may undermine the whole purpose of the circuit. From this perspective, we can think of ɛ as capturing the robustness of steady-state behavior of X2 to parametric variations. Ideally, we would have some way to preserve the increased stability from controller degradation without suffering the consequences of large error. Fortunately, this is possible if we vary not only γc but also the production rate θ1.
Equation 2 tells us that α is proportional to θ1. This means that, if we increase θ1 to match an increase in γc, it would be possible to hold the error in Equation 6 constant. This would not necessarily be very interesting if this increase in θ1 increased F by a corresponding amount; however, it turns out that we are still able to decrease F while keeping the ratio θ1/γc constant, as seen in Figure 5C. We see this effect in simulations in Figure 5D, where each trajectory has a constant steady-state error of ɛ = 0.1; however, the trajectories with larger γc values are significantly less oscillatory. This tells us that, if high turnover of Z1 and Z2 is not too costly, it is possible to mitigate the downside of degradation while preserving its benefits.
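In practice, Equations 2 and 6 can be combined to pick θ1 and γc together so that the steady-state error is held at a target value, as in Figures 5C and 5D. The sketch below solves for the θ1 that keeps ɛ fixed as γc is increased; the specific numbers are illustrative, and the calculation assumes the strong-binding regime in which Equation 6 holds.

k, theta2, gamma_p = 1.0, 1.0, 1.0
eps_target = 0.1  # desired steady-state error, as in Figures 5C and 5D

def theta1_for_error(gamma_c, eps):
    # Equation 6: eps = gamma_c / (gamma_c + alpha)  =>  alpha = gamma_c * (1 - eps) / eps
    alpha = gamma_c * (1.0 - eps) / eps
    # Equation 2: alpha = k * theta1 * theta2 / gamma_p**2  =>  solve for theta1
    return alpha * gamma_p**2 / (k * theta2)

for gamma_c in [0.1, 0.3, 1.0, 3.0]:
    theta1 = theta1_for_error(gamma_c, eps_target)
    print(f"gamma_c = {gamma_c:4.1f}  ->  theta1 = {theta1:5.1f}  (theta1/gamma_c = {theta1 / gamma_c:.1f})")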
An interesting property of the dynamics in Figure 5D is that the rise time and overshoot of each trajectory are approximately the same. This is likely a by-product of the fact that the values of θ1 in these simulations are large relative to the other production rates in the system. Once θ1 is sufficiently large, the other production rates k and θ2 become rate limiting, and marginal increases in θ1 essentially act on a fast timescale that does not have a large effect on the initial transient response of the system. The results of the sections There Is a Performance Tradeoff between Speed and Robustness and Controller Degradation Improves Stability but Introduces Steady-State Error focus on one-dimensional tradeoff curves, with the goal of presenting simple parametric relationships that constrain the possible behavior of the system. This analysis is inherently simplified, as it presents a low-dimensional slice of a high-dimensional space of parameter-system performance relationships. In Baetica et al. (2018) a more general view of these relationships is presented, focusing on surfaces in parameter space and their relationship to system performance.
Antithetic Integral Feedback in a Synthetic Bacterial Growth Control Circuit
The results presented so far have focused on the simple model of antithetic integral feedback presented in Figure 1. This approach has facilitated our development of theoretical results that characterize some of the important features of antithetic feedback as a mechanism for biological control. We will now make use of the insights gained from the simplified model to study a particular synthetic bacterial growth control circuit. We show that the conceptual guidelines developed so far yield practical insight into the design of this circuit. Specifically, we find that the incorporation of controller degradation can lead to dramatically improved performance. The mathematical details of this analysis are presented in Olsman et al. (2018); here we show simulation results and explain at a high level how the theory can help guide circuit design.
A diagram of the circuit architecture is presented in Figure 6A, where growth control is achieved by regulating the production of the toxin CcdB. Conceptually, if the intracellular concentration of toxin is proportional to the total number of cells, then the population as a whole will converge to a steady-state size that is less than the carrying capacity of the environment. The circuit uses a quorum-sensing mechanism (involving the autoinducer N-Acyl homoserine lactone [AHL]) to implement the coupling between population size and CcdB expression. The circuit described so far is capable of constant regulation but lacks an extracellular mechanism through which we can control the population size (assuming that we do not want to be directly tuning protein expression, for example, by altering the strength of ribosome-binding sites).
Figure 6.
A Synthetic Growth Control Circuit
(A) The circuit diagram for the dynamics described in (Equation 7a), (Equation 7b), (Equation 7c), (Equation 7d).
(B) Simulations of the growth control circuit without RNA degradation (solid lines) for various set points μ (dashed lines). This architecture exhibits the precise adaptation property, although the response is relatively slow and oscillatory.
(C) Here we see the same circuit simulated with RNA degradation. The response is much faster and more robust; however, there is nonzero steady-state error for each trajectory.
(D) Here we again simulate the circuit without degradation, but now vary kR. We see qualitatively similar performance tradeoffs to those in Figure 4.
(E) As before, we see that adding controller degradation yields a very fast and consistent response. For these particular parameters, the circuit can achieve this performance with relatively little steady-state error. For all circuits we use the parameters , r = 1 h−1, η = 20 nM−1 h−1, , , , and τ = 4×10−3 nM−1 hr−1. (B) uses and C uses and .
The desired control can be implemented with antithetic feedback. For simplicity we will focus our modeling efforts on the particular mechanism of sense RNA-antisense RNA (asRNA) pairing; however, it would also be feasible to implement feedback with sigma factor-antisigma factor binding or toxin-antitoxin binding. An asRNA is one that has a complementary sequence to a messenger RNA (mRNA) strand. This complementarity allows an asRNA to hybridize with an mRNA and block translation. By controlling the expression of asRNA, it is possible to modulate how responsive CcdB expression is to changes in AHL concentration and thus control the total size of the population. This architecture was originally proposed for experimental purposes in McCardell et al. (2017), and a functionally similar circuit was tested in Scott et al. (2017). We model this circuit with the following set of differential equations:
| (Equation 7a) |
| (Equation 7b) |
| (Equation 7c) |
| (Equation 7d) |
Quantities of the form [⋅] represent intracellular concentrations for each cell, and N represents the total number of cells. N follows logistic dynamics with an additional death rate due to toxicity τ proportional to the concentration of [CcdB] per cell. [CcdB] is a protein that is toxic to the cell; [mRNA] is the corresponding messenger RNA, the transcription of which we model as being induced by a quorum-sensing ligand that is produced at a rate proportional to N; and [asRNA] is an asRNA that has a complementary sequence to the CcdB mRNA, thus acting as a sequestering partner. The term Ga captures the gain between N and mRNA induction mediated by the quorum-sensing molecule AHL. We can think of [asRNA] and [mRNA] as representing Z1 and Z2, and the quantities [CcdB] and N as representing X1 and X2. This highlights the generality of the modeling framework in Figure 1A: because we did not make any assumptions about the particular nature of the underlying variables, we are able to analyze a circuit with extremely heterogeneous underlying quantities (i.e., RNA, proteins, and cell population).
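To illustrate how such a model can be assembled, the sketch below implements one plausible mass-action instantiation of the verbal description above. It is not the exact model behind Figure 6: the names kC, gC, and Nm (CcdB translation rate, CcdB removal rate, and carrying capacity) are hypothetical placeholders introduced only for this sketch, and every numerical value other than r, η, and τ is illustrative rather than taken from the figure caption.

from scipy.integrate import solve_ivp

# r, eta, and tau take the values quoted in the Figure 6 caption; the remaining values are
# illustrative placeholders, and kC, gC, Nm are hypothetical parameter names for this sketch only.
r, eta, tau = 1.0, 20.0, 4e-3
kR, gammaR, Ga = 50.0, 2.0, 1e-3
kC, gC, Nm = 5.0, 1.0, 1e4
mu = 100.0  # asRNA production rate, the externally set reference

def growth_circuit(t, y):
    N, ccdb, mrna, asrna = y
    dN = r * N * (1.0 - N / Nm) - tau * ccdb * N              # logistic growth with CcdB-dependent death
    dccdb = kC * mrna - gC * ccdb                             # translation of CcdB from free mRNA
    dmrna = kR * Ga * N - eta * mrna * asrna - gammaR * mrna  # AHL-mediated induction plus sequestration
    dasrna = mu - eta * mrna * asrna - gammaR * asrna         # reference-set asRNA production plus sequestration
    return [dN, dccdb, dmrna, dasrna]

sol = solve_ivp(growth_circuit, (0.0, 100.0), [10.0, 0.0, 0.0, 0.0], max_step=0.05)
print("final population size N:", sol.y[0, -1])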
Figure 6B demonstrates how the growth control circuit adapts to various steady-state population levels when there is no controller degradation (γR = 0). The steady state is set by varying μ. When possible, parameters for this model are taken from You et al. (2004). What is clear across all set points is that the population first grows to carrying capacity before the circuit is activated. Intuitively, the blue curve in Figure 6B has a large amount of asRNA that sequesters mRNA. Because of this, it takes longer to accumulate enough mRNA to make CcdB and lower the population level. In contrast, the purple curve has comparatively little asRNA, effectively increasing the rate at which CcdB can be produced. Qualitatively similar long-term oscillatory behavior in a CcdB-based growth control circuit was observed in Balagaddé et al. (2005).
As You et al. (2004) does not explicitly model transcription, we would ideally pick realistic transcription and translation timescales for bacteria. If we were to naively assume that we could model asRNA and mRNA as if they were directly analogous to Z1 and Z2 in the section There Is a Performance Tradeoff between Speed and Robustness, i.e., neglecting controller degradation and assuming they are only removed via strong binding, then we run into an issue. As the antithetic feedback mechanism modeled in the section There Is a Performance Tradeoff between Speed and Robustness assumes that controller degradation is negligible, we must use a very small mRNA synthesis rate to achieve stable dynamics, assuming all other parameters are fixed in a biologically plausible regime, even if binding is fast. This leads to a slow circuit response and a large transient overshoot. This is demonstrated in Figure 6B, where CcdB production is so slow that the population reaches carrying capacity before the circuit can become active. In Figure 6D we see similar dynamics to those in Figure 4, where the circuit faces harsh tradeoffs between speed and robustness.
We see from Figures 6C and 6E that good performance requires not only that RNA is removed via antithetic feedback but also that it is degraded at a nontrivial rate. At the cost of a lack of precise adaptation, these circuits display dramatically improved performance. In Figures 6C and 6E, the transient overshoot from Figures 6B and 6D has almost entirely disappeared, and each system adapts on a nearly identical timescale, independent of μ. Figures 6D and 6E compare performance for various values of kR in each system. Figure 6E shows that the introduction of γR makes the system's dynamics extremely robust to variations in kR over a wide range of values. We can interpret γR as introducing a third tradeoff dimension, namely, steady-state error. By allowing the system flexibility along this axis, its speed and robustness are greatly improved.
Noise and Fragility Are Two Sides of the Same Coin
Our analysis so far has assumed that the underlying circuit is perfectly deterministic, i.e., that its dynamics can be modeled by a system of ordinary differential equations. Although these models serve as a good starting point for studying many biomolecular systems, they do not capture the effects of noise on the system. Although noise is not always an important feature of biological processes, it can sometimes drastically alter the actual behavior of a circuit in a cell (for example, when certain molecules are at a low copy number) (Paulsson, 2005, Lestas et al., 2010).
Here we will examine the steady-state variance of the output species X2 of the antithetic integral feedback system when its dynamics follow a stochastic chemical reaction model and relate it to the performance of the deterministic model (Equation 1a), (Equation 1b), (Equation 1c), (Equation 1d). The capitalization in X2 reflects that it is now a random variable, whereas x2 in Equation 1b is deterministic. To simplify our analysis, we will return to the assumption that there is no controller degradation (γc = 0). In Briat et al. (2016), the authors observed that there exist parameter values such that the deterministic model is unstable, but the average behavior of the stochastic model is stable (i.e., each species, on average, adapts to the desired steady-state concentration). We might think of each stochastic simulation representing a single cell's dynamics, and the average representing the population-level behavior.
We find that, although this result regarding the mean behavior is correct, it does not tell the whole story. In particular we show that, whereas the population average is generally well behaved, the population variance can become large. In other words the population as a whole is predictable; however, there is a large amount of cell-to-cell variability at any given time. In fact, it is the case that the noise scales in approximately the same way as the fragility of the system in Equation 5 (as shown in Figure 7A).
Figure 7.
The Relationship between Noise and Robustness
(A) Here we see a general tradeoff between response time and noise (quantified by the Fano factor) of the antithetic integral feedback network. This is analogous to the tradeoff in Figure 4A. The plot demonstrates the approximate behavior of Equation 8 (black), simulation results for the same approximate model (blue), and simulation results for the fully nonlinear model without approximations (green).
(B) For , the deterministic and stochastic means converge with good performance; individual stochastic trajectories are not very noisy.
(C) For , the deterministic and stochastic mean have damped oscillations; individual stochastic trajectories are noisy.
(D) For , the deterministic model is unstable and oscillates, whereas the stochastic mean is stable, as demonstrated in Briat et al. (2016). We see, however, that the individual trajectories oscillate with randomized phase. In all simulations , η = 10 nM−1 h−1. In A we use μ = 10 nM h−1 to speed up simulations and μ = 100 nM h−1 in the remaining panels. Mean trajectories and standard deviations are computed using N = 1000 trajectories.
Formally, we derive an approximate expression for the steady-state Fano factor (the variance divided by the mean) of X2 in the limit of fast binding:
| (Equation 8) |
We see in Figure 7A that the variability of X2 increases as the deterministic system approaches instability. This illustrates that there is a fundamental tradeoff such that the system can be either fast and noisy or slow and accurate, mirroring the deterministic tradeoff described in Figure 4. We can get a sense for why this happens by observing that the denominator in Equation 8 is the same as the denominator of Equation 5. This tells us that we can expect each expression to grow in the same way as the respective denominators approach 0. Thus there is an intimate connection between the sensitivity of the deterministic antithetic integral feedback system, which corresponds to oscillatory behavior, and the sensitivity of the stochastic antithetic integral feedback system, which corresponds to increased noise. To give a more concrete sense for this relationship, we present representative simulation results that demonstrate this behavior.
In Figure 7B, we see that a slow and robust deterministic performance (in the sense described in the section There Is a Performance Tradeoff between Speed and Robustness) corresponds to a stochastic model with low noise. The left panel shows the mean behavior matching the deterministic trajectory closely, with a fairly small amount of noise throughout the simulations. The right panel displays some sample individual trajectories, which essentially look like we would expect: closely following the mean with small deviations. If we look at Figure 7C, we see that the deterministic model and the mean of the stochastic model converge to the reference quickly, with damped oscillations. Just as the fragility of the deterministic system is larger here than in Figure 7B, the corresponding noise in the stochastic system is also much larger. Just as speed increased the fragility in the section There Is a Performance Tradeoff between Speed and Robustness, it appears to increase the variability here. We note that these results assume that the antithetic reactions constitute the only feedback in the system; recent work has shown that additional feedback loops can potentially serve to reduce noise (Briat et al., 2018). In the Supplemental Information we comment on the relationship between our results and those presented in Briat et al. (2018).
Finally, Figure 7D demonstrates a parameter regime where the deterministic model becomes unstable. In the left panel we see precisely the type of behavior described by Briat et al. (2016), where the stochastic mean appears to converge despite deterministic instability. The right panel, however, demonstrates that each individual trajectory is in fact exhibiting noisy oscillations, but with phases that are randomized relative to one another. Each individual cell is unstable, but this instability averages out at the population level. This highlights the importance of distinguishing between average and individual behavior.
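A direct way to probe this single-cell variability is to simulate the reaction network stochastically and estimate the Fano factor of X2 from the resulting samples. The sketch below implements a basic Gillespie simulation of the reactions underlying (Equation 1a), (Equation 1b), (Equation 1c), (Equation 1d) with γc = 0, treating amounts as molecule counts in a unit volume and using illustrative parameters; the event-sampled statistics it reports are only a crude estimate of the stationary Fano factor.

import numpy as np

rng = np.random.default_rng(0)
theta1, theta2, k, gamma_p, eta, mu = 1.0, 1.0, 1.0, 1.0, 10.0, 10.0

# State: (X1, X2, Z1, Z2); stoichiometry of the seven reactions in Equations 1a-1d (gamma_c = 0)
stoich = np.array([[ 1, 0, 0, 0],   # Z1 -> Z1 + X1   (rate theta1*Z1)
                   [-1, 0, 0, 0],   # X1 -> 0         (rate gamma_p*X1)
                   [ 0, 1, 0, 0],   # X1 -> X1 + X2   (rate k*X1)
                   [ 0,-1, 0, 0],   # X2 -> 0         (rate gamma_p*X2)
                   [ 0, 0, 1, 0],   # 0 -> Z1         (rate mu)
                   [ 0, 0, 0, 1],   # X2 -> X2 + Z2   (rate theta2*X2)
                   [ 0, 0,-1,-1]])  # Z1 + Z2 -> 0    (rate eta*Z1*Z2)

def gillespie(t_end):
    x = np.zeros(4)
    t, samples = 0.0, []
    while t < t_end:
        x1, x2, z1, z2 = x
        a = np.array([theta1*z1, gamma_p*x1, k*x1, gamma_p*x2, mu, theta2*x2, eta*z1*z2])
        a0 = a.sum()
        t += rng.exponential(1.0 / a0)
        x = x + stoich[rng.choice(7, p=a / a0)]
        if t > 0.5 * t_end:          # keep only (approximately) stationary, event-sampled values
            samples.append(x[1])
    return np.array(samples)

x2_samples = gillespie(200.0)
print("mean X2:", x2_samples.mean(), " Fano factor:", x2_samples.var() / x2_samples.mean())

Pushing θ1 toward the deterministic stability boundary in this sketch should inflate the estimated Fano factor, mirroring the trend shown in Figure 7A.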
Discussion
Although we could have, in principle, made some of the qualitative observations presented in this work from simulations alone, it is important to emphasize that these theoretical results not only formalize numerical observations but also force us to state exactly what it is we are measuring. An important contribution of Briat et al. (2016) was not only that the authors proposed a clever mechanism to implement feedback control in biological contexts, but also that they went to great effort to clearly state and prove that the circuit is capable of achieving what they claimed. Our work here and in Olsman et al. (2018) is an attempt to pursue this line of reasoning and further characterize the qualitative and quantitative behavior of this circuit architecture.
More generally, this theoretical perspective sheds light on a variety of nontrivial parameter relationships, which we hope will allow researchers to avoid the need for brute-force parameter tuning when designing future control circuits. This article was intended to provide a relatively non-technical description of the work, and the interested reader can find a great deal more mathematical depth and generality in Olsman et al. (2018). We believe that this is an exciting time for biology, where theory and experiment can productively guide each other toward new and interesting directions of inquiry.
Limitations of Study
An important technical caveat is that the results presented in this article are derived with respect to linearizations of nonlinear circuits. A more technical discussion of the benefits and limitations of this approach is presented in Olsman et al. (2018); however, the general takeaway is that the linearized theory gives a rigorous treatment of how a circuit behaves near its steady-state values, often referred to as local behavior, much in the same way that the derivative describes the local behavior of a function near a particular point.
Because the goal of a control system is to regulate the dynamics of a process, it is often sufficient to understand the behavior of the system in the local neighborhood near the equilibrium to which we would like the system to adapt. However, it is worth noting that the global behavior of a nonlinear system can sometimes be substantially different from that of its linearization. This appears not to be an impediment for the results presented in this article, as the linearized theory does a good job of predicting the qualitative behavior of the nonlinear simulations; however, it is important to make explicit the assumptions that underlie our results. The upshot of pursuing a linearized theory is that there is a broad array of theoretical tools that have been developed to analyze linear systems, and they are both more general and easier to use than most results for nonlinear systems. A more general perspective on the role of linearization in systems biology is presented in Malleshaiah and Gunawardena (2016).
Methods
All methods can be found in the accompanying Transparent Methods supplemental file.
Acknowledgments
The authors would like to thank Harry Nunns for providing feedback on the manuscript, Anandh Swaminathan for helping with stochastic simulations, and Reed McCardell for providing insight into the synthetic growth circuit. The project was sponsored by the Defense Advanced Research Projects Agency (Agreement HR0011-17-2-0008). The content of the information does not necessarily reflect the position or the policy of the government, and no official endorsement should be inferred.
Author Contributions
N.O. and F.X. conceived of and performed analysis, wrote the manuscript, and wrote code for simulations and figures. J.C.D. supervised the work, gave feedback on the manuscript, and provided funding. Data S1 is provided, which was used to generate the figures in the paper.
Declaration of Interests
The authors declare no competing interests.
Published: April 26, 2019
Footnotes
Supplemental Information can be found online at https://doi.org/10.1016/j.isci.2019.04.004.
References
- Agrawal D.K., Tang X., Westbrook A., Marshall R., Maxwell C.S., Lucks J., Noireaux V., Beisel C.L., Dunlop M.J., Franco E. Mathematical modeling of RNA-based architectures for closed loop control of gene expression. ACS Synth. Biol. 2018;7:1219–1228. doi: 10.1021/acssynbio.8b00040.
- Aström K.J., Murray R.M. Princeton University Press; 2008. Feedback Systems: An Introduction for Scientists and Engineers.
- Baetica A.-A., Leong Y.P., Olsman N., Murray R.M. Design guidelines for sequestration feedback networks. Biorxiv. 2018. https://www.biorxiv.org/content/early/2018/10/30/455493
- Balagaddé F.K., You L., Hansen C.L., Arnold F.H., Quake S.R. Long-term monitoring of bacteria undergoing programmed population control in a microchemostat. Science. 2005;309:137–140. doi: 10.1126/science.1109173.
- Barkai N., Leibler S. Robustness in simple biochemical networks. Nature. 1997;387:913. doi: 10.1038/43199.
- Briat C., Gupta A., Khammash M. Antithetic integral feedback ensures robust perfect adaptation in noisy biomolecular networks. Cell Syst. 2016;2:15–26. doi: 10.1016/j.cels.2016.01.004.
- Briat C., Gupta A., Khammash M. Antithetic proportional-integral feedback for reduced variance and improved control performance of stochastic reaction networks. J. R. Soc. Interface. 2018;15:20180079. doi: 10.1098/rsif.2018.0079.
- Cardinale S., Arkin A.P. Contextualizing context for synthetic biology–identifying causes of failure of synthetic biological systems. Biotechnol. J. 2012;7:856–866. doi: 10.1002/biot.201200085.
- Chandra F.A., Buzi G., Doyle J.C. Glycolytic oscillations and limits on robust efficiency. Science. 2011;333:187–192. doi: 10.1126/science.1200705.
- Cohen-Saidon C., Cohen A.A., Sigal A., Liron Y., Alon U. Dynamics and variability of ERK2 response to EGF in individual living cells. Mol. Cell. 2009;36:885–893. doi: 10.1016/j.molcel.2009.11.025.
- Del Vecchio D., Murray R.M. Princeton University Press; 2015. Biomolecular Feedback Systems.
- Doyle J.C., Francis B.A., Tannenbaum A.R. Courier Corporation; 2013. Feedback Control Theory.
- El-Samad H., Kurata H., Doyle J., Gross C., Khammash M. Surviving heat shock: control strategies for robustness and performance. Proc. Natl. Acad. Sci. U S A. 2005;102:2736–2741. doi: 10.1073/pnas.0403510102.
- Elowitz M.B., Leibler S. A synthetic oscillatory network of transcriptional regulators. Nature. 2000;403:335–338. doi: 10.1038/35002125.
- Franco E., Giordano G., Forsberg P.-O., Murray R.M. Negative autoregulation matches production and demand in synthetic transcriptional networks. ACS Synth. Biol. 2014;3:589–599. doi: 10.1021/sb400157z.
- Goentoro L., Kirschner M.W. Evidence that fold-change, and not absolute level, of β-catenin dictates Wnt signaling. Mol. Cell. 2009;36:872–884. doi: 10.1016/j.molcel.2009.11.017.
- Hamilton A.J., Baulcombe D.C. A species of small antisense RNA in posttranscriptional gene silencing in plants. Science. 1999;286:950–952. doi: 10.1126/science.286.5441.950.
- Hsiao V., de Los Santos E.L., Whitaker W.R., Dueber J.E., Murray R.M. Design and implementation of a biomolecular concentration tracker. ACS Synth. Biol. 2014;4:150–161. doi: 10.1021/sb500024b.
- Kampranis S.C., Howells A.J., Maxwell A. The interaction of DNA gyrase with the bacterial toxin CcdB: evidence for the existence of two gyrase-CcdB complexes. J. Mol. Biol. 1999;293:733–744. doi: 10.1006/jmbi.1999.3182.
- Lestas I., Vinnicombe G., Paulsson J. Fundamental limits on the suppression of molecular fluctuations. Nature. 2010;467:174–178. doi: 10.1038/nature09333.
- Lillacci G., Aoki S.K., Schweingruber D., Khammash M. A synthetic integral feedback controller for robust tunable regulation in bacteria. Biorxiv. 2017. doi: 10.1038/s41586-019-1321-1. https://www.biorxiv.org/content/early/2017/08/01/170951
- Lillacci G., Benenson Y., Khammash M. Synthetic control systems for high performance gene expression in mammalian cells. Nucleic Acids Res. 2018;46:9855–9863. doi: 10.1093/nar/gky795.
- Malleshaiah M., Gunawardena J. Cybernetics, redux: an outside-in strategy for unraveling cellular function. Dev. Cell. 2016;36:2–4. doi: 10.1016/j.devcel.2015.12.025.
- McCardell R.D., Huang S., Green L.N., Murray R.M. Control of bacterial population density with population feedback and molecular sequestration. Biorxiv. 2017. https://www.biorxiv.org/content/early/2017/11/25/225045
- Nevozhay D., Adams R.M., Murphy K.F., Josić K., Balázsi G. Negative autoregulation linearizes the dose–response and suppresses the heterogeneity of gene expression. Proc. Natl. Acad. Sci. U S A. 2009;106:5123–5128. doi: 10.1073/pnas.0809901106.
- Olsman N., Baetica A.-A., Xiao F., Leong Y.P., Doyle J., Murray R. Hard limits and performance tradeoffs in a class of sequestration feedback systems. Biorxiv. 2018. doi: 10.1016/j.cels.2019.06.001. https://www.biorxiv.org/content/early/2018/09/26/222042
- Paszek P., Ryan S., Ashall L., Sillitoe K., Harper C.V., Spiller D.G., Rand D.A., White M.R. Population robustness arising from cellular heterogeneity. Proc. Natl. Acad. Sci. U S A. 2010;107:11644–11649. doi: 10.1073/pnas.0913798107.
- Paulsson J. Summing up the noise in gene networks. Nature. 2004;427:415–418. doi: 10.1038/nature02257.
- Paulsson J. Models of stochastic gene expression. Phys. Life Rev. 2005;2:157–175.
- Potvin-Trottier L., Lord N.D., Vinnicombe G., Paulsson J. Synchronous long-term oscillations in a synthetic gene circuit. Nature. 2016;538:514. doi: 10.1038/nature19841.
- Qian Y., Del Vecchio D. Realizing integral control in living cells: how to overcome leaky integration due to dilution? J. R. Soc. Interface. 2018;15. doi: 10.1098/rsif.2017.0902.
- Qian Y., McBride C., Del Vecchio D. Programming cells to work for us. Annu. Rev. Control Robot. Auton. Syst. 2018;1:411–440.
- Scott S.R., Din M.O., Bittihn P., Xiong L., Tsimring L.S., Hasty J. A stabilized microbial ecosystem of self-limiting bacteria using synthetic quorum-regulated lysis. Nat. Microbiol. 2017;2:17083. doi: 10.1038/nmicrobiol.2017.83.
- Sun Z.Z., Yeung E., Hayes C.A., Noireaux V., Murray R.M. Linear DNA for rapid prototyping of synthetic biological circuits in an Escherichia coli based TX-TL cell-free system. ACS Synth. Biol. 2013;3:387–397. doi: 10.1021/sb400131a.
- Swain P.S., Elowitz M.B., Siggia E.D. Intrinsic and extrinsic contributions to stochasticity in gene expression. Proc. Natl. Acad. Sci. U S A. 2002;99:12795–12800. doi: 10.1073/pnas.162041399.
- Weber E., Engler C., Gruetzner R., Werner S., Marillonnet S. A modular cloning system for standardized assembly of multigene constructs. PLoS One. 2011;6:e16765. doi: 10.1371/journal.pone.0016765.
- Yi T.-M., Huang Y., Simon M.I., Doyle J. Robust perfect adaptation in bacterial chemotaxis through integral feedback control. Proc. Natl. Acad. Sci. U S A. 2000;97:4649–4653. doi: 10.1073/pnas.97.9.4649.
- You L., Cox R.S., III, Weiss R., Arnold F.H. Programmed population control by cell-cell communication and regulated killing. Nature. 2004;428:868. doi: 10.1038/nature02491.
- Zhou Y., Gottesman S. Regulation of proteolysis of the stationary-phase sigma factor RpoS. J. Bacteriol. 1998;180:1154–1158. doi: 10.1128/jb.180.5.1154-1158.1998.