Abstract
We present a simple yet effective method, based on power series expansion, for computing exact binomial moments that can in turn be used to compute steady-state probability distributions as well as the noise in linear or nonlinear biochemical reaction networks. When the method is applied to representative reaction networks such as ON-OFF models of gene expression, gene models of promoter progression, gene auto-regulatory models, and common signaling motifs, exact formulae for the noise intensities of the species of interest or for steady-state distributions are given analytically. Interestingly, we find that positive (negative) feedback does not enlarge (reduce) noise as claimed in previous works but has a counter-intuitive effect, and that the multi-OFF (or multi-ON) mechanism always attenuates the noise in contrast to the common ON-OFF mechanism and can modulate the noise to the lowest level independently of the mRNA mean. Besides its power in deriving analytical expressions for distributions and noise, our method is programmable and has apparent advantages in reducing computational cost.
I. INTRODUCTION
From the viewpoint of molecular interaction, a biological system can be viewed as a biochemical reaction network. The small numbers of molecules of the species participating in the biochemical reactions inevitably give rise to stochastic fluctuations (or noise) in these species, which can play a significant role in the functioning of the network. It has been shown that molecular noise may have not only a negative effect on, e.g., the functioning of a synthetic genetic oscillator,1 but also a beneficial effect on, e.g., stochastic focusing in a signaling system.2 In particular, molecular noise is essential for many cellular functions and has been identified as a key factor underlying the observed phenotypic variability of genetically identical cells in homogeneous environments.3–5 Quantifying the contributions of molecular noise in linear or nonlinear biochemical reaction modules is an important step towards understanding fundamental cellular processes and variations in cell populations.6–24
In general, molecular noise can be characterized either coarsely by the standard deviation, defined as the root mean square deviation of the number of molecules from the mean, or more precisely by the noise intensity, a commonly used index defined as the ratio of the standard deviation to the mean. If the standard deviation is a significant fraction of the mean, or if the noise intensity is significantly large, then the noise cannot be neglected when analyzing the dynamical behavior of a biochemical system. This is typically the case in biochemical reaction networks where the mean number of species molecules is rather low. Such situations are a remarkable characteristic of gene regulatory networks, since the concentrations of regulatory proteins are often so low that there are only a few molecules in a given cell. For example, for the lac operon in E. coli, the lac repressor is active at concentrations of only 10–20 molecules per cell.
The linear noise approximation (LNA) has proven to be an effective method for analyzing stochastic fluctuations (or noise) in the species molecules of interest in some biochemical reaction networks. There are many studies and applications of this approach. Representative results were given by Paulsson, who applied LNA to analyze sources of noise in simple regulatory modules and derived analytical formulae.15,16 More results were presented in Refs. 6 and 24–29. We point out that LNA is accurate and effective only for linear biochemical networks. However, actual biochemical systems are in general nonlinear, e.g., gene auto-regulatory circuits. For a nonlinear reaction network, LNA in general fails to capture the stochastic effect. Similarly, the spectral analysis method (SAM) proposed by Warren et al.30 is effective for determining exact noise spectra and related statistical properties for linear biochemical networks, but it also fails to capture the stochastic effect in nonlinear networks. This raises a question: how can one capture the exact effect of noise in a species of interest in a general biochemical system?
Apart from LNA and SAM, the most general description of a biochemical network is based on the master equation, which describes the time evolution of the joint probability distribution over the copy numbers of all species molecules in the system. Analytically finding such a distribution is a great challenge, but some progress has been made by applying approximations to the related master equation.31–33 For example, a wide class of approximations focuses on limits of large concentrations or small switches.15,34,35 More often, modelers resort to stochastic simulation techniques, e.g., the varying-step Monte Carlo method36 and the Gillespie algorithm.37 The former requires a high computational cost (generating many sample trajectories) followed by a difficult statistical task (parameterizing or estimating the relevant probability distribution from which the samples are drawn). The latter is very time-consuming when the number of reactions is large. Recently, there have been significant advances in simulation-based methods to circumvent these shortcomings.38 Simulation techniques are especially useful for more detailed studies of experimentally well-characterized systems, including those incorporating DNA looping, nonspecific binding,39 and explicit spatial effects.40,41 However, it is often beneficial to gain a first intuition about the mechanisms behind complicated biochemical networks from simplified analytical models.
This paper aims to present exact results for computing the noise in biochemical reaction networks governed by chemical master equations. We begin with a general theory, based on the power series expansion that is effectively used to solve linear or nonlinear differential equations, and then apply this theory to several common biochemical reaction modules, including a two-species reaction network, the common ON-OFF model, a gene model with multiple activity states, gene auto-regulatory modules, and a signaling motif. For these modules, we derive exact analytical formulae for computing the noise or steady-state probability distributions, some of which have not been given in the existing literature. In particular, when our method is applied to a stochastic gene model with multiple activity states of the promoter (also called the gene model of promoter progression42,43), we find that the multistep mechanisms always reduce transcriptional noise and can regulate the noise to the lowest level independently of the mean mRNA expression. In addition, we demonstrate that linear methods such as LNA and SAM in general over- or under-estimate the noise in nonlinear biochemical networks. We emphasize that our method is exact and more easily programmed than LNA or SAM, since it transforms questions of quantifying noise into questions of solving linear algebraic equations, where the complexity of the biochemical network under investigation is reflected in the corresponding iterative matrix. In particular, it is powerful in finding steady-state probability distributions in some nonlinear biochemical reaction networks, which have not yet been given in the literature.
II. GENERAL THEORY
Consider a homogeneous culture of N chemical species {X1, X2, …, XN} that undergo M reactions {R1, R2, …, RM} in a closed vessel with a fixed volume and a constant temperature. Let the N-dimensional vector n denote the numbers of species molecules. For each reaction Rk, let ak(n) represent its propensity function. More precisely, the probability that the kth reaction occurs in the time interval (t, t + dt) is ak(n)dt + o(dt), where o(dt) satisfies the condition lim_{dt→0} o(dt)/dt = 0. Let the N-dimensional vector sk be the stoichiometry associated with Rk. The joint probability of n species molecules at time t, denoted by P(n; t), is governed by a chemical master equation of the form
∂P(n; t)/∂t = Σ_{k=1}^{M} [ak(n − sk)P(n − sk; t) − ak(n)P(n; t)]   (1)
subjected to the initial condition P(n0; 0). Note that a component of sk, denoted by skj with 1 ⩽ j ⩽ N, is actually the changing number of Xj-type molecules participating in the kth reaction.
To solve Eq. (1), the standard method is to introduce the generating function G(z; t) for P(n; t), that is
G(z; t) = Σ_n P(n; t) z^n, where z = (z1, …, zN) and z^n = z1^{n1} ⋯ zN^{nN}   (2)
Then, Eq. (1) will become a partial differential equation for G(z; t)
∂G(z; t)/∂t = Lz G(z; t)   (3)
where Lz is a linear differential operator determined by the right side of Eq. (1). Solving Eq. (3) is in general difficult, but we can give the formal expression of its solution
G(z; t) = Σ_n an(t)(z − 1)^n, where (z − 1)^n = (z1 − 1)^{n1} ⋯ (zN − 1)^{nN}   (4)
if the corresponding series is convergent, or
G(z; t) = Σ_n bn(t)(z − z0)^n   (5)
if the corresponding series is convergent. The coefficients of distinct derivatives of G(z; t) are polynomial in z, so we can obtain a set of linear differential equations with respect to an(t) or bn(t), which are solvable, by substituting Eq. (4) or (5) into Eq. (3) and comparing coefficients of (z − 1)n of the same powers. If bn(t) is analytically given, then an(t) can be computed according to
an(t) = Σ_{k ⩾ n} C(k, n)(1 − z0)^{k−n} bk(t)   (6)
Often, the common interest is in finding the steady-state generating function G(z) or the steady-state probability distribution P(n). In this case, substituting expression (4) or (5) into Eq. (3) leads to a set of linear algebraic equations for an (or bn) in an iterative manner (see the examples below). Such an iterative format can determine all coefficients an (or bn) owing to the conservative condition of probability, which implies a0 = 1. Moreover, from the iterative format we can obtain nice analytical expressions for all an (or bn) for some common linear or nonlinear biochemical network modules, which in turn give the analytical expression of G(z) (also see the examples below). We point out: (a) expansion (4) will be used if its coefficients are more easily computed than those of expansion (5), and expansion (5) will be used otherwise. Which expansion is used, and how z0 is chosen, depend on the coefficient of the term with the highest-order derivative of the function G (refer to the examples in Sec. III). (b) If the corresponding deterministic system has more than one steady state, the coefficients an (or bn) cannot be uniquely determined in spite of the condition a0 = 1. In this case, we should first choose expansion (5) with different z0, which determines different sets of bn, and then compute an according to Eq. (6) for each set of bn. As such, we can obtain more than one steady-state distribution (the details will be discussed elsewhere). In this paper, we only consider the case that an (or bn) has a unique solution.
After having obtained G(z), we can give the analytical expression of the steady-state joint probability according to the formula
P(n1, …, nN) = (1/(n1! ⋯ nN!)) ∂^{n1+⋯+nN} G(z)/∂z1^{n1} ⋯ ∂zN^{nN} |_{z=0}   (7)
In particular, after having obtained the first-order derivative ∂zjG(1, 1, …, 1) and the second-order derivative ∂²zjG(1, 1, …, 1), we easily compute the mean and variance of nj, denoted by ⟨nj⟩ and σ²nj, respectively, according to
⟨nj⟩ = ∂zjG(1, 1, …, 1), σ²nj = ∂²zjG(1, 1, …, 1) + ⟨nj⟩ − ⟨nj⟩²   (8)
Correspondingly, we can compute the noise intensity for nj (a common index describing noise), denoted by ηnj, according to the formula
η²nj = σ²nj/⟨nj⟩²   (9)
We emphasize that the formulae (8) and (9) are exact without any approximation.
For clarity and to show the advantages of our method, consider the case of one variable. If we Taylor expand the function G(z) at the point z = 1, writing G(z) = Σ_{n⩾0} an(z − 1)^n, then ⟨n⟩ = a1 and σ²n = 2a2 + a1 − a1², so formula (9) for the noise intensity becomes
η² = 2a2/a1² + 1/a1 − 1   (10)
The key is how to determine a1 and a2. This can be done by substituting Eq. (4) into the resulting differential equation for G(z) and comparing the coefficients of the same powers (z − 1)^j. Note that when solving for a1 and a2, we need the conservative condition, which gives a0 ≡ 1. We point out that formula (10) is also exact if both a1 and a2 are exactly given. For linear biochemical networks, e.g., those without feedback, both a1 and a2 can be exactly given (see the examples in Sec. III), similar to the LNA case. For nonlinear biochemical networks, e.g., those with feedback loops, formula (6) is in theory used to compute a1 and a2, which can be exactly given in some cases (also see the examples in Sec. III) but not in others, e.g., a gene regulatory network with multimer repression. In the case that a1 and a2 cannot be analytically given, our method is still effective, since a1 and a2 can be approximated by truncating the related series at finitely many initial terms, reaching an arbitrary pre-given accuracy owing to the fact that an → 0 as n → ∞ (see the next paragraph).
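As a concrete illustration (this sketch is ours, not part of the original derivation, and the helper names are invented), the route from the coefficient recursion to formula (10) can be coded in a few lines for the simplest birth-death process ∅ → X (rate k) and X → ∅ (unit decay rate), whose steady-state generating-function equation ksG − sG′ = 0 (s = z − 1) gives the recursion nan = kan−1 with a0 = 1; the result should reproduce the Poisson noise η² = 1/⟨n⟩:

```python
# Sketch (ours): formula (10) for the birth-death process
# 0 -> X (rate k), X -> 0 (unit rate), whose steady-state
# generating-function equation gives  n*a_n = k*a_{n-1},  a_0 = 1.
def binomial_moments(k, nmax):
    a = [1.0]                        # a_0 = 1 (conservation of probability)
    for n in range(1, nmax + 1):
        a.append(k * a[-1] / n)      # a_n = k^n / n!
    return a

def noise_intensity_sq(a):
    # formula (10): eta^2 = 2*a2/a1^2 + 1/a1 - 1
    return 2 * a[2] / a[1] ** 2 + 1 / a[1] - 1

k = 8.0
a = binomial_moments(k, 2)
print(a[1], noise_intensity_sq(a))   # -> 8.0 0.125, i.e., mean k and noise 1/k
```

Here a1 is the mean and the computed η² equals 1/k, as expected for the Poisson steady state.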
Furthermore, we point out that the above coefficients ak (k = 0, 1, 2, …) have a clear meaning. In fact, they are binomial moments and can thus be directly used to compute the probability distribution. To see this, let X be a discrete random variable that takes nonnegative integer values, and set P{X = n} = P(n) (n = 0, 1, 2, …). Assume that all the binomial moments
Bn = Σ_{k ⩾ n} C(k, n) P(k), n = 0, 1, 2, …   (11)
exist. Note that the series G(z) = Σ_n P(n)z^n is uniformly convergent and hence regular in the circle |z| < 1 (where z may be complex). Using the formula for higher-order derivatives, we have
d^nG(z)/dz^n = Σ_{k ⩾ n} k(k − 1) ⋯ (k − n + 1) P(k) z^{k−n}   (12)
Thus, we know
an = (1/n!) d^nG(z)/dz^n |_{z=1} = Σ_{k ⩾ n} C(k, n) P(k)   (13)
implying that an = Bn for all n. On the other hand, if the function G(z) is expanded as the power series at z = 1, then we have
G(z) = Σ_{n ⩾ 0} Bn (z − 1)^n   (14)
In particular, expanding the powers (z − 1)^k in Eq. (14) and collecting the coefficient of z^n (putting z = 0 yields the case n = 0) gives
P(n) = Σ_{k ⩾ n} (−1)^{k−n} C(k, n) Bk   (15)
Since an tends to zero as n goes to infinity, formula (15) can be used to compute approximate distributions reaching any pre-given accuracy by truncating the series on the right side of (15) at finitely many terms. We point out that for linear biochemical networks, e.g., those without feedback, P(n) can be exactly given since an can be exactly given, whereas for nonlinear biochemical networks, e.g., those with feedback loops, P(n) cannot in general be exactly given but can be approximated since an → 0 as n → ∞. Note that formula (15) can also be extended to the multivariate case, that is,
P(n1, …, nN) = Σ_{k1 ⩾ n1} ⋯ Σ_{kN ⩾ nN} (−1)^{(k1−n1)+⋯+(kN−nN)} C(k1, n1) ⋯ C(kN, nN) B_{k1⋯kN}   (16)
so the above discussion for the single-variable case is also effective for this case.
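The inverse transform (15) is easy to check numerically. The following sketch (ours, not from the original text) uses a case where the binomial moments are known in closed form: a Poisson(λ) variable has G(z) = e^{λ(z−1)}, hence Bk = λ^k/k!, and the truncated series (15) should recover P(n) = e^{−λ}λ^n/n!:

```python
# Sketch (ours): formula (15), P(n) = sum_{k>=n} (-1)^(k-n) C(k,n) B_k,
# checked on a Poisson(lam) variable, whose binomial moments are
# B_k = lam^k / k!.
from math import comb, exp, factorial

lam, kmax = 3.0, 60   # kmax = truncation order of the series
B = [lam ** k / factorial(k) for k in range(kmax + 1)]

def P(n):
    return sum((-1) ** (k - n) * comb(k, n) * B[k] for k in range(n, kmax + 1))

for n in range(5):
    # reconstructed P(n) next to the exact Poisson probability
    print(n, P(n), exp(-lam) * lam ** n / factorial(n))
```

Because Bk decays factorially here, the truncation error is negligible already at moderate kmax, illustrating the "any pre-given accuracy" claim above.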
Finally in this section, we emphasize the features and applicability of our power series expansion method. The extensively used moment-closure method44–46 has several shortcomings: it is nonlinear and hence inconvenient for computation; how to approximate high-order moments is unclear; and the approximate moments cannot guarantee that a pre-given accuracy is reached. In contrast, our method is linear and hence easier and more convenient for computation. Moreover, it can compute approximate noise and distributions that reach any pre-given accuracy when an or bn cannot be analytically given. In addition, we point out that for any biochemical reaction network: (1) {an} can always be computed iteratively, so computing the noise strength of any species of interest, as well as the joint probability distribution, is programmable; (2) our method can reduce computational cost, since solving a set of linear algebraic equations for an in general consumes less time than simulating the stochastic process described by the master equation.
III. APPLICATIONS
In this section, we present several representative examples to show the power of the above series expansion method. These examples include common gene models (i.e., ON-OFF models), multi-state gene models incorporating chromatin remodeling, gene auto-regulatory models, and some common biochemical reaction network modules in signaling systems. Some of them are linear and the others are nonlinear. By applying our method, we can give exact analytical expressions for computing noise intensities and even probability distributions. In addition, we use simulations based on the well-known Gillespie algorithm37 to verify our theoretical predictions.
A. A two-species reaction network
Consider a simple but generic toy model, which consists of two chemical species X1 and X2, where X1 provides a randomly fluctuating environment for X2, e.g., mRNA fluctuations randomize protein synthesis. Let n1 and n2 be the numbers of molecules of species X1 and X2. The chemical reactions read
∅ →(a1) X1, X1 →(a2) ∅, ∅ →(a3) X2, X2 →(a4) ∅   (17)
where a1, …, a4 are propensity functions that may depend on n1 and n2.
Because n1 affects the rate of X2 but n2 does not affect the rate of X1, this is an example of dynamic disorder.
The joint probability of having n1 and n2 molecules per cell of species X1 and X2 (for instance messenger RNAs and proteins, but interpretations vary with application) at time t is described by a birth-and-death Markov process with events
| (18) |
Now, we introduce the generating function G(z1, z2; t) = Σ_{n1,n2} P(n1, n2; t)z1^{n1}z2^{n2} and consider the static solution to Eq. (18). Then, Eq. (18) can be converted into a partial differential equation for G(z1, z2).
First, consider the case studied in Ref. 15, that is, a1 = λ1, a2 = n1/τ1, a3 = λ2n1, a4 = n2/τ2, where τ1 and τ2 are the mean lifetimes of X1 and X2 molecules. In this case, the static generating function satisfies
λ1s1F − (s1/τ1)∂F/∂s1 + λ2s2(1 + s1)∂F/∂s1 − (s2/τ2)∂F/∂s2 = 0   (19)
where s1 = z1 − 1, s2 = z2 − 1, and F(s1, s2) ≡ G(z1, z2). Now, we expand F(s1, s2) as F(s1, s2) = Σ_{i,j ⩾ 0} aij s1^i s2^j. Note that a00 = 1 from the conservative condition for P(n1, n2). Substituting this expansion into Eq. (19) and comparing the coefficients of the same powers of s1 and s2 gives
(i/τ1 + j/τ2)aij = λ1a_{i−1,j} + λ2[(i + 1)a_{i+1,j−1} + ia_{i,j−1}]
Note that the averages of n1 and n2 are ⟨n1⟩ = a10 = λ1τ1 and ⟨n2⟩ = a01 = λ1λ2τ1τ2, respectively. According to formula (10), we obtain the analytical expression for the n2 noise intensities
η²n2 = 1/⟨n2⟩ + (1/⟨n1⟩) · τ1/(τ1 + τ2)   (20)
which is the same as the previous result obtained by LNA.15
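The iterative scheme is easy to program for this cascade. The sketch below (ours; it restates the coefficient recursion obtained by matching powers of s1 and s2, assuming the propensities λ1, n1/τ1, λ2n1, n2/τ2 of Ref. 15) computes the low-order aij and compares the resulting noise with formula (20):

```python
# Sketch (ours): iterative computation of the coefficients a_ij for the
# two-species linear cascade, using the recursion
#   (i/tau1 + j/tau2) a_ij = lam1*a_{i-1,j}
#                          + lam2*((i+1)*a_{i+1,j-1} + i*a_{i,j-1}).
lam1, lam2, tau1, tau2 = 1.0, 10.0, 2.0, 1.0
N = 6                                   # truncation order (exact for low i, j)
a = [[0.0] * (N + 1) for _ in range(N + 1)]
a[0][0] = 1.0                           # conservation of probability
for j in range(N + 1):                  # fill row by row in j
    for i in range(N + 1):
        if i == j == 0:
            continue
        rhs = 0.0
        if i >= 1:
            rhs += lam1 * a[i - 1][j]
        if j >= 1:
            rhs += lam2 * i * a[i][j - 1]
            if i + 1 <= N:
                rhs += lam2 * (i + 1) * a[i + 1][j - 1]
        a[i][j] = rhs / (i / tau1 + j / tau2)

mean2 = a[0][1]                         # <n2> = lam1*lam2*tau1*tau2
eta2_sq = 2 * a[0][2] / mean2 ** 2 + 1 / mean2 - 1        # formula (10)
exact = 1 / (lam1 * lam2 * tau1 * tau2) + (1 / (lam1 * tau1)) * tau1 / (tau1 + tau2)
print(mean2, eta2_sq, exact)            # the last two numbers agree
```

Because the recursion is lower-triangular in j, the low-order coefficients needed for formula (10) are exact despite the truncation.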
Then, still consider the above example but set . In this case, we have the following partial differential equation for F(s1, s2):
| (21) |
Similar to the above case, we can derive the following exact formula for computing the noise intensity of n2 (denoted by ):
| (22) |
where is given by LNA, that is, . The inequality shows that LNA underestimates the noise. Refer to Fig. 1. This example indicates that LNA is an approximate method.
FIG. 1.
Shown is that LNA gives approximate noise whereas our method gives exact noise. (a) The noise strength in two-species system (17) with is taken as a function of the parameter λ1. The other parameters are λ2 = 10, τ1 = 2, τ2 = 1. (b) The noise strength in the two-species system (17) with is taken as a function of the parameter λ2. The other parameters are λ1 = 1, τ1 = 2, τ2 = 1.
Since all aij can be uniquely determined, deriving analytical expression for the steady-state joint probability distribution P(n1, n2) is possible according to formula (16) for this system. The detail is here omitted due to complexity.
B. Common gene models
The above two-component generic model may be used to describe gene expression with the promoter always in the active state, but the promoter of a gene may in fact have one active and one inactive state, or several active or inactive states, depending on the chromatin template that accumulates over time until the promoter becomes active. In this section, to show the effectiveness of our method, we consider two common gene models and derive analytical expressions for the mRNA noise, and even the analytical distribution for the ON-OFF model. In Sec. III C, we consider more general gene models, i.e., gene models of promoter progression, and give analytical results.
Case 1: Two-stage gene model
Consider a stochastic model consisting of a gene that transitions randomly between an active state A, where transcription of DNA into mRNA molecules (denoted M) is very efficient, and an inactive state I, where transcription is not possible. Refer to Fig. 2(a). The time during which the gene is in the active state A gives the "burst" of mRNA synthesis.11,12,18,47,48 The reactions describing this process read
I →(λ) A, A →(γ) I, A →(μ) A + M, M →(δ) ∅   (23)
where λ is the rate of gene activation, γ the rate of gene inactivation, μ the rate of transcription when the gene is in the active state, and δ the rate of mRNA decay. We will henceforth use m to denote the number of mRNA molecules.
FIG. 2.

Schematic of gene expression models considering promoter activity (active or inactive): (a) two-stage gene model and (b) three-stage gene model.
Let P0(m) and P1(m) represent the probabilities of having m mRNA molecules at time t when the gene is in the ON (active) and OFF (inactive) states, respectively. Then, the corresponding discrete master equation reads
∂P0(m)/∂t = λP1(m) − γP0(m) + μ(E^{−1} − I)[P0(m)] + δ(E − I)[mP0(m)]
∂P1(m)/∂t = γP0(m) − λP1(m) + δ(E − I)[mP1(m)]   (24)
where I is the identity operator and E and E^{−1} are shift operators, i.e., E[f(m)] = f(m + 1) and E^{−1}[f(m)] = f(m − 1) for any function f and any integer m. By introducing the generating functions Gi(z) = Σ_m Pi(m)z^m with i = 0, 1 and considering the steady-state solution, we can transform Eq. (24) into the following differential equations:
(z − 1)dG0/dz = λG1 − γG0 + μ(z − 1)G0
(z − 1)dG1/dz = γG0 − λG1   (25)
where all the parameters are normalized by the parameter δ, that is, λ/δ → λ, γ/δ → γ, and μ/δ → μ.
To solve Eq. (25), we Taylor expand the two functions G0(z) and G1(z) at the point z = 1, that is, Gi(z) = Σ_{n⩾0} a_n^{(i)} s^n with s = z − 1 and Fi(s) ≡ Gi(z). Let F = F0 + F1, so that F(s) = Σ_{n⩾0} an s^n with an = a_n^{(0)} + a_n^{(1)}. Note that the conservative condition for probability gives a0 = 1. Substituting these expansions into Eq. (25) yields the following set of algebraic equations:
na_n^{(0)} = λa_n^{(1)} − γa_n^{(0)} + μa_{n−1}^{(0)}, na_n^{(1)} = γa_n^{(0)} − λa_n^{(1)}   (26)
where n = 1, 2, …. It follows from Eq. (26) that
an = (μ^n/n!) · (λ)n/(λ + γ)n   (27)
where (c)n is the Pochhammer symbol, which is defined as (c)n = Γ(c + n)/Γ(c). In particular, we have
a1 = μλ/(λ + γ), a2 = (μ²/2) · λ(λ + 1)/[(λ + γ)(λ + γ + 1)]   (28)
Therefore, according to Eq. (10) and by recovering the original parameters, we obtain the explicit expression for the static noise intensity
η²m = 1/⟨m⟩ + γδ/[λ(λ + γ + δ)], with ⟨m⟩ = μλ/[δ(λ + γ)]   (29)
which is the same as a previous result.18 Furthermore, according to Eq. (15) and using Eq. (27) with recovered parameters, we obtain the analytical expression for the probability distribution of mRNA
P(m) = (1/m!)(μ/δ)^m · (λ/δ)m/((λ + γ)/δ)m · 1F1(λ/δ + m; (λ + γ)/δ + m; −μ/δ)   (30)
where 1F1(α; β; x) is a confluent hypergeometric function.49 Such an expression was previously derived under certain assumptions on the reaction rates47 but is here obtained without any such assumption.
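The two-stage model is a convenient test of the whole pipeline. The sketch below (ours) generates the binomial moments from the ratio form of Eq. (27), an/an−1 = μ(λ + n − 1)/[n(λ + γ + n − 1)] with all rates in units of δ, feeds them into the inverse transform (15), and checks that the reconstructed distribution is normalized with mean μλ/(λ + γ):

```python
# Sketch (ours): mRNA distribution of the ON-OFF model reconstructed
# from its binomial moments, Eq. (27), via the inverse transform (15).
# All rates are measured in units of the mRNA decay rate delta.
from math import comb

lam, gam, mu, kmax = 1.0, 2.0, 2.0, 80

a = [1.0]
for n in range(1, kmax + 1):
    # a_n / a_{n-1} = mu*(lam + n - 1) / (n*(lam + gam + n - 1))
    a.append(a[-1] * mu * (lam + n - 1) / (n * (lam + gam + n - 1)))

def P(m):
    return sum((-1) ** (k - m) * comb(k, m) * a[k] for k in range(m, kmax + 1))

probs = [P(m) for m in range(40)]
# total probability ~ 1 and mean ~ mu*lam/(lam + gam)
print(sum(probs), sum(m * p for m, p in enumerate(probs)))
```

For these parameters the tail beyond the truncation is negligible, so the numerical distribution matches the hypergeometric expression (30) to machine accuracy.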
Case 2: Three-stage gene model
Consider the full model of a gene, which includes three stages: transitions between active and inactive states, transcription from DNA to mRNA, and translation from mRNA to protein. Refer to Fig. 2(b). The corresponding biochemical reactions read11,48,50–54
I →(λ) A, A →(γ) I, A →(μ0) A + M, M →(μ1) M + P, M →(δ0) ∅, P →(δ1) ∅   (31)
Let P0(m, n; t) be the probability of having m mRNAs and n proteins when the gene is inactive at time t, and P1(m, n; t) be the probability of having m mRNAs and n proteins when the gene is active at time t. Then, we have two coupled equations
| (32) |
where κ0 = γ/δ1, κ1 = λ/δ1, a = μ0/δ1, b = μ1/δ0, ω = δ0/δ1, and τ = δ1t. If we consider the steady-state case and introduce the two generating functions Gk(z1, z2) = Σ_{m,n} Pk(m, n)z1^m z2^n (k = 0, 1), then Eq. (32) becomes two coupled partial differential equations
| (33) |
where s1 = z1 − 1, s2 = z2 − 1, and Fk(s1, s2) ≡ Gk(z1, z2). Denote F ≡ F0 + F1. Now, Taylor expand Fk(s1, s2) and F(s1, s2) at the origin and write Fk(s1, s2) = Σ_{i,j⩾0} a_{ij}^{(k)} s1^i s2^j and F(s1, s2) = Σ_{i,j⩾0} a_{ij} s1^i s2^j with a_{ij} = a_{ij}^{(0)} + a_{ij}^{(1)}. Substituting the expansions of Fk(s1, s2) into Eq. (33) and comparing coefficients of the same powers, we obtain
| (34) |
where i, j = 0, 1, 2, …. The iteration (34), combined with the conservative condition a00 = 1, uniquely determines all a_{ij}^{(0)} and a_{ij}^{(1)}, and in particular the low-order coefficients a10, a01, a20, a11, and a02.
To that end, according to Eq. (10), we obtain the analytical expressions for the mRNA and protein noise intensities
η²m = 1/⟨m⟩ + γδ0/[λ(λ + γ + δ0)], with ⟨m⟩ = μ0λ/[δ0(λ + γ)]   (35)
and
η²p = 1/⟨p⟩ + (1/⟨m⟩) · δ1/(δ0 + δ1) + (γ/λ) · δ0δ1(λ + γ + δ0 + δ1)/[(λ + γ + δ0)(λ + γ + δ1)(δ0 + δ1)]   (36)
respectively. These formulae for computing noise intensities are not approximate but exact.
We point out that we have not found a nice expression for the joint distribution in the three-stage model as in the two-stage gene model, but our series method can give an approximate distribution up to any pre-given accuracy by truncating the expansion at finitely many initial terms. In particular, our method is convenient for numerically computing the distribution.
C. Gene models of promoter progression
Besides transcription and translation, gene expression also involves the recruitment of transcription factors and polymerases, transitions between active and inactive states of the promoter, and chromatin remodeling (CR). In particular, it has been verified that CR is an important factor influencing gene expression. In fact, several recent experimental studies have implicated fluctuations in chromatin state between transcriptionally active and inactive conformations as a major source of cell-to-cell variability in gene expression.55–59 Here, we introduce multi-state stochastic models of gene expression, which extend the previous gene models by incorporating slow dynamics of inactivation or activation. These models assume that the gene activity proceeds sequentially through ON states (only in which mature mRNAs can be produced) and/or OFF states (in which no transcription occurs). Refer to Figs. 3(a) and 3(b): Fig. 3(a) considers the case in which the promoter has one active state and several inactive states (the corresponding model is hence called the multi-OFF model or the multi-OFF mechanism), whereas Fig. 3(b) considers the case in which the promoter has one inactive state and several active states (the corresponding model is hence called the multi-ON model or the multi-ON mechanism). In these models, the activity states of the promoter form a loop, and the transitions between two states may be reversible or irreversible.
FIG. 3.
Multi-state gene expression models and chromatin template-controlled noise in mRNA: (a) schematic of a multi-inactive-state model of gene expression; (b) schematic of a multi-active-state model of gene expression; (c) the mean noise strength ratio as a function of the two experimentally measurable indices τon and τoff, where both noise strengths are obtained from one of 1000 sets of randomly sampled τ1, …, τK with fixed τon and τoff for fixed K ≡ L = R = 10 (where L and R represent the numbers of OFF and ON states, respectively). The other parameters are μ = 20, δ = 1, K = 10. (d), (e), and (f) Histograms of the mRNA number in the two cases (red and green), where (d), (e), and (f) correspond to points D, E, and F labeled in (c), respectively.
Case 1: The multi-OFF mechanism
Assume that the gene activity proceeds sequentially through the ON state, several reversible and irreversible OFF states, and returns to the ON state, forming a loop. Our model considers two main processes: dynamic transitions between active (ON) and inactive (OFF) states of the promoter, denoted by A and I, respectively, and transcription of the DNA sequence into an mRNA molecule (denoted by M). Here, transcription is very efficient in the active state, and is not possible in the inactive state. Moreover, the time spent in the active state is the “burst” of mRNA synthesis.
The corresponding biochemical reactions read
| (37) |
where λ0 ≡ γ is the rate of gene inactivation, λL is the rate of gene activation, λk is the transition rate from the kth OFF state to the (k + 1)th OFF state with k = 1, 2, …, L − 1, λ′k is the transition rate from the (k + 1)th OFF state to the kth OFF state with k = 1, 2, …, L − 1, μ is the synthesis rate of mRNA, and δ is the degradation rate of mRNA. Note that if L = 1, the corresponding model will become our familiar two-state gene model or the telegraph model. In addition, for convenience, we call the multi-OFF process irreversible if all λ′k = 0; reversible if all λ′k ≠ 0; and partially reversible in other cases. Now, we introduce the chemical master equation for mRNA, and henceforth use m to denote the number of mRNAs. The mRNA master equation reads
| (38) |
where we define λ′0 ≡ 0. In Eq. (38), P0 represents the probability of having m mRNAs in the ON state at time t, and Pk represents the probability of having m mRNAs in the kth OFF state at time t (1 ⩽ k ⩽ L). The first equation describes how mRNA is generated in the active state, the subsequent L − 1 equations describe the transitions between inactive states of the promoter, and the final equation describes the transitions between the Lth and (L − 1)th inactive states as well as between the Lth inactive state and the active state.
For convenience of analysis, we consider static solutions only. Introduce the generating functions Gk(z) for Pk(m) (0 ⩽ k ⩽ L) and the total generating function G(z) = Σ_{k=0}^{L} Gk(z) for P(m) = Σ_{k=0}^{L} Pk(m). This leads to the following differential equations:
| (39) |
where Fj(s) ≡ Gj(z) with s = μ(z − 1) and 0 ⩽ j ⩽ L. Now, we Taylor expand every function Fj(s) as Fj(s) = Σ_{n⩾0} c_n^{(j)} s^n. Note that if we Taylor expand F(s) = Σ_{n⩾0} cn s^n, then the binomial moments of the mRNA number are an = μ^n cn with n ⩾ 0. Substituting the expansion of Fj(s) into Eq. (39) yields
| (40) |
where the matrix A describes the pattern of transitions between the activity states of the promoter, and cn = (c_n^{(0)}, c_n^{(1)}, …, c_n^{(L)})^T, n = 0, 1, 2, …. Equation (40) is a set of algebraic equations for cn, which can be solved iteratively. In fact, note that A is an M-matrix, so we can express the determinant as det(nI − A) = nM1(n) and the entries of the adjugate matrix (nI − A)* as products of factors (n + αk), where M1(n) is the polynomial of order L with M1(n) = (n + β1) ⋯ (n + βL), and all αk ⩾ 0, βk ⩾ 0 are constants depending on the reaction rates.
| (41) |
In addition, it follows from Eq. (39) that F′(s) = F0(s), implying that (n + 1)c_{n+1} = c_n^{(0)} for any n. In other words, we can give the expressions of all cn. Also, the conservative condition F(0) = 1 implies c0 = 1. Then, according to Eq. (15), we obtain the analytical expression of the steady-state probability distribution
| (42) |
In particular, we can give explicit expressions for c1 and c2. For example, in the irreversible case, i.e., all λ′k = 0, we have
| (43) |
where if k = L − 1, then we define . According to G′(z)|z = 1 = μF′(s)|s = 0 = μc1, G′′(z)|z = 1 = μ2F′′(s)|s = 0 = 2μ2c2 and formula (10), we obtain the analytical expression for the square of the noise intensity in the irreversible case
| (44) |
where ⟨m⟩ = μτon/(τon + τoff), τon = 1/γ, τoff = Σ_{k=1}^{L} τk, b = μ/γ, and τk = 1/λk. Recall that τon and τoff represent the total ON and OFF times, respectively, and b represents the mean burst size that characterizes the burst statistics. Note that L = 1 corresponds to the common ON-OFF model of gene expression. Also note that for fixed τon and τoff, the noise intensity ηm monotonically decreases with respect to the number of OFF states (L), implying an important fact: the multi-OFF mechanism plays a role in attenuating the noise. Such a qualitative conclusion still holds in the general case, including the reversible and partially reversible cases.
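The linear-algebra formulation behind this result can be made concrete. In the sketch below (ours; it assumes irreversible, equal-rate OFF steps with δ = 1, and the function names are invented), the vectors cn of per-state coefficients satisfy (nI − A)cn = μ(c_{n−1}^{(0)}, 0, …, 0)^T, the binomial moments are the component sums, and formula (10) then gives ηm; the code checks that L = 1 reproduces the ON-OFF result (29) and that the noise indeed decreases with L at fixed τon and τoff:

```python
# Sketch (ours): binomial moments of the multi-OFF promoter cycle by
# solving (n*I - A) c_n = mu*(c_{n-1}[0], 0, ..., 0); state 0 = ON,
# states 1..L = OFF with equal irreversible rates, mRNA decay = 1.
def solve(Amat, b):
    # plain Gaussian elimination with partial pivoting
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(Amat)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def mRNA_noise_sq(L, gam, tau_off, mu):
    lam = L / tau_off                       # L equal OFF steps, total mean OFF time tau_off
    A = [[0.0] * (L + 1) for _ in range(L + 1)]
    A[0][0], A[1][0] = -gam, gam            # ON -> first OFF state
    for k in range(1, L + 1):
        A[k][k] -= lam
        A[(k + 1) % (L + 1)][k] += lam      # OFF_k -> OFF_{k+1}, OFF_L -> ON
    # c_0 = stationary promoter distribution: A c = 0 with sum(c) = 1
    A0 = [row[:] for row in A]
    A0[-1] = [1.0] * (L + 1)
    c = solve(A0, [0.0] * L + [1.0])
    B = [1.0]                               # B_0 = 1
    for n in (1, 2):
        rhs = [mu * c[0]] + [0.0] * L       # production acts only in the ON state
        nIA = [[(n if i == j else 0.0) - A[i][j] for j in range(L + 1)]
               for i in range(L + 1)]
        c = solve(nIA, rhs)
        B.append(sum(c))                    # binomial moment B_n
    return 2 * B[2] / B[1] ** 2 + 1 / B[1] - 1   # formula (10)

etas = [mRNA_noise_sq(L, gam=1.0, tau_off=1.0, mu=20.0) for L in range(1, 6)]
print(etas)   # decreasing in L; the L = 1 entry matches formula (29)
```

For L = 1 with γ = λ = δ = 1 and μ = 20, formula (29) gives η²m = 1/10 + 1/3, which the first entry reproduces, while the remaining entries illustrate the noise-attenuating effect of the multi-OFF mechanism.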
Besides the monotonicity of ηm with respect to L, we numerically find that if τon is dominant, then the noise level in the case of the multi-OFF mechanism is always lower than that in the case of the multi-ON mechanism, and in particular, if τon is equal to τoff, then the former is equal to the latter. Refer to Fig. 3(c). For clarity, we also show the probability distributions of the mRNA number corresponding to three representative points labeled in Fig. 3(c); refer to Figs. 3(d)–3(f). These numerical results indicate that the ratio of τoff to τon has an important influence on the mRNA noise in the case of the multi-OFF mechanism.
Case 2: The multi-ON mechanism
Assume that the gene activity proceeds sequentially through the OFF state and several irreversible ON states, and returns to the OFF state, forming a loop (below, only the irreversible case is considered for simplicity, but other cases, such as partially reversible or reversible ones, may be treated similarly). In this case, the corresponding biochemical reactions read
| (45) |
where the meaning of all the symbols is the same as in Case 1. The chemical master equation corresponding to this set of reactions can be expressed as
| (46) |
with γ0 ≡ λ. Introduce the generating functions Gi(z) for Pi(m) (i = 0, 1, …, R) and the new variable s ≡ μ(z − 1), and consider the static solution. Then,
| (47) |
where all the parameters including μ are rescaled by δ. Furthermore, to solve Eq. (47), we introduce the transformation Fi(s) = e−sGi(s). Then, Eq. (47) becomes
| (48) |
Note that Eq. (48) has the form of Eq. (39). Therefore, we can give the expression of the sum function F ≡ F0 + F1 + … + FR. Furthermore, we can obtain the analytical expression for the steady-state probability distribution of mRNA. That is,
| (49) |
where all αi and βi are constants depending only on the reaction rates.
In particular, we can obtain the analytical mean and variance of mRNA in the irreversible case
| (50) |
where , and . Furthermore, according to formula (10), we obtain the analytical expression for the square of the noise intensity in the irreversible case
| (51) |
which is a monotonically decreasing function of R when the remaining (rescaled) parameters are held fixed. This implies that the multi-ON mechanism, like the multi-OFF mechanism, always reduces noise. In particular, when R = 1, which corresponds to the common ON-OFF model (denote its noise intensity by ηon − off), the noise intensity reaches its maximum, whereas it attains its minimum in the limit of large R. Moreover, we have
| (52) |
Some numerical results are shown in Figs. 3(c)–3(f). We observe that if τon is dominant over τoff, then the noise level under the multi-ON mechanism is always higher than that under the multi-OFF mechanism, and in particular, if τon = τoff, then the mean noise level is the same in the two cases, independently of the detailed transitions between activity states of the promoter; refer to Fig. 3(c). For clarity, we also show the histograms of the mRNA number corresponding to the three representative points labeled in Fig. 3(c) in Figs. 3(d)–3(f). These numerical results indicate that the ratio of τon to τoff also strongly influences the mRNA noise under the multi-ON mechanism, as it does under the multi-OFF mechanism.
In addition, combining the analyses of the above two cases, we obtain lower bounds on the mRNA noise intensity in the common ON-OFF model:
| (53) |
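The multi-state results above can also be checked numerically without any analytical machinery: truncating the chemical master equation of a multi-state promoter at a sufficiently large mRNA count turns the steady-state condition into a linear system. The sketch below is our own construction, not code from the paper (all function and variable names are ours); as a sanity check it reproduces the well-known Fano factor 1 + μγ/((λ + γ)(δ + λ + γ)) of the common ON-OFF model.

```python
import numpy as np

def promoter_cme_steady_state(K, mu, delta, M):
    """Steady state of a multi-state promoter CME, truncated at M mRNA copies.

    K     : (S, S) promoter switching matrix, K[j, i] = rate of state i -> j (i != j)
    mu    : length-S array of transcription rates, one per promoter state
    delta : mRNA degradation rate
    """
    S = K.shape[0]
    N = S * (M + 1)
    A = np.zeros((N, N))
    idx = lambda i, m: i * (M + 1) + m
    for i in range(S):
        for m in range(M + 1):
            a = idx(i, m)
            for j in range(S):                 # promoter switching
                if j != i:
                    A[idx(j, m), a] += K[j, i]
                    A[a, a] -= K[j, i]
            if m < M:                          # transcription (birth)
                A[idx(i, m + 1), a] += mu[i]
                A[a, a] -= mu[i]
            if m > 0:                          # degradation (death)
                A[idx(i, m - 1), a] += delta * m
                A[a, a] -= delta * m
    # solve A p = 0 together with the normalization sum(p) = 1
    B = np.vstack([A, np.ones(N)])
    b = np.zeros(N + 1); b[-1] = 1.0
    p, *_ = np.linalg.lstsq(B, b, rcond=None)
    pm = p.reshape(S, M + 1).sum(axis=0)       # marginal mRNA distribution
    ms = np.arange(M + 1)
    mean = (ms * pm).sum()
    var = ((ms - mean) ** 2 * pm).sum()
    return mean, var, pm
```

The same routine applies to the multi-OFF and multi-ON schemes by enlarging K, which is one way to reproduce the trends of Fig. 3 numerically.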
Case 3: Other mechanisms
In general, the pattern of transitions between active and inactive states of the promoter is complex, particularly in eukaryotic cells. Here, we analyze only one example, which considers one inactive state and three active states of the gene promoter, with transitions allowed both between inactive and active states and among active states (see Fig. 4). The corresponding master equations read as follows:
| (54) |
FIG. 4.
The effect of multi-state mechanism on the noise in gene expression. (a) Schematic of multi-inactive-active-state model of gene expression. (b)–(d) The noise strength as a function of the parameter k ≡ λ4 = λ′4 = λ5 = λ′5 = λ6 = λ′6. The other parameter values used in computation are λ1 = 2, λ2 = 1, λ3 = 1, γ2 = 1, γ3 = 1, μ = 5 and (b) γ1 = 1, (c) γ1 = 2, (d) γ1 = 1.7.
To solve Eq. (54), we introduce the generating functions Fi (i = 0, 1, 2, 3) and the new variable s ≡ μ(z − 1), and consider the steady-state case. This leads to the following set of coupled ordinary differential equations:
| (55) |
where all the parameters including μ are rescaled by δ. Furthermore, for Eq. (55), we introduce the transformation Fi(s) = e−sGi(s), which transforms Eq. (55) into the following:
| (56) |
Summing the above four equations gives F0(s) = −F′(s), where F ≡ F0 + F1 + F2 + F3. Now, we Taylor expand every function Fj(s) at s = 0; expanding F(s) in the same way, each coefficient of F is the sum of the corresponding coefficients of the Fj (n ⩾ 0). Substituting these expansions into Eq. (56) yields
| (57) |
where n = 0, 1, 2, …, and the zeroth-order coefficients are fixed by the normalization F(0) = 1. For convenience, introduce the following matrices:
Then, according to F0(s) = −F′(s), we know
| (58) |
Using Eq. (57) in the case of n = 1 and again according to F0(s) = −F′(s), we obtain
| (59) |
Note that
| (60) |
Thus, we obtain the analytical expression of noise intensity
| (61) |
where c1 and c2 are given by Eqs. (58) and (59), respectively. We point out that Eq. (61) reproduces the result obtained for the above multi-ON mechanism with R = 3 if some rate constants related to transitions between active and inactive states are set to zero. In particular, we find that if λ1 = λ2 = λ3 ≡ λ, then the mean and the noise level are independent of the reversible transitions between active states, i.e., they do not depend on the parameters λ′4, λ′5, λ′6. In addition, the analytical distribution can also be given using Eq. (15) and computing Eq. (57), and can be expressed in terms of a confluent hypergeometric function. The details are omitted here.
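The step from derivatives of the generating function at z = 1 to the noise intensity, used repeatedly above, can be illustrated on a toy example. Assuming only the standard identities ⟨m⟩ = G′(1) and ⟨m(m − 1)⟩ = G″(1), a Poisson-type generating function G(z) = exp(μ(z − 1)) must give a squared noise intensity of 1/⟨m⟩:

```python
import numpy as np

mu = 5.0
G = lambda z: np.exp(mu * (z - 1.0))   # toy generating function (Poisson)

# central finite differences for G'(1) and G''(1)
h = 1e-4
G1 = (G(1 + h) - G(1 - h)) / (2 * h)
G2 = (G(1 + h) - 2 * G(1) + G(1 - h)) / h**2

mean = G1                          # <m> = G'(1)
var = G2 + G1 - G1**2              # variance, using <m(m-1)> = G''(1)
eta_sq = var / mean**2             # squared noise intensity
```

For the Poisson case the variance equals the mean, so eta_sq reduces to 1/μ, the familiar unregulated-birth-death benchmark against which the multi-state results are compared.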
Now, we perform numerical computations. For clarity, we consider the particular case k ≡ λ4 = λ′4 = λ5 = λ′5 = λ6 = λ′6. In this case, Figs. 4(b)–4(d) show three representative patterns of the squared noise intensity versus k: it increases monotonically with k for fixed γ1 = 1 (Fig. 4(b)), decreases monotonically with k for fixed γ1 = 2 (Fig. 4(c)), and has a single maximum in k for fixed γ1 = 1.7 (Fig. 4(d)).
This example indicates that the pattern of transitions between activity states of the promoter is a factor that can influence the noise level.
D. Gene auto-regulatory models
In this subsection, we consider two common gene auto-regulatory models, the auto-activation model and the auto-repression model, which are nonlinear biochemical networks, and derive analytical expressions for the gene expression noise. For self-regulatory systems, some analytical results already exist; see, e.g., Refs. 60 and 61.
Case 1: Auto-activation
This model is described by the following biochemical reactions, with the protein switching the gene from the OFF state to the ON state and vice versa:
| (62) |
Denote the protein probability distributions in the I and A states as P0(m; t) and P1(m; t), respectively. Then, the corresponding master equations read62
| (63) |
Define the generating functions G0 and G1 and consider the steady-state solutions. Then, it follows from Eq. (63) that
| (64) |
Let G = G0 + G1. Then, we have
| (65) |
where G satisfies the normalization condition G(1) = 1 due to the conservation of total probability.
If we Taylor expand G(z) at the point z = 1, then the coefficients in the corresponding series cannot be given analytically. However, if we Taylor expand G(z) at the point z = δ/(a + δ), owing to the form of the coefficient of the second-order derivative of G(z) in Eq. (65), then the corresponding coefficients can be given explicitly. In fact, substituting the expansion
| (66) |
into Eq. (65) and comparing the coefficients of like powers of z − δ/(a + δ) yields
| (67) |
Thus, we obtain the analytical expression for generating function G(z)
| (68) |
where the normalization condition G(1) = 1 fixes the remaining constant. Furthermore, we obtain the analytical expression for the steady-state distribution
| (69) |
In addition, we can obtain analytical formulae for the mean and variance according to ⟨m⟩ = G′(1) and σ²m = G″(1) + G′(1) − [G′(1)]². Moreover, these formulae show that, in the limit a → 0, the analytical expression for the noise intensity without feedback is
| (70) |
which is the same as the known results.18 In addition, by computation we know
Using the property on derivatives of confluent hypergeometric functions, we can show that the noise intensity computed by ηm = σm/⟨m⟩ is not a monotonically increasing but monotonically decreasing function in a (referring to Fig. 5(c)). This indicates that the traditional conclusion that positive feedback enlarges noise15 is not correct for the circuit described by biochemical reactions (60).
FIG. 5.
The effect of feedback mechanisms on the noise in gene expression. (a) Schematic of auto-activation model of gene expression. (b) Schematic of auto-repression model of gene expression. (c) The noise strength as a function of the positive feedback strength (a) for system (62), where the parameters are λ = γ = 1, μ = 5, δ = 0.1. (d) The noise strength as a function of the negative feedback strength (r) for system (71), where the parameters are λ = γ = 1, μ = 5, δ = 0.1.
Case 2: Auto-repression
Consider the following biochemical reactions, which describe the auto-repression model of gene expression:
| (71) |
Let P0(m; t) and P1(m; t) be the probabilities of having m proteins at time t in the I and A states, respectively. Then, the corresponding master equations read
| (72) |
Introduce two generating functions and the total generating function G = G0 + G1, and consider the steady-state solutions to Eq. (72). Similarly to the case of auto-activation, we can obtain the analytical expression for G(z), that is,
| (73) |
Furthermore, we can obtain the analytical expression for the steady-state probability distribution
| (74) |
In addition, according to the formulae ⟨m⟩ = G′(1) and σ²m = G″(1) + G′(1) − [G′(1)]², we can compute the mean and variance, and hence the noise intensity ηm = σm/⟨m⟩. Note that if r → 0, then we recover the same analytical expression for the noise intensity as in Eq. (70), the case of no feedback.
Again using the properties of derivatives of confluent hypergeometric functions, we can show that the noise intensity computed by ηm = σm/⟨m⟩ is not a monotonically decreasing but a monotonically increasing function of r (see Fig. 5(d)). This indicates that the traditional conclusion that negative feedback reduces noise15 does not hold for the circuit described by the biochemical reactions (71).
Our numerical results thus show that feedback has a counter-intuitive effect (see Figs. 5(c) and 5(d)). A similar conclusion was obtained in Ref. 6, where more numerical results were presented.
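The counter-intuitive effect of positive feedback can be probed numerically as well. The sketch below is our own reading of scheme (62), not the paper's code: we assume the protein promotes the I→A (OFF→ON) switch at rate a·m on top of the basal rate λ, with the remaining parameter values taken from the caption of Fig. 5; the exact propensities should be checked against reactions (62) before relying on the output. At a = 0 the model reduces to the common ON-OFF model, so the Fano factor must equal 1 + μγ/((λ + γ)(δ + λ + γ)) ≈ 2.19, which provides a built-in consistency check.

```python
import numpy as np

# Parameters from the Fig. 5 caption; the feedback propensity a*m is our assumption.
lam, gam, mu, delta = 1.0, 1.0, 5.0, 0.1
M = 120                                   # protein truncation level

def steady_state(a):
    N = 2 * (M + 1)                       # joint states: (gene OFF/ON, protein count m)
    A = np.zeros((N, N))
    off = lambda m: m                     # index of (OFF, m)
    on = lambda m: (M + 1) + m            # index of (ON, m)
    for m in range(M + 1):
        # gene switching: OFF -> ON at lam + a*m (assumed), ON -> OFF at gam
        r = lam + a * m
        A[on(m), off(m)] += r;   A[off(m), off(m)] -= r
        A[off(m), on(m)] += gam; A[on(m), on(m)] -= gam
        # protein production in the ON state, degradation in both states
        if m < M:
            A[on(m + 1), on(m)] += mu; A[on(m), on(m)] -= mu
        if m > 0:
            for s in (off, on):
                A[s(m - 1), s(m)] += delta * m; A[s(m), s(m)] -= delta * m
    B = np.vstack([A, np.ones(N)]); b = np.zeros(N + 1); b[-1] = 1.0
    p, *_ = np.linalg.lstsq(B, b, rcond=None)
    pm = p[:M + 1] + p[M + 1:]            # marginal protein distribution
    ms = np.arange(M + 1)
    mean = (ms * pm).sum()
    var = ((ms - mean) ** 2 * pm).sum()
    return mean, var
```

Sweeping a and plotting the resulting noise intensity then traces out a curve of the kind shown in Fig. 5(c), under the stated assumption on the feedback propensity.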
E. A signaling system with autocatalytic kinase
Here we consider a simple enzymatic futile cycle, which is schematically shown in Fig. 6(a). This mechanism is ubiquitous throughout biological systems and is encountered as a recurrent control motif in such diverse regulatory processes as metabolism, GTPase cycles, mitogen-activated protein kinase (MAPK) cascades, glucose mobilization, cell division/apoptosis, checkpoint control, actin treadmilling, and membrane transport as well as two-component systems and phosphorelays in microbial stress-response signaling pathways (refer to Refs. 63 and 64 and references therein). All the above enzymatic futile cycle examples can be summarized into the following simple biochemical kinetic system
| (75) |
where the reaction scheme for PdPC with autocatalytic kinase depicts the molecular species E, E*, and ATP combining in the forward reaction at rate k′1 to produce 2E* and ADP, while the backward reaction occurs at rate k′−1. This reaction is autocatalytic because E* serves as a catalyst for itself. The second reaction can be interpreted in a similar manner.
FIG. 6.
(a) Schematic of the enzymatic futile cycle reaction mechanism. (b) The noise strength as a function of the total number NT, showing that there is an extreme, where the parameters are k1 = 1, k−1 = 1, k2 = 10, k−2 = 0.01.
To avoid clutter, we let ATP, ADP, and Pi be absorbed into the rate constants. Thus, the above reactions can be rewritten as
| (76) |
which satisfies the conservation condition [E] + [E*] = ET (the total concentration, assumed to be constant). Let n denote the number of E* molecules, NT the total number of kinase molecules (a constant), and P(n; t) the probability of having n molecules at time t. Then, the corresponding master equation reads
| (77) |
Introducing the generating function G(z; t) for P(n; t), we obtain a partial differential equation for G(z; t):
| (78) |
Consider the steady-state solution to Eq. (78) and Taylor expand G(z) at z = 1. Substituting this expansion into the above equation and comparing the coefficients of the zeroth- and first-order powers of (z − 1) leads to
where C is a normalization constant. According to formula (10), we obtain the analytical expression for the noise intensity
| (79) |
In addition, we can obtain the following analytical steady-state distribution
| (80) |
A similar expression was derived in Ref. 64.
Interestingly, we numerically find that there exists an optimal total number of kinase molecules, NT, at which the noise reaches its highest level; see Fig. 6(b). In addition, we find that the total number of kinase molecules strongly influences the generation of bistability. More results for this system will be published elsewhere.
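Because n, the number of E* molecules, performs a one-dimensional birth-death process, a steady-state distribution of the form of Eq. (80) can be checked by the standard product-form solution. The propensities below are our reading of scheme (76), with parameter values from the caption of Fig. 6; they should be verified against the paper's master equation (77) before use.

```python
import numpy as np

# Parameters from the Fig. 6 caption; the propensities are our assumption.
k1, km1, k2, km2 = 1.0, 1.0, 10.0, 0.01

def steady_state(NT):
    """Steady state of a 1D birth-death chain for n = number of E* molecules."""
    n = np.arange(NT + 1)
    wp = k1 * n * (NT - n) + km2 * (NT - n)   # births: autocatalysis + basal E -> E*
    wm = km1 * n * (n - 1) + k2 * n           # deaths: reverse autocatalysis + E* -> E
    # product-form stationary solution: P(n+1)/P(n) = wp(n)/wm(n+1)
    logp = np.zeros(NT + 1)
    for j in range(NT):
        logp[j + 1] = logp[j] + np.log(wp[j]) - np.log(wm[j + 1])
    p = np.exp(logp - logp.max())             # work in logs to avoid overflow
    p /= p.sum()
    mean = (n * p).sum()
    var = ((n - mean) ** 2 * p).sum()
    return mean, var, p
```

Sweeping NT and computing the noise intensity from the returned mean and variance should, if the assumed propensities are correct, reproduce the nonmonotonic profile of Fig. 6(b).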
IV. CONCLUSION AND DISCUSSION
Molecular noise in biochemical systems or networks is inevitable due to the low copy numbers of the species molecules participating in the reactions. Here, we have proposed a general yet exact method for computing the noise and the probability distribution, which can be applied to arbitrary linear or nonlinear biochemical reaction networks. Through examples, we have shown the power of this power series expansion method, which is particularly effective for linear biochemical systems since each coefficient an in the series expansion of the generating function G(z) is computed iteratively and hence can be given analytically. As such, the noise strength for any species of interest and the joint probability can be given explicitly. For nonlinear biochemical networks, an is likewise computed iteratively in all cases; it can be given analytically in some cases, and where it cannot, it can be approximated to any pre-given accuracy. In addition, we have given exact analytical formulae for the noise intensities in several common biochemical networks by applying this method. We have shown that the LNA is effective only for linear networks and would overestimate or underestimate the noise in nonlinear networks. Moreover, positive (negative) feedback does not always enlarge (reduce) noise but can have the opposite effect, implying that previous conclusions on the effect of feedback on noise need to be revised. In addition, analytical distributions for two common nonlinear biochemical modules have also been given here for the first time.
We point out that our method is linear and hence has several advantages over the following existing methods. (1) The moment-closure method44–46,62 is in general nonlinear since low-order moments often depend on higher-order moments, so that nonlinear approximations are required. Moreover, there is no general rule for choosing a closure scheme, which instead depends on the specific biochemical network, and common closure schemes do not guarantee a pre-given accuracy since the high-order moments do not tend to zero as the order goes to infinity. Our method overcomes these shortcomings mainly because it is linear and the related series is convergent, although how to choose the points at which the power series are expanded deserves further study. (2) The LNA6,24–29 can only be used to obtain low-order approximations to the noise, while our method can numerically compute arbitrarily high-order approximations to the noise and the probability distributions, and the resulting approximations can reach arbitrary accuracy. (3) Although linear, the variational method proposed in Ref. 32 meets difficulties in handling high-dimensional systems and gives only approximate results. In contrast, our method, which is also linear, can be applied to a linear or nonlinear biochemical network of arbitrary dimension, for which we need only solve a set of linear algebraic equations.
Our method can also be used to study noise propagation in signaling systems for which analytical solutions to the corresponding master equations may not exist, and it gives exact rather than approximate results for the propagated noise. Moreover, the method is programmable since our analytical formulae for computing noise strengths depend only on the first- and second-order derivatives of the generating functions at a certain point of the argument. Our method can even be used to study noise propagation in signaling networks in terms of probability distributions.
With exact formulae available for computing noise, our next task is to elucidate the roles and functions of noise in controlling stochastic fluctuations in biochemical network modules, e.g., gene auto-regulatory motifs, feed-forward loops, and cis-regulatory modules. In doing so, we anticipate that some previous results on noise obtained by the LNA may need to be revised. In addition, by applying the power series expansion method proposed here, it is possible to derive analytical probability distributions governed by biochemical master equations in more general biochemical reaction networks. This is significant since a distribution in general contains more accurate stochastic information than the noise intensity, a commonly used index.
ACKNOWLEDGMENTS
This work was partially supported by Grant Nos. 91230204 (T.Z.), 30973980 (T.Z.), 11005162 (J.Z.), 2010CB945400 (T.Z.), 10451027501005652 (J.Z.), 20100171120039 (J.Z.), 2012J2200017 (J.Z.), P50GM76516 (Q.N.), R01GM67247 (Q.N.), and DMS-1161621 (Q.N.).
REFERENCES
1. Elowitz M. B. and Leibler S., Nature (London) 403, 335 (2000). doi:10.1038/35002125
2. Paulsson J., Berg O. G., and Ehrenberg M., Proc. Natl. Acad. Sci. U.S.A. 97, 7148 (2000). doi:10.1073/pnas.110057697
3. Boettiger A. N. and Levine M., Science 325, 471 (2009). doi:10.1126/science.1173976
4. Raj A. et al., Nature (London) 463, 913 (2010). doi:10.1038/nature08781
5. Eldar A. and Elowitz M. B., Nature (London) 467, 167 (2010). doi:10.1038/nature09326
6. Pedraza J. M. and Paulsson J., Science 319, 339 (2008). doi:10.1126/science.1144331
7. Sanchez A. and Kondev J., Proc. Natl. Acad. Sci. U.S.A. 105, 5081 (2008). doi:10.1073/pnas.0707904105
8. Sanchez A. et al., PLOS Comput. Biol. 7, e1001100 (2011). doi:10.1371/journal.pcbi.1001100
9. Berg O. G., J. Theor. Biol. 71, 587 (1978). doi:10.1016/0022-5193(78)90326-0
10. Peccoud J. and Ycart B., Theor. Popul. Biol. 48, 222 (1995). doi:10.1006/tpbi.1995.1027
11. Friedman N., Cai L., and Xie X. S., Phys. Rev. Lett. 97, 168302 (2006). doi:10.1103/PhysRevLett.97.168302
12. Shahrezaei V. and Swain P. S., Proc. Natl. Acad. Sci. U.S.A. 105, 17256 (2008). doi:10.1073/pnas.0803850105
13. Jia T. and Kulkarni R. V., Phys. Rev. Lett. 106, 058102 (2011). doi:10.1103/PhysRevLett.106.058102
14. McAdams H. H. and Arkin A., Proc. Natl. Acad. Sci. U.S.A. 94, 814 (1997). doi:10.1073/pnas.94.3.814
15. Paulsson J., Nature (London) 427, 415 (2004). doi:10.1038/nature02257
16. Paulsson J., Phys. Life Rev. 2, 157 (2005). doi:10.1016/j.plrev.2005.03.003
17. Thattai M. and van Oudenaarden A., Proc. Natl. Acad. Sci. U.S.A. 98, 8614 (2001). doi:10.1073/pnas.151588598
18. Kepler T. B. and Elston T. C., Biophys. J. 81, 3116 (2001). doi:10.1016/S0006-3495(01)75949-8
19. Ozbudak E. M. et al., Nat. Genet. 31, 69 (2002). doi:10.1038/ng869
20. Raser J. M. and O'Shea E. K., Science 309, 2010 (2005). doi:10.1126/science.1105891
21. Dobrzynski M. and Bruggeman F. J., Proc. Natl. Acad. Sci. U.S.A. 106, 2583 (2009). doi:10.1073/pnas.0803507106
22. Rinott R., Jaimovich A., and Friedman N., Proc. Natl. Acad. Sci. U.S.A. 108, 6329 (2011). doi:10.1073/pnas.1013148108
23. Huh D. and Paulsson J., Nat. Genet. 43, 95 (2011). doi:10.1038/ng.729
24. Hornung G. and Barkai N., PLOS Comput. Biol. 4, e8 (2008). doi:10.1371/journal.pcbi.0040008
25. Hooshangi S., Thiberge S., and Weiss R., Proc. Natl. Acad. Sci. U.S.A. 102, 3581 (2005). doi:10.1073/pnas.0408507102
26. Van Kampen N. G., Stochastic Processes in Physics and Chemistry (Elsevier, Amsterdam, 2007).
27. Scott M., Hwa T., and Ingalls B., Proc. Natl. Acad. Sci. U.S.A. 104, 7402 (2007). doi:10.1073/pnas.0610468104
28. Elf J. and Ehrenberg M., Genome Res. 13, 2475 (2003). doi:10.1101/gr.1196503
29. Zhang J. J., Yuan Z. J., and Zhou T., Phys. Biol. 6, 046009 (2009). doi:10.1088/1478-3975/6/4/046009
30. Warren P. B., Tanase-Nicola S., and ten Wolde P. R., J. Chem. Phys. 125, 144904 (2006). doi:10.1063/1.2356472
31. Lan Y. H. and Papoian G. A., J. Chem. Phys. 125, 154901 (2006). doi:10.1063/1.2358342
32. Lan Y. H., Wolynes P. G., and Papoian G. A., J. Chem. Phys. 125, 124106 (2006). doi:10.1063/1.2353835
33. Lan Y. H. and Papoian G. A., Phys. Rev. Lett. 98, 228301 (2007). doi:10.1103/PhysRevLett.98.228301
34. Tanase-Nicola S., Warren P. B., and ten Wolde P. R., Phys. Rev. Lett. 97, 068102 (2006). doi:10.1103/PhysRevLett.97.068102
35. Walczak A. M., Sasai M., and Wolynes P. G., Biophys. J. 88, 828 (2005). doi:10.1529/biophysj.104.050666
36. Bortz A. B., Kalos M. H., and Lebowitz J. L., J. Comput. Phys. 17, 10 (1975). doi:10.1016/0021-9991(75)90060-1
37. Gillespie D. T., J. Phys. Chem. 81, 2340 (1977). doi:10.1021/j100540a008
38. Gillespie D. T., Annu. Rev. Phys. Chem. 58, 35 (2007). doi:10.1146/annurev.physchem.58.032806.104637
39. Walczak A. M., Mugler A., and Wiggins C. H., Proc. Natl. Acad. Sci. U.S.A. 106, 6529 (2009). doi:10.1073/pnas.0811999106
40. van Zon J. S. et al., Biophys. J. 91, 4350 (2006). doi:10.1529/biophysj.106.086157
41. van Zon J. S. and ten Wolde P. R., J. Chem. Phys. 123, 234910 (2005). doi:10.1063/1.2137716
42. Zhang J. J., Chen L. N., and Zhou T. S., Biophys. J. 102, 1247 (2012). doi:10.1016/j.bpj.2012.02.001
43. Zhou T. S. and Zhang J. J., SIAM J. Appl. Math. 72, 789 (2012). doi:10.1137/110852887
44. Grima R., J. Chem. Phys. 136, 154105 (2012). doi:10.1063/1.3702848
45. Zechner C. et al., Proc. Natl. Acad. Sci. U.S.A. 109, 8340 (2012). doi:10.1073/pnas.1200161109
46. Gillespie C. S., IET Syst. Biol. 3, 52 (2009). doi:10.1049/iet-syb:20070031
47. Raj A. et al., PLoS Biol. 4, e309 (2006). doi:10.1371/journal.pbio.0040309
48. Paulsson J. and Ehrenberg M., Phys. Rev. Lett. 84, 5447 (2000). doi:10.1103/PhysRevLett.84.5447
49. Slater L. J., Confluent Hypergeometric Functions (Cambridge University Press, Cambridge, 1960).
50. Karmakar R. and Bose I., Phys. Biol. 1, 197 (2004). doi:10.1088/1478-3967/1/4/001
51. Iyer-Biswas S., Hayot F., and Jayaprakash C., Phys. Rev. E 79, 031911 (2009). doi:10.1103/PhysRevE.79.031911
52. Mugler A., Walczak A. M., and Wiggins C. H., Phys. Rev. E 80, 041921 (2009). doi:10.1103/PhysRevE.80.041921
53. Mackey M. C., Tyran-Kaminska M., and Yvinec R., J. Theor. Biol. 274, 84 (2011). doi:10.1016/j.jtbi.2011.01.020
54. Assaf M., Roberts E., and Luthey-Schulten Z., Phys. Rev. Lett. 106, 248102 (2011). doi:10.1103/PhysRevLett.106.248102
55. Suter D. M. et al., Science 332, 472 (2011). doi:10.1126/science.1198817
56. Harper C. V. et al., PLoS Biol. 9, e1000607 (2011). doi:10.1371/journal.pbio.1000607
57. Mao C. et al., Mol. Syst. Biol. 6, 431 (2010). doi:10.1038/msb.2010.83
58. Mariani L. et al., Mol. Syst. Biol. 6, 359 (2010). doi:10.1038/msb.2010.13
59. Miller-Jensen K. et al., Trends Biotechnol. 29, 517 (2011). doi:10.1016/j.tibtech.2011.05.004
60. Hornos J. E. M. et al., Phys. Rev. E 72, 051907 (2005). doi:10.1103/PhysRevE.72.051907
61. Feng H. D., Han B., and Wang J., J. Phys. Chem. B 115, 1254 (2011). doi:10.1021/jp109036y
62. Singh A. and Hespanha J. P., Biophys. J. 96, 4013 (2009). doi:10.1016/j.bpj.2009.02.064
63. Samoilov M., Plyasunov S., and Arkin A. P., Proc. Natl. Acad. Sci. U.S.A. 102, 2310 (2005). doi:10.1073/pnas.0406841102
64. Bishop L. M. and Qian H., Biophys. J. 98, 1 (2010). doi:10.1016/j.bpj.2009.09.055