Published in final edited form as: Phys Rev E Stat Nonlin Soft Matter Phys. 2014 Sep 5;90(3):032702. doi: 10.1103/PhysRevE.90.032702

How input noise limits biochemical sensing in ultrasensitive systems

Bo Hu 1, Wouter-Jan Rappel 2, Herbert Levine 3

Abstract

Many biological processes are regulated by molecular devices that respond in an ultrasensitive fashion to upstream signals. An important question is whether such ultrasensitivity improves or limits a system's ability to read out (noisy) input stimuli. Here, we develop a simple model to study the statistical properties of ultrasensitive signaling systems. We demonstrate that the output sensory noise is always bounded, in contrast to earlier theories based on the small noise approximation, which tend to overestimate the impact of noise in ultrasensitive pathways. Our analysis also shows that the apparent sensitivity of the system is ultimately constrained by the input signal-to-noise ratio. Thus, ultrasensitivity can improve the precision of biochemical sensing only to a finite extent. This corresponds to a new limit for ultrasensitive signaling systems, which is strictly tighter than the Berg-Purcell limit.

I. INTRODUCTION

A wide variety of biological processes are controlled by switchlike sensors that are highly sensitive to specific stimuli. For example, Escherichia coli chemotaxis is driven by multiple flagellar motors, which spin clockwise or counterclockwise under the regulation of CheY-P. Recent experiments revealed that bacterial motors exhibit an ultrasensitive response (with a Hill coefficient of ~10) to CheY-P concentrations [1]. Another example is the mitogen-activated protein kinase (MAPK) cascade, a well-conserved signaling module controlling cell fate decisions [2,3]. For instance, the MAPK pathway in Xenopus oocytes converts the concentration of specific hormones into an all-or-none response (oocyte maturation), with a Hill coefficient of at least 35 as estimated in Ref. [3]. Obviously, this ultrasensitivity allows small changes in the input cues to induce dramatic functional effects. As biochemical signals often fluctuate over time due to inherent stochasticity, signaling noise poses a limit to the capacity of concentration sensing. Does ultrasensitivity help the system to read out the input signal? Or does it amplify the input noise to the extent that it corrupts the precision of concentration measurement? What are the general constraints for biochemical sensing? These are the key questions we attempt to address here.

There has been significant interest in understanding how signaling noise limits the accuracy of biochemical sensing [4–16]. In 1977, Berg and Purcell argued that the physical limit to concentration measurements is set by the random arrival of molecules at target locations [4]: for a single sensor of linear size a, the precision of concentration measurements scales as δc/c ~ 1/√(DacT), where c is the concentration of the molecules interacting with the sensor, D is the diffusion constant of the molecules, and T is the measurement time. The Berg-Purcell limit was later generalized to an array of sensors [5,6], and the precision of biochemical sensing was again found to be limited by the molecular counting noise, independent of the number or the sensitivity of the sensors. More recent studies have extended the problem of concentration sensing to more sophisticated tasks such as spatial and temporal gradient sensing [8–12] and have explored possible mechanisms that beat the Berg-Purcell limit [14,15].

The interplay between ultrasensitivity and noise is intriguing, as small variations in the input may cause large output differences. However, the nonlinearity of ultrasensitive systems makes theoretical progress difficult. Previous studies usually assume that the fluctuations are small, so that one can linearize the input noise in the chemical Langevin equation [17–19] or in the fluctuation-dissipation analysis [20]. This small noise approximation allows for analytical treatment but may not correctly capture the impact of noise or the sensing capacity of ultrasensitive systems. In this paper, we present a simple model consisting of multiple ultrasensitive sensors that measure a (noisy) input signal. We explicitly derive upper and lower bounds for the output sensory noise. In contrast to the additive noise rule derived earlier [17,18,20], our result shows that the output noise is strictly bounded. We further show that the apparent sensitivity of the system is constrained by the input signal-to-noise ratio. As a result, we find a fundamental limit to biochemical sensing for arbitrarily ultrasensitive systems. This new limit is strictly tighter than the Berg-Purcell limit and applies to both Poisson and non-Poisson input signals.

II. MODEL

The input of our model is a biochemical signal, X(t), which fluctuates over time around a mean level. The input fluctuations may arise from, for example, the random birth (synthesis) and death (degradation) of molecules. Without loss of generality, this input can be described by the following Langevin equation [21]:

dX(t)/dt = −[X(t) − c]/τ_x + η(X, t),  (1)

where the parameter τ_x sets the time scale over which the input signal reverts to its mean level c. To prevent X(t) from being negative [21], the stochastic term in Eq. (1) is assumed to satisfy ⟨η(X,t)⟩ = 0 and ⟨η(X,t) η(X′,t′)⟩ = σ² X(t) δ(t − t′). In steady state, the input signal is found to follow the Gamma distribution:

p(X = x) = β^α x^(α−1) e^(−βx) / Γ(α),  (2)

with the shape parameter α ≡ 2c/(τ_x σ²) and the rate parameter β ≡ 2/(τ_x σ²). By Eq. (2), the stationary variance of X(t) is given by σ_x² = c τ_x σ²/2, and thus the Fano factor is simply σ_x²/c = 1/β (the scale parameter). By tuning τ_x or σ, Eq. (1) can be used to describe both Poisson (β = 1) and non-Poisson (β ≠ 1) fluctuations. We also observe that α = βc = c²/σ_x², so the shape parameter α can be interpreted as the signal-to-noise ratio. For most biological systems, it is expected that α ≫ 1 and hence the zero point is inaccessible, i.e., p(X = 0) = 0, by Eq. (2). As a common example, the input signal X(t) may represent the number of molecules diffusing in an open volume. This can be described in our model by setting β = 1 (Poisson noise) such that α = c, which denotes the average number of molecules in the volume.
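
For illustration, the following minimal sketch (not the authors' code; all parameter values are assumptions) integrates Eq. (1) with an Euler-Maruyama scheme and checks that the stationary samples reproduce the Gamma statistics of Eq. (2): mean c, Fano factor 1/β, and signal-to-noise ratio α.

```python
# Minimal sketch (assumed parameters): Euler-Maruyama integration of the input
# Langevin equation (1), followed by a check of the stationary Gamma statistics
# predicted by Eq. (2).
import numpy as np

rng = np.random.default_rng(0)

c, tau_x = 100.0, 10.0                   # mean input level and correlation time
sigma = np.sqrt(2.0 / tau_x)             # chosen so that beta = 1 (Poisson-like Fano factor)
dt, n_steps = 0.01, 1_000_000

beta = 2.0 / (tau_x * sigma**2)          # rate parameter of Eq. (2)
alpha = beta * c                         # shape parameter = signal-to-noise ratio

x = np.empty(n_steps)
x[0] = c
for i in range(1, n_steps):
    drift = -(x[i - 1] - c) / tau_x
    noise = sigma * np.sqrt(max(x[i - 1], 0.0) * dt) * rng.standard_normal()
    x[i] = max(x[i - 1] + drift * dt + noise, 0.0)   # keep X(t) non-negative

xs = x[n_steps // 10:]                   # discard the initial transient
print(f"mean        : {xs.mean():.2f}   (expected {c})")
print(f"Fano factor : {xs.var() / xs.mean():.3f}  (expected 1/beta = {1 / beta:.3f})")
print(f"alpha (SNR) : {xs.mean()**2 / xs.var():.1f}  (expected {alpha:.1f})")
```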

The output layer of our model consists of N identical receptors, which independently bind the chemical ligand and switch between on and off states. As a first step, we assume that these receptors are so close to each other in space that they experience the same local concentration; in this scenario, the effect of ligand diffusion is negligible. Since all the receptors are regulated by the same noisy input signal, they can be correlated. A good example is multiple bacterial motors under the regulation of CheY-P in the same E. coli cell: given the small size of bacteria, it is a reasonable approximation to assume that all the motors experience the same CheY-P signal. In our general model, the switching process is governed by X(t) through the input-dependent transition rates k_+(X) and k_−(X), which may be inferred from experiments. We denote the state of the ith sensor by Y_i(t), which equals 1 (or 0) for the on (or off) state at time t. In many biological sensory systems, the input-output relationship exhibits ultrasensitivity and can be described by a Hill equation. In the deterministic case, this means

lim_{σ_x→0} ⟨Y_i⟩ ≡ f(X = c) = k_+(c)/[k_+(c) + k_−(c)] = c^h/(c^h + K_d^h),  (3)

where h is the Hill coefficient describing the degree of sensitivity and K_d is the dissociation constant at which f(K_d) = 1/2. Equation (3) constrains the possible forms of k_±(X). For example, if the sensor has a constant off rate k_− = 1/(2τ_y), then the on rate has to be k_+(X) = (2τ_y)⁻¹ (X/K_d)^h. This can serve as a coarse-grained model for the cooperative binding of transcription factors to a DNA promoter [19]. Another possible scheme is k_+(X) = τ_y⁻¹ f(X) and k_−(X) = τ_y⁻¹ [1 − f(X)], which is appropriate for modeling the cooperativity of bacterial flagellar motors [22,23]. Different phenomenological forms of k_±(X) have been studied in the model but do not alter our main results or conclusions, provided that the receptor time scale is sufficiently short. For convenience, we only report the results for the case k_+(X) = (2τ_y)⁻¹ (X/K_d)^h and k_− = (2τ_y)⁻¹, which at h = 1 recovers the model we recently solved [21].
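
A minimal simulation sketch of this switching scheme is given below (illustrative parameters; not the authors' code): N sensors are driven by one shared realization of X(t) from Eq. (1), with on rate k_+(X) = (X/K_d)^h/(2τ_y) and constant off rate k_− = 1/(2τ_y).

```python
# Minimal sketch (assumed parameters): N two-state sensors driven by one shared
# noisy input X(t), with on rate k_+(X) = (X/Kd)^h / (2*tau_y) and constant
# off rate k_- = 1/(2*tau_y), as in the scheme adopted above.
import numpy as np

rng = np.random.default_rng(1)

c, tau_x = 100.0, 10.0
sigma = np.sqrt(2.0 / tau_x)             # beta = 1 (Poisson-like input noise)
Kd, h, tau_y, N = 100.0, 5, 1.0, 2
dt, n_steps = 0.01, 200_000

x = c
y = np.zeros(N, dtype=int)               # sensor states: 0 = off, 1 = on
on_time = np.zeros(N)

for _ in range(n_steps):
    # shared input update (Euler-Maruyama of Eq. (1))
    x += -(x - c) / tau_x * dt + sigma * np.sqrt(max(x, 0.0) * dt) * rng.standard_normal()
    x = max(x, 0.0)
    # switching probabilities over one time step dt
    p_on = 1.0 - np.exp(-((x / Kd) ** h / (2.0 * tau_y)) * dt)
    p_off = 1.0 - np.exp(-dt / (2.0 * tau_y))
    u = rng.random(N)
    y = np.where(y == 0, (u < p_on).astype(int), (u >= p_off).astype(int))
    on_time += y * dt

print("mean on-state occupancy per sensor:", on_time / (n_steps * dt))
```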

In the deterministic case (σ_x → 0), each sensor follows a simple telegraph process, whose autocorrelation time equals 1/[k_+(X) + k_−] = τ_y at X = K_d. Thus, τ_y sets the sensor's time scale and is assumed to be much shorter than the input correlation time, τ_x. Only in this scenario can the (slow) input noise appreciably affect the switching statistics of the (fast) sensors [17,18,21–24]. The presence of input noise will change the observed input-output relationship, making the average output ⟨Y_i⟩ deviate from f(⟨X⟩ = c), as shown in Fig. 1(a). Nonetheless, the input-output relationship can still be approximated by a new Hill equation:

⟨Y_i⟩ ≈ f̃(c) ≡ c^h̃/(c^h̃ + K_d^h̃),  (4)

where the apparent Hill coefficient h̃ is the observed sensitivity, which is smaller than the intrinsic Hill coefficient h. Numerically, it is found that the value of h̃ decreases with the Fano factor σ_x²/c [Fig. 1(a)]. For a given Fano factor (fixed β), h̃ increases with h but tends to saturate as h → ∞. We will return to explore this issue later on.

FIG. 1. (Color online) (a) The input-output relationship ⟨Y_i⟩ versus ⟨X⟩ = c. The output ⟨Y_i⟩ obtained from simulations (symbols) can be approximated by the Hill function f̃(c), with the apparent Hill coefficient h̃ depending on the Fano factor σ_x²/c. (b) The distribution of Z_T in a simulation with τ_y = 1, τ_x = 10, and T = 100. The solid red line is the fit of a Gaussian distribution (with standard deviation 0.11).
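
In the fast-sensor limit (τ_y ≪ τ_x), ⟨Y_i⟩ ≈ ⟨f(X)⟩ with X drawn from the stationary distribution of Eq. (2), so the apparent Hill coefficient can be estimated by fitting a Hill function to this average. The sketch below does exactly that; the grid of concentrations, sample sizes, and fitting routine are assumptions, not the procedure used to produce Fig. 1.

```python
# Minimal sketch (fast-sensor limit, tau_y << tau_x; assumed grid and sample
# sizes): estimate <Y_i> = <f(X)> with X drawn from the stationary Gamma
# distribution of Eq. (2), then fit a Hill function to extract h_tilde.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)

Kd, h, beta = 100.0, 10, 1.0             # intrinsic Hill coefficient, input rate parameter
c_grid = np.linspace(50.0, 200.0, 40)    # mean input levels to probe

def hill(c, h_app, K):
    return c**h_app / (c**h_app + K**h_app)

mean_out = []
for c in c_grid:
    x = rng.gamma(shape=beta * c, scale=1.0 / beta, size=200_000)  # stationary input samples
    mean_out.append(np.mean(x**h / (x**h + Kd**h)))                # <f(X)> in the fast-sensor limit

(h_app, K_app), _ = curve_fit(hill, c_grid, np.array(mean_out), p0=[float(h), Kd],
                              bounds=([1.0, 10.0], [50.0, 500.0]))
print(f"intrinsic h = {h}; fitted apparent Hill coefficient h_tilde ~ {h_app:.2f}, K ~ {K_app:.1f}")
```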

III. RESULTS

A. Bounds on the output noise

Many biological sensing functions can be translated into the task of inferring the mean input level c from the observable output signal {Y1(t), …, YN(t)} over a finite time T. The statistical quantity of interest is:

Z_T ≡ (1/T) ∫₀ᵀ [ (1/N) Σ_{i=1}^N Y_i(t) ] dt,  (5)

which is approximately Gaussian for large T [Fig. 1(b)], so we need to consider only its first two moments. As the sensors are identical, we expect ⟨Z_T⟩ = ⟨Y_i⟩ = f̃(c), with the single-sensor variance σ_y² ≡ ⟨δY_i δY_i⟩ = f̃(c)[1 − f̃(c)]. As different sensors experience the same input fluctuations, their switching events are correlated. We define the covariance σ_f² ≡ ⟨δY_i(t) δY_j(t)⟩ for i ≠ j and the correlation coefficient ρ ≡ σ_f²/σ_y². With the notation Δt = t − t′, we can evaluate the variance of Z_T as follows,

σ_{z,T}² ≡ (1/(T²N²)) ∫₀ᵀ dt ∫₀ᵀ dt′ Σ_{i=1}^N Σ_{j=1}^N ⟨δY_i(t) δY_j(t′)⟩ = (σ_y²/(N T²)) ∫₀ᵀ ∫₀ᵀ [A_y(Δt) + (N − 1) C_y(Δt)] dt dt′ ≈ (1/N) σ_y² ξ_y^T + [(N − 1)/N] σ_f² ξ_x^T,  (6)

where A_y(Δt) ≡ ⟨δY_i(t) δY_i(t′)⟩/σ_y² and C_y(Δt) ≡ ⟨δY_i(t) δY_j(t′)⟩/σ_y² denote the normalized output autocorrelation and cross-correlation functions, and A_x(Δt) is the normalized autocorrelation function of the input X(t). In the last step of Eq. (6), we have introduced the following time-averaging factors:

ξ_x^T ≡ (1/T²) ∫₀ᵀ dt ∫₀ᵀ dt′ A_x(t − t′),  (7)
ξ_y^T ≡ (1/T²) ∫₀ᵀ dt ∫₀ᵀ dt′ A_y(t − t′),  (8)

and used the approximation C_y(Δt) ≈ ρ·A_x(Δt), with the insight that the output sensors should become uncorrelated as the input signal loses memory over time. This approximation has been verified by simulating the time traces of two sensors and calculating their correlation functions [Fig. 2(a)]. The correlation coefficient ρ between the output traces increases with the intrinsic sensitivity h and the relative noise level σ_x/c [Fig. 2(b)]. This can be seen from the assumption that all the receptors see the same slowly fluctuating input, together with the small noise approximation, ⟨δY_i(t) δY_j(t)⟩ = ⟨f(X)²⟩ − ⟨f(X)⟩² ≈ [f′(c)]² σ_x², which leads to

ρ = σ_f²/σ_y² ≈ [f′(c)]² σ_x²/σ_y² = {h² f(c)² [1 − f(c)]²/σ_y²} (σ_x/c)².  (9)

FIG. 2. (Color online) (a) The correlation functions A_x, A_y, and C_y versus the time lag Δt. Parameters used in this simulation: h = 5, c = K_d, σ_x/c = 0.2, τ_y = 1, and τ_x = 10. (b) The correlation coefficient ρ versus the Hill coefficient h for different levels of input noise σ_x/c. (c) A_y < A_x, regardless of h. (d) A_y < A_x, regardless of σ_x/c.

Accordingly, Eq. (6) may be approximated as

σ_{z,T}² ≈ (σ_y²/N) ξ_y^T + [(N − 1)/N] h² f(c)² [1 − f(c)]² (σ_x/c)² ξ_x^T,  (10)

which is similar to the additive noise rule obtained in the previous literature [17,18,20]. However, it is worth noting that ρ is strictly bounded by one and therefore cannot always scale as h² or (σ_x/c)², as shown in Fig. 2(b). Thus, Eq. (10) tends to overestimate the effect of input noise in ultrasensitive systems and even diverges as h → ∞.
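
A quick numerical check makes this point concrete: plugging f(c) = 1/2 (i.e., c = K_d) into Eq. (9) gives ρ ≈ (h²/4)(σ_x/c)², which already exceeds unity for moderate h. The snippet below (with an assumed noise level) simply tabulates this small-noise estimate.

```python
# Quick illustration (assumed noise level): the small-noise estimate of rho from
# Eq. (9) at c = Kd grows as h^2 and soon exceeds 1, whereas a correlation
# coefficient is bounded by 1 -- hence Eq. (10) overestimates the input noise.
f = 0.5                                  # f(c) at c = Kd
sigma_y2 = f * (1.0 - f)
rel_noise = 0.2                          # assumed sigma_x / c

for h in (2, 5, 10, 20, 50):
    rho_small_noise = h**2 * f**2 * (1.0 - f)**2 / sigma_y2 * rel_noise**2
    print(f"h = {h:3d}: small-noise rho = {rho_small_noise:6.2f} (true rho is capped at 1)")
```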

The input X(t) has an exponential autocorrelation function, A_x(Δt) = e^(−|Δt|/τ_x), with correlation time τ_x. This allows us to calculate that

ξ_x^T = (2τ_x/T) [1 − (τ_x/T)(1 − e^(−T/τ_x))].  (11)

For T ≫ τ_x, we have ξ_x^T ≈ 2τ_x/T. Though there may be no analytical expression for ξ_y^T, we can find its bounds. As shown in Fig. 2(a), the correlation functions satisfy a general inequality: C_y(Δt) < A_y(Δt) < A_x(Δt) for any time lag Δt. It is intuitive that the autocorrelation should always be larger than the cross correlation (i.e., A_y > C_y). The relation A_y < A_x follows from the condition τ_y ≪ τ_x and is valid regardless of the sensitivity h or the noise level σ_x/c [Figs. 2(c)–2(d)]: A_y is dominated by the sensor's intrinsic time τ_y over short time scales and by the input correlation time τ_x over long time scales; for Δt ≫ τ_y, A_y(Δt) decays exponentially at the same rate as A_x does [Fig. 2(a)]. Since C_y ≈ ρ·A_x, the above inequality implies that ρ·A_x < A_y < A_x and therefore ρ·ξ_x^T < ξ_y^T < ξ_x^T. As a result, the variance σ_{z,T}² in Eq. (6) must satisfy

σ_y² ξ_x^T ρ < σ_{z,T}² < σ_y² ξ_x^T [1 + (N − 1)ρ]/N.  (12)

This is one of our key results for biochemical sensing. Intuitively, as correlations between receptors arise only from the input fluctuations, the total noise of the sensors should be limited by the input correlation property (ξ_x^T). Moreover, the more correlated the receptors are (i.e., the larger ρ), the less efficient they are at averaging out the output noise. In the absence of input noise (ρ = 0), Eq. (12) reduces to 0 < σ_{z,T}² < σ_y² ξ_x^T/N, indicating that σ_{z,T}² could be arbitrarily small as N → ∞. However, the presence of input noise induces correlations between the sensors and thus reduces the capability of population averaging, leading to a lower bound on the output noise. As ρ → 1, all the sensors perfectly synchronize their switching, and Eq. (12) suggests that σ_{z,T}² ≈ σ_y² ξ_x^T, independent of N. We can define the effective number of sensors,

N_eff ≡ N/[1 + (N − 1)ρ],  (13)

which increases with N but saturates (N_eff ≤ 1/ρ). Obviously, the lower bound in Eq. (12) is achieved as N → ∞. To verify the upper bound, we numerically generate thousands of sample points of Z_T for various parameter sets. The sample variance σ_{z,T}² for different N, h, or σ_x/c is indeed within the theoretical bounds and tends to saturate as the input noise increases (Fig. 3).

FIG. 3. (Color online) (a) σ_{z,T}² versus σ_x/c for a single sensor with h = 5 and h = 10. In these simulations, we used τ_y = 1, τ_x = 10, and T = 100. By Eq. (12), the upper bound of σ_{z,T}² for N = 1 is σ_y² ξ_x^T ≈ 0.05 at c = K_d. (b) σ_{z,T}² versus σ_x/c for two sensors (N = 2) with h = 10 at c = K_d. Here, the upper and lower bounds of σ_{z,T}² are σ_y² ξ_x^T (1 + ρ)/2 and σ_y² ξ_x^T ρ, respectively. Both bounds (dotted lines) depend on the correlation coefficient ρ, which is shown in Fig. 2(b).
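
A verification of this kind can be reproduced with the direct Monte Carlo sketch below (a simplified illustration with assumed discretization, not the simulation code used for Fig. 3): many realizations of Z_T are generated for N = 2 sensors sharing one input, and the sample variance is compared with the bounds of Eq. (12), using ξ_x^T from Eq. (11) and ρ estimated from the equal-time statistics of the traces.

```python
# Monte Carlo sketch (assumed discretization; not the paper's production code):
# sample many realizations of Z_T for N = 2 sensors sharing one noisy input and
# compare the sample variance with the bounds of Eq. (12).  xi_x^T follows
# Eq. (11); rho and sigma_y^2 are estimated from the equal-time trace statistics.
import numpy as np

rng = np.random.default_rng(3)

c, tau_x = 100.0, 10.0
sigma = np.sqrt(2.0 / tau_x)             # beta = 1 (Poisson-like input noise)
Kd, h, tau_y, N = 100.0, 10, 1.0, 2
dt, T, n_rep = 0.01, 100.0, 200
n_steps = int(T / dt)

def run_replicate():
    """One realization of the shared input and the N sensor traces."""
    x = c
    y = (rng.random(N) < 0.5).astype(int)
    z_sum = 0.0
    states = np.empty((n_steps, N), dtype=int)
    for i in range(n_steps):
        x += -(x - c) / tau_x * dt + sigma * np.sqrt(max(x, 0.0) * dt) * rng.standard_normal()
        x = max(x, 0.0)
        p_on = 1.0 - np.exp(-((x / Kd) ** h / (2.0 * tau_y)) * dt)
        p_off = 1.0 - np.exp(-dt / (2.0 * tau_y))
        u = rng.random(N)
        y = np.where(y == 0, (u < p_on).astype(int), (u >= p_off).astype(int))
        states[i] = y
        z_sum += y.mean()
    return z_sum / n_steps, states

z_samples, var_sum, cov_sum = [], 0.0, 0.0
for _ in range(n_rep):
    z, s = run_replicate()
    z_samples.append(z)
    dev = s - s.mean(axis=0)
    var_sum += (dev**2).mean()                       # ~ sigma_y^2
    cov_sum += (dev[:, 0] * dev[:, 1]).mean()        # ~ sigma_f^2

sigma_y2 = var_sum / n_rep
rho = (cov_sum / n_rep) / sigma_y2
xi_x = 2 * tau_x / T * (1 - tau_x / T * (1 - np.exp(-T / tau_x)))   # Eq. (11)

lower = sigma_y2 * xi_x * rho
upper = sigma_y2 * xi_x * (1 + (N - 1) * rho) / N
print(f"rho ~ {rho:.2f};  bounds: {lower:.4f} < var(Z_T) = {np.var(z_samples):.4f} < {upper:.4f}")
```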

We can derive an intuitive understanding of Eq. (12) by considering the limit τy → 0, under which the sensors switch extremely fast such that the state of each sensor at any time can be regarded as a Bernoulli random variable:

Y_i(t) = 1 with probability f(X(t)), and Y_i(t) = 0 with probability 1 − f(X(t)),  (14)

where f(X(t)) denotes the instantaneous probability of finding the ith sensor in the on state at time t. In this case, ρ = σ_f²/σ_y² represents the instantaneous correlation coefficient between two sensors. The variance of Z(t) ≡ (1/N) Σ_{i=1}^N Y_i(t) is found to be:

σ_z² = (1/N) σ_y² + [(N − 1)/N] σ_f² = σ_y² [1 + (N − 1)ρ]/N.  (15)

As the correlation time τ_z of Z(t) approaches τ_x when τ_y → 0, we have σ_{z,T}² ≈ σ_z² · 2τ_z/T ≈ σ_z² ξ_x^T, which recovers the upper bound in Eq. (12).
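
This fast-switching limit is easy to check numerically; the sketch below (assumed parameters) draws stationary input samples from Eq. (2), generates Bernoulli sensor states as in Eq. (14), and compares the sampled variance of Z with Eq. (15).

```python
# Minimal sketch (assumed parameters): in the tau_y -> 0 limit, each sensor is a
# Bernoulli variable with success probability f(X), Eq. (14); check the
# instantaneous variance of Z against Eq. (15) by direct sampling.
import numpy as np

rng = np.random.default_rng(4)

Kd, h, beta, c, N = 100.0, 10, 1.0, 100.0, 5
n_samples = 500_000

x = rng.gamma(shape=beta * c, scale=1.0 / beta, size=n_samples)   # stationary input, Eq. (2)
f = x**h / (x**h + Kd**h)
y = rng.random((n_samples, N)) < f[:, None]                        # Bernoulli sensors, Eq. (14)
z = y.mean(axis=1)

sigma_y2 = f.mean() * (1.0 - f.mean())    # single-sensor variance
sigma_f2 = f.var()                        # sensor-sensor covariance from the shared input
rho = sigma_f2 / sigma_y2
predicted = sigma_y2 * (1.0 + (N - 1) * rho) / N                   # Eq. (15)
print(f"var(Z) sampled = {z.var():.4f},  Eq. (15) prediction = {predicted:.4f}")
```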

B. Limit on the apparent sensitivity

As we mentioned before, in the presence of input noise, the apparent Hill coefficient (h̃) increases with the intrinsic Hill coefficient (h) and tends to saturate as h → ∞. Indeed, there exists a limit on h̃, denoted by h_∞, which can be obtained by taking the limits h → ∞ and τ_y → 0. In this scenario, every sensor behaves as an indicator function:

Y_i(t) = 1 if X(t) ≥ K_d, and Y_i(t) = 0 if X(t) < K_d.  (16)

The expectation of Yi is simply the tail distribution of the gamma probability density Eq. (2), i.e.,

⟨Y_i⟩ ≡ f_∞(c) = [β^α/Γ(α)] ∫_{K_d}^∞ x^(α−1) e^(−βx) dx ≡ Γ(α, βK_d),  (17)

where Γ(α, βK_d) denotes the regularized (upper) incomplete gamma function. Given an input distribution (fixed α and β), one can change K_d to probe the input mean level, c = α/β, with a sensitivity given by

|∂Γ(α, βK_d)/∂K_d| = e^(−βK_d) (βK_d)^α / [K_d Γ(α)] = p(X = K_d).  (18)

This function reaches its maximum at K_d = c − 1/β. Therefore, for α = βc ≫ 1, the sensor is most sensitive near c = K_d. Alternatively, for a given indicator sensor with fixed K_d, we can change c (with fixed β) to examine at which level of c the sensor is most sensitive to the input signal. By symmetry, the maximum sensitivity should be achieved around c* = K_d + 1/β. As shown in Fig. 4(a), the input-output relationship f_∞(c) = Γ(βc, βK_d) can be approximated by a Hill equation, c^(h_∞)/(c^(h_∞) + K_d^(h_∞)), with the limit Hill coefficient given by

h_∞ = 4 |∂Γ(βc, βK_d)/∂log(K_d)|_{c=c*} = 4 e^(−βK_d) (βK_d)^(βK_d) / Γ(βK_d).  (19)

FIG. 4. (Color online) (a) The input-output relationship, ⟨Y_i⟩ = Γ(βc, βK_d), can be approximated by the Hill equation, c^(h_∞)/(c^(h_∞) + K_d^(h_∞)), with h_∞ given by Eq. (19). (b) The limit Hill coefficient, h_∞, as a function of the signal-to-noise ratio α* = βc* = (c*/σ_x)² at c* = K_d + 1/β for fixed β = 1.

Clearly, h_∞ is determined by both the rate parameter β of the input distribution and the location parameter K_d of the sensor [Fig. 4(b)]. The factor of 4 in Eq. (19) arises from the fact that the maximum slope of a Hill equation (with Hill coefficient h) with respect to log c is h/4. The input signal-to-noise ratio at c* is α* = βc* = (c*/σ_x)², so βK_d = βc* − 1 = α* − 1. Using the asymptotic formula Γ(x) ≈ √(2π) e^(−x) x^(x−1/2) for x ≫ 1, we can rewrite Eq. (19) as

h_∞ ≈ 4 √(βK_d/(2π)) = √(8(α* − 1)/π),  (20)

which increases with the signal-to-noise ratio α* of the input near c = K_d [Fig. 4(b)]. The asymptotic scaling h_∞ ~ √α* = c*/σ_x implied by Eq. (20) shows that h_∞ is constrained by the relative intensity of the input noise.
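
These relations are straightforward to verify numerically. The sketch below (using scipy, with β = 1 and K_d = 100 as in the figures; the fitting grid is an assumption) evaluates Eq. (19) in log-space, compares it with the asymptotic form of Eq. (20), and fits a Hill function to the indicator-sensor response Γ(βc, βK_d).

```python
# Numerical check (assumed fitting grid; beta = 1 and Kd = 100 as in the figures):
# evaluate the limit Hill coefficient h_inf of Eq. (19), compare it with the
# asymptotic form of Eq. (20), and fit a Hill function to the indicator-sensor
# response Gamma(beta*c, beta*Kd).
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import gammaincc, gammaln

beta, Kd = 1.0, 100.0
a = beta * Kd

h_inf = 4.0 * np.exp(-a + a * np.log(a) - gammaln(a))    # Eq. (19), log-space to avoid overflow
h_inf_asym = np.sqrt(8.0 * a / np.pi)                    # Eq. (20), with alpha* - 1 = beta*Kd

c_grid = np.linspace(70.0, 140.0, 60)
response = gammaincc(beta * c_grid, beta * Kd)           # <Y_i> = Gamma(beta*c, beta*Kd)

def hill(c, h, K):
    return c**h / (c**h + K**h)

(h_fit, K_fit), _ = curve_fit(hill, c_grid, response, p0=[10.0, Kd],
                              bounds=([1.0, 50.0], [60.0, 200.0]))
print(f"h_inf (Eq. 19) = {h_inf:.2f}, asymptotic (Eq. 20) = {h_inf_asym:.2f}, Hill fit = {h_fit:.2f}")
```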

The above analysis suggests that, for any ultrasensitive sensor, the apparent sensitivity read from the input-output response is always bounded by the limit Hill coefficient, i.e., h̃ < h_∞. We have numerically tested this inequality in various parameter regimes and found that h̃ < h_∞ holds in general [Fig. 5(a)]. It is also worth remarking that the above argument and conclusion do not depend on how we model the input process. The general limit h̃ < h_∞ can be similarly obtained when assuming a different input distribution (e.g., a Poisson distribution).

FIG. 5. (Color online) (a) h̃ versus h for two different values of β: β = 1 (solid red line) and β = 1/3 (dashed blue line). (b) δc/c versus h at c = K_d for β = 1 (solid red line) and β = 1/3 (dashed blue line). The dotted lines correspond to the limiting values of h̃ in (a) and δc/c in (b) as h → ∞. We have used K_d = 100, τ_y = 1, τ_x = 10, and T = 100 in the numerical simulations.

C. General limit to biochemical sensing

The mean input level c is encoded in the output sensory signal. By the law of large numbers, the statistic Z_T should converge to Z̄_T = f̃(c) ≡ c^h̃/(c^h̃ + K_d^h̃). Thus, a good estimator of c can be found by inverting Z_T = f̃(c). When the integration time T is sufficiently large, the output noise σ_{z,T}² is small enough that one can use the error propagation formula to examine the accuracy (δc/c) of this estimator. By further using Eq. (12), we have

ξ_x^T ρ/(h̃² σ_y²) < (δc/c)² = σ_{z,T}² c⁻² [df̃/dc]⁻² < ξ_x^T/(h̃² σ_y² N_eff),  (21)

where σ_y² ≡ (cK_d)^h̃/(c^h̃ + K_d^h̃)². Equation (21) shows that ultrasensitivity (h̃) could help the system read out the input level. However, this effect is limited by the constraint h̃ < h_∞. Interestingly, increasing the intrinsic Hill coefficient h increases the apparent sensitivity (h̃), which tends to reduce the sensing error; but when the input fluctuates, it also increases the correlations between receptors (ρ) and hence decreases the effective number of sensors (N_eff), which tends to raise the sensing error. This tradeoff demonstrates the interplay between the sensitivity and the efficiency of sensors when the input is noisy. From numerical experiments, it is also found that the sensing error overall decreases with increasing h but saturates as h → ∞, as can be seen in Fig. 5(b).

Is there some fundamental limit to biochemical sensing? If the input is directly observable and we estimate the mean concentration from the sample X(t) over a time window T, then the total variance is σ_{x,T}² = σ_x² ξ_x^T and the sensing limit is (δc/c)² = σ_{x,T}²/c² = ξ_x^T (σ_x/c)². If X(t) is not directly observable and the mean level c has to be inferred from some output, then the actual sensing error must be larger than this sensing limit, i.e.,

(δc/c)² > (σ_x/c)² ξ_x^T = ξ_x^T/(βc).  (22)

This inequality sets the lower bound for the measurement error in biological sensing systems. This lower bound is solely determined by the input properties (σ_x, c, and ξ_x^T). In fact, the general inequality Eq. (22) easily recovers the Berg-Purcell limit. When X(t) refers to the number of molecules diffusing (with diffusion coefficient D) in an open volume of radius R, the input correlation time is [4] τ_x = 2R²/(5D), which is the typical time for the molecules within the volume to be renewed by diffusion. For T ≫ τ_x, we again have ξ_x^T ≈ 2τ_x/T. The number of molecules in this volume is a Poisson random variable, satisfying (σ_x/c)² = 1/c, where c ≡ 4πR³c̄/3 is the average number of molecules in the volume, with c̄ denoting the mean concentration. Plugging all these results into Eq. (22) leads to the following inequality,

(δc/c)² > [3/(4πR³c̄)]·[4R²/(5DT)] = 3/(5πDRc̄T),  (23)

which recovers the Berg-Purcell limit for the perfect monitor that counts the number of molecules inside its own volume [4]. According to Eq. (23), no matter how many receptors are used or how sensitive they are, the sensing error is always bounded from below by the counting noise. In fact, one can view Eq. (22) as a generalized form of the Berg-Purcell limit, as it applies to both Poisson and non-Poisson processes.
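
For orientation, the bound in Eq. (23) can be evaluated directly. The numbers below are purely illustrative assumptions (they are not values taken from the paper) and simply show the order of magnitude involved.

```python
# Illustrative evaluation of the Berg-Purcell-type bound, Eq. (23).  The numbers
# below are assumptions chosen only to show the order of magnitude; they are not
# values used in the paper.
import numpy as np

D = 10.0       # diffusion coefficient, um^2/s (assumed)
R = 1.0        # radius of the monitoring volume, um (assumed)
c_bar = 10.0   # mean concentration, molecules/um^3 (assumed)
T = 1.0        # integration time, s (assumed)

n_bar = 4.0 * np.pi * R**3 * c_bar / 3.0            # mean number of molecules in the volume
tau_x = 2.0 * R**2 / (5.0 * D)                      # diffusive renewal (correlation) time
rel_var = 3.0 / (5.0 * np.pi * D * R * c_bar * T)   # Eq. (23): lower bound on (delta_c/c)^2

print(f"mean copy number ~ {n_bar:.1f}, tau_x ~ {tau_x * 1e3:.0f} ms")
print(f"relative error bound: delta_c/c > {np.sqrt(rel_var) * 100:.1f}%")
```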

Unlike the perfect monitor, an ultrasensitive sensor usually responds only to a narrow range of the input signal. Such sensors are able to provide accurate measurements near K_d but become insensitive to stimuli away from K_d. The Berg-Purcell limit does not capture this locally sensitive property, whereas our Eq. (21) does. One can obtain valuable insight by considering the extreme scenario where each sensor behaves as the indicator function of Eq. (16). In this limit, we have ρ → 1, N_eff → 1, and h̃ → h_∞, such that Eq. (21) becomes

(δc/c)² = ξ_x^T/(h_∞² σ_∞²),  (24)

where σ_∞² ≡ Γ(βc, βK_d)[1 − Γ(βc, βK_d)]. For βc ≫ 1, we have σ_∞² ≈ 1/4 at c = K_d.

Comparing Eq. (24) with Eq. (22), one can see that if h_∞ were larger than 2√(βK_d), then the sensing limit given by Eq. (24) would beat the fundamental limit Eq. (22) at c = K_d. However, using Eq. (19), one can verify that the inequality h_∞ < 2√(βK_d) always holds for βK_d > 1. By Eq. (20), we have

2√(βK_d)/h_∞ ≈ √(π/2) ≈ 1.2533.  (25)

In other words, the sensing error (δc/c) of an indicator sensor near K_d is roughly 25% higher than the sensing limit of a perfect monitor. This can be understood as follows: the information gained by an infinitely sensitive receptor (h → ∞) is still less than the information collected by the perfect monitor, which records everything.

The above arguments lead to a new sensing limit near Kd for ultrasensitive systems:

(δc/c)² > ξ_x^T/(h_∞² σ_∞²) ≈ π ξ_x^T / {8 βK_d Γ(βc, βK_d)[1 − Γ(βc, βK_d)]}.  (26)

Figure 6 gives a numerical comparison of the lower bound on δc/c in Eq. (26) with that in Eq. (22). One can see that our newly derived sensing limit (dashed red line) is strictly larger than the Berg-Purcell limit (solid blue line). Interestingly, the presence of input noise helps increase the dynamic range of sensing: without the input noise, the ultrasensitive receptor would be very precise at c = K_d but almost useless elsewhere; the presence of noise effectively increases the accuracy of signaling in the neighborhood of K_d. We also plot in Fig. 6 the upper bound (dotted green line) on δc/c given by Eq. (21) for a single sensor (N = 1) with the apparent Hill coefficient h̃. The key implication of the above theoretical arguments is that, for a sensor with known K_d and apparent sensitivity h̃, its accuracy of biochemical measurements near K_d should be bounded as follows:

√(ξ_x^T)/(h_∞ σ_∞) < δc/c < √(ξ_x^T)/(h̃ σ_y).  (27)

FIG. 6. (Color online) A numerical comparison between the lower bound (solid blue line) in Eq. (22) and the lower bound (dashed red line) in Eq. (26). The dotted green line corresponds to the upper bound of δc/c in Eq. (21) for a single sensor (N = 1) with h̃ = 10. The black symbols represent the Monte Carlo simulation results of δc/c for a sensor with apparent sensitivity h̃ ≈ 10 under Poisson noise (i.e., β = 1) at different mean concentration levels. The sensing error δc/c was calculated using the error propagation formula in Eq. (21). These black symbols fall between the bounds, as indicated by Eq. (27). Other parameters used here: K_d = 100, τ_y = 1, τ_x = 10, and T = 100.

The above simple inequality has been tested by numerical simulations (black symbols in Fig. 6).
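
A sketch of the comparison in Fig. 6 is given below, using the parameters quoted in its caption (K_d = 100, τ_x = 10, T = 100, β = 1, h̃ = 10); the specific grid of concentrations is an arbitrary choice. It evaluates the Berg-Purcell-type bound of Eq. (22), the ultrasensitive limit of Eq. (26), and the single-sensor upper bound from Eq. (21).

```python
# Sketch of the comparison shown in Fig. 6 (parameters from its caption: Kd = 100,
# tau_x = 10, T = 100, beta = 1, apparent Hill coefficient h_tilde = 10; the grid
# of concentrations is an arbitrary choice).
import numpy as np
from scipy.special import gammaincc

beta, Kd, tau_x, T, h_tilde = 1.0, 100.0, 10.0, 100.0, 10.0
xi_x = 2 * tau_x / T * (1 - tau_x / T * (1 - np.exp(-T / tau_x)))   # Eq. (11)

for c in (90.0, 95.0, 100.0, 105.0, 110.0):
    bp = np.sqrt(xi_x / (beta * c))                                  # Eq. (22), Berg-Purcell-type bound
    Q = gammaincc(beta * c, beta * Kd)
    ultra = np.sqrt(np.pi * xi_x / (8 * beta * Kd * Q * (1 - Q)))    # Eq. (26), ultrasensitive limit
    sigma_y = np.sqrt((c * Kd) ** h_tilde) / (c**h_tilde + Kd**h_tilde)
    upper = np.sqrt(xi_x) / (h_tilde * sigma_y)                      # Eq. (21) upper bound, N = 1
    print(f"c = {c:5.0f}:  Eq.(22) {bp:.3f}   Eq.(26) {ultra:.3f}   Eq.(21) upper {upper:.3f}")
```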

IV. CONCLUSION

In conclusion, we have presented both analytical and numerical results for biochemical sensing in ultrasensitive systems. First, we demonstrate that biochemical noise does not accumulate additively but is instead bounded, owing to the noise-induced correlations between ultrasensitive sensors. This contrasts with previous results based on the linear-noise approximation, which tend to exaggerate the impact of the input noise. Second, we show that the input noise also constrains the apparent sensitivity of the system. Thus, although ultrasensitivity is able to improve the sensing accuracy, it cannot beat the physical limit set by the molecular counting noise. Finally, our analysis leads to a sensing limit that is strictly tighter than the Berg-Purcell limit and more appropriate for ultrasensitive signaling systems. In sum, the results presented in this paper provide insights into the interplay between noise and ultrasensitivity in biochemical signal detection.

Acknowledgments

We would like to thank Yuhai Tu, David A. Kessler, and two anonymous referees for valuable comments during the earlier stages of this work. This work was supported by the National Science Foundation (NSF) Grant No. DMS-1068869, by the National Institutes of Health (P01 GM078586), and by the NSF Center for Theoretical Biological Physics (Grant No. NSF PHY-1308264). H.L. was also supported by the CPRIT Scholar program of the State of Texas.

References
