Abstract
The spatial direct method with gradient-based diffusion is an accelerated stochastic reaction-diffusion simulation algorithm that treats diffusive transfers between neighboring subvolumes based on concentration gradients. This recent method achieved a marked improvement in simulation speed and reduction in the number of time-steps required to complete a simulation run, compared with the exact algorithm, by sampling only the net diffusion events, instead of sampling all diffusion events. Although the spatial direct method with gradient-based diffusion gives accurate means of simulation ensembles, its gradient-based diffusion strategy results in reduced fluctuations in populations of diffusive species. In this paper, we present a new improved algorithm that is able to anticipate all possible microscopic fluctuations due to diffusive transfers in the system and incorporate this information to retain the same degree of fluctuations in populations of diffusing species as the exact algorithm. The new algorithm also provides a capability to set the desired level of fluctuation per diffusing species, which facilitates adjusting the balance between the degree of exactness in simulation results and the simulation speed. We present numerical results that illustrate the recovery of fluctuations together with the accuracy and efficiency of the new algorithm.
INTRODUCTION
Many cellular processes and biological signaling machinery involve different molecular species whose populations change over time as they react and diffuse within a spatial domain.1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14 In biological cells where molecular populations are small, individual reaction events can lead to significant changes in populations of reactant species, which in turn amplifies their intrinsic stochastic effects.15, 16, 17, 18, 19, 20, 21 Intracellular diffusion of molecular species can also have stochastic effects due to their heterogeneous distribution and localization within the cell as well as their interactions with downstream signaling molecules.1, 2, 5, 7, 8, 9, 14, 22
A stochastic simulation approach enables a computational study of stochastic events arising from the small numbers and heterogeneous distribution of molecular species within the reaction-diffusion system. Gillespie's stochastic simulation algorithm (SSA) is an exact method that numerically simulates the time evolution of integer numbers of molecular populations in a well-stirred (i.e., spatially homogeneous) chemically reacting system in a thermal equilibrium.23, 24 The SSA has been extended to a spatially non-homogeneous domain to incorporate diffusion by dividing the spatial domain into a set of well-stirred subvolumes and treating diffusion jumps between neighboring subvolumes as a set of first-order reactions.25, 26, 27
The spatial SSA bridges the gap between the traditional deterministic formulation and the microscopic Brownian dynamics formulation of reaction-diffusion systems. The deterministic formulation uses partial differential equations and represents molecular populations by continuously varying and spatially dependent molecular concentrations.28, 29 The Brownian dynamics formulation tracks individual diffusing molecules and effects a reaction event when reactant species collide.30, 31, 32, 33, 34 The deterministic formulation, which is valid in the thermodynamic limit of large numbers of molecules, cannot account for molecular discreteness and stochasticity. The Brownian dynamics formulation can depict the trajectory of a reaction-diffusion system at a microscopic scale, but becomes computationally intractable when simulating a system with more than 10⁶ molecules.33, 35, 36
A simulation run of the spatial SSA describes a spatiotemporal trajectory of the system subvolumes’ molecular populations over the simulation time period of the corresponding reaction-diffusion master equation (RDME).37, 38, 39 The RDME, spatially extended from the chemical master equation (CME), is also an exact formulation of reaction-diffusion systems, but in most cases the master equations cannot be solved analytically due to the high dimensionality of the state space.28, 40 The RDME and the spatial SSA are valid when the system subvolumes are assumed to be well-stirred.41, 42, 43, 44 The well-stirred assumption is satisfied when the typical time interval between reaction events within a subvolume is greater than the typical time interval between diffusion events between two adjacent subvolumes.42, 45 Simulating all of these frequent diffusion events between subvolumes, in addition to simulating every reaction event in all subvolumes, further increases the computational load and makes simulations slow.
In order to accelerate stochastic simulations of chemically reacting systems, alternative but still exact formulations of the SSA have been developed to reduce the time spent per simulation time-step.46, 47, 48, 49, 50 However, simulating every reaction event by an exact, although efficient, algorithm can still be too slow.48 Therefore, approximate formulations of the SSA have been developed to reduce the total number of simulation time-steps.51, 52, 53, 54, 55, 56, 57, 58, 59, 60
To mitigate the added computational burden in simulating reaction-diffusion systems, deterministic-stochastic hybrid methods45, 61, 62 and stochastic-stochastic hybrid methods62, 63, 64, 65 have been developed. Deterministic-stochastic hybrid methods treat diffusion events deterministically and reaction events stochastically. The stochastic-stochastic hybrid methods simulate reaction and diffusion events independently of each other by alternating the simulation of each type of event at predetermined intervals of time. Although these hybrid methods are sometimes faster than an exact or approximate spatial SSA, it has been shown that treating diffusion and reaction events separately, in a predetermined time-step interval and/or an adaptively calculated next time-step interval, may introduce errors that affect not only the accuracy of simulation mean values but also, on occasion, efficiency: for example, ignoring outgoing diffusion events when calculating the next time-step may lead to an inadvertently small time-step value.66, 67
Taking into account that calculating the next time-step interval needs to consider both reaction and diffusion events in an integrated manner in order to achieve accuracy, we have recently presented the spatial direct method with gradient-based diffusion68 to accelerate the spatial SSA. The spatial direct method with gradient-based diffusion is an approximate formulation of the spatial SSA that reduces the total number of simulation time-steps by sampling only the net or observed diffusion events based on concentration gradients between neighboring subvolumes. The method gives accurate simulation mean values compared with those obtained with an exact spatial SSA. We have observed that the method markedly reduces both the number of simulation time-steps and simulation time for the reaction-diffusion systems tested.
Quantifying the accuracy of an approximate SSA, whether spatial or non-spatial, involves determining whether the approximate method correctly captures not only the mean behavior of molecular populations but also their probability distributions compared with an exact SSA. Different simulation runs of the approximate spatial SSA with different random seeds give different spatiotemporal outcomes whose mean values and/or distributions can be compared with those obtained from the exact spatial SSA. The fluctuations in the different outcomes represent the stochasticity in molecular populations and underlie the motive for using the SSA. Approximate formulations of the SSA, especially those that take large time steps for some fast reactions, trade a speedup in simulation time for some sacrifices in capturing the correct fluctuations in species involved in the fast reactions.69, 70 Similarly, we have observed that the spatial direct method with gradient-based diffusion gives reduced fluctuations of diffusing species.68
In this paper, we address how to recover the fluctuations of interest that are reduced due to the gradient-based diffusion method. We derive the full probability distribution of diffusing species from the maximum caliber principle71 and use the distribution to retain fluctuations of interest. The improved algorithm that incorporates this new approach achieves a reduction in the total number of simulation time-steps and simulation time whose magnitude is inversely related to the level of desired fluctuations to be retained. The new algorithm provides a capability to set the desired fluctuation level per diffusing species, which facilitates finding a good balance between the level of exactness in simulation results and the simulation speed.
In the subsequent sections, we first explain relevant previous work that includes Gillespie's direct SSA23 and its spatial extension,25, 27, 72, 73 our spatial direct method with gradient-based diffusion,68 and the maximum caliber principle.71 We then describe our new technique to derive and retain desired levels of fluctuations in diffusing species of interest, and the improved spatial direct method with gradient-based diffusion that incorporates the new technique. We show numerical results that demonstrate that our new algorithm is able to successfully retain fluctuations of diffusing species of interest while reducing the total number of simulation time-steps and simulation time.
PREVIOUS WORK
Gillespie's direct method and its spatial extension
Gillespie's direct method23 is an exact stochastic simulation algorithm to simulate the time evolution of a system of N molecular species {S1, …, SN} that are involved in M chemical reaction channels {R1, …, RM} in a well-stirred volume Ω in thermal equilibrium. The state vector X(t) = (X1(t), …, XN(t)) denotes the number of molecules of species Si (i = 1, 2, …, N) in Ω at time t. The dynamics of each reaction channel Rj (j = 1, 2, …, M) is characterized by a propensity function aj and the state change vector vj. Given x = X(t), aj(x)dt = cjhj(t)dt gives the probability that an Rj reaction will occur in Ω in the next time interval [t, t + dt) where cjdt denotes the average probability that one particular combination of Rj reactant molecules will react in the next time interval [t, t + dt), and hj(t) is the number of distinct combinations of Rj reactant molecules found to be present in Ω at time t. The state change vector vj = (v1j, …, vNj) gives the change in the molecular populations induced by one Rj reaction.
The joint probability density function p(τ, μ|x, t) gives the probability that the next reaction occurs in the infinitesimal time interval [t + τ, t + τ + dτ) and is an Rμ reaction:

$$p(\tau, \mu \mid x, t) = a_\mu(x)\,\exp\!\big(-a_0(x)\,\tau\big), \qquad a_0(x) = \sum_{j=1}^{M} a_j(x).$$

At each time-step, two random numbers r1 and r2 from the uniform distribution in the unit interval are generated. The next reaction occurs at t + τ, where

$$\tau = \frac{1}{a_0(x)}\,\ln\!\left(\frac{1}{r_1}\right), \qquad (1)$$
and the index μ of the next reaction is the smallest integer that satisfies

$$\sum_{j=1}^{\mu} a_j(x) > r_2\,a_0(x). \qquad (2)$$
The state vector is then updated by X(t + τ) = x + vμ, and the process repeats until the end of the desired simulation time. Each simulation run generates a trajectory of the following CME that describes the time evolution of P(x, t|x0, t0), the probability that X(t) = x given X(t0) = x0:

$$\frac{\partial P(x,t \mid x_0,t_0)}{\partial t} = \sum_{j=1}^{M}\Big[a_j(x - v_j)\,P(x - v_j, t \mid x_0, t_0) - a_j(x)\,P(x, t \mid x_0, t_0)\Big]. \qquad (3)$$
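As an illustration of Eqs. 1 and 2, the following minimal sketch (in Java, the language used for the implementations compared later in this paper) performs one time-step of the direct method given an array of propensity values; the array layout, method names, and the example propensities in `main` are illustrative rather than part of the published implementation.

```java
import java.util.Random;

// Minimal sketch of one direct-method time-step (Eqs. 1 and 2).
// Propensity bookkeeping and state updates are omitted.
public class DirectMethodStep {

    // Returns {tau, mu}: waiting time and index of the next reaction,
    // given the current propensities a[0..M-1].
    static double[] step(double[] a, Random rng) {
        double a0 = 0.0;
        for (double aj : a) a0 += aj;            // a0(x) = sum_j a_j(x)
        double r1 = 1.0 - rng.nextDouble();      // uniform in (0, 1], avoids log(0)
        double r2 = rng.nextDouble();
        double tau = Math.log(1.0 / r1) / a0;    // Eq. (1)
        int mu = 0;
        double partial = a[0];
        while (mu < a.length - 1 && partial <= r2 * a0) {
            partial += a[++mu];                  // smallest mu with running sum > r2*a0, Eq. (2)
        }
        return new double[] { tau, mu };
    }

    public static void main(String[] args) {
        double[] a = { 0.5, 1.5, 2.0 };          // example propensities for three channels
        double[] next = step(a, new Random(42));
        System.out.println("tau = " + next[0] + ", mu = " + (int) next[1]);
    }
}
```

The linear search over partial sums is the selection implied by Eq. 2; the alternative exact formulations cited in the Introduction reduce this per-step cost.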
A spatial extension of Gillespie's direct method partitions the system volume Ω into NV cubical subvolumes {V_1, …, V_{N_V}}, each of which is assumed to be well-stirred. Each subvolume has its own list of N molecular species that are involved in M reactions as described above for the non-spatial direct method. Diffusive transfers of molecular species S_i from subvolume V_g to a neighboring subvolume V_f are denoted as unimolecular reactions, S_i^g → S_i^f,25, 26, 27, 72 and these unimolecular reactions are appended to the list of M reactions taking place in the subvolume. The diffusion propensity of the first-order reaction is given by d_igf · h_igf(t), where d_igf = D_i/u^2 and h_igf(t) = X_ig(t); D_i is the diffusion rate constant of S_i, u is the side length of the cubic subvolumes, and X_ig(t) is the number of S_i molecules in V_g at time t. Gillespie's direct method is then applied to the entire system that contains N · NV molecular species involved in the reaction channels comprising the NV appended lists of reactions.
Spatial direct method with gradient-based diffusion
The spatial direct method with gradient-based diffusion68 is an approximate formulation of the spatially extended Gillespie's direct method. As described above, the exact formulation treats diffusive transfers between subvolumes as two independent birth or death processes. The spatial direct method with gradient-based diffusion takes into account the fact that the movement of molecules by diffusion occurs along the concentration gradient from a higher concentration to a lower concentration as described by the diffusion equation:
$$\frac{\partial S_i}{\partial t} = -\nabla\cdot J, \qquad (4)$$

where J = −D_i∇S_i by Fick's law. Accelerated simulation is achieved by calculating the diffusion propensity d_igf · h_igf(t) using the same d_igf = D_i/u^2 as above, but defining h_igf(t) so as to incorporate the concentration gradient that directs the net movement of molecules described by the diffusion equation:

$$h_{igf}(t) = \begin{cases} X_{ig}(t) - X_{if}(t), & \text{if } X_{ig}(t) > X_{if}(t),\\ 0, & \text{otherwise.} \end{cases} \qquad (5)$$

d_igf · h_igf(t) dt then gives the probability that a net movement of an S_i molecule by diffusion will occur from V_g to V_f in the next time interval [t, t + dt). Because h_igf(t) as defined in Eq. 5 is always less than or equal to X_ig(t), the diffusion propensity d_igf · h_igf(t) is always less than or equal to d_igf · X_ig(t). This decrease in the diffusion propensity value leads to a decrease in a_0(x) and ultimately to a greater step size τ.68
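To make the difference between the two diffusion treatments concrete, here is a minimal sketch (Java; the variable names Di, u, Xig, and Xif are illustrative) of the exact diffusion propensity d_igf · X_ig(t) and the gradient-based propensity of Eq. 5.

```java
public class DiffusionPropensity {
    // d_igf = D_i / u^2: per-molecule jump rate between adjacent subvolumes.
    static double jumpRate(double Di, double u) {
        return Di / (u * u);
    }

    // Exact spatial SSA: every jump out of Vg is a candidate event, h_igf = X_ig.
    static double exactPropensity(double Di, double u, long Xig) {
        return jumpRate(Di, u) * Xig;
    }

    // Gradient-based diffusion (Eq. 5): only the net flux down the gradient is sampled,
    // h_igf = X_ig - X_if when X_ig > X_if, and 0 otherwise.
    static double gradientPropensity(double Di, double u, long Xig, long Xif) {
        return jumpRate(Di, u) * Math.max(Xig - Xif, 0);
    }
}
```

When Xig equals Xif the gradient-based propensity vanishes while the exact propensity does not; this is the source of the reduced fluctuations addressed in this paper.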
The maximum caliber principle
The maximum caliber principle provides a solution to a statistical inference problem by reformulating nonequilibrium statistical mechanics in terms of information theory.71, 74 It extends Jaynes’ principle of maximum entropy, which predicts a system's observable macrostate in equilibrium by identifying its underlying microstate probabilities as those that are consistent with the measured average quantity or other constraints and, at the same time, maximize the information theory entropy.71, 74, 75, 76 Given a set of macroscopically measured quantities {x_1, x_2, …, x_N} of a system in equilibrium and the constraints that $\sum_i p(x_i) = 1$ and $\sum_i x_i\,p(x_i) = \langle x\rangle$, where p(x_i) denotes the probability of the system being in microstate i, there is insufficient information to obtain the unknown p(x_i)'s since there are N unknown p(x_i)'s but only two constraints. The principle of maximum entropy71, 74 states that the best probability distribution we can infer based on the given information is the one that maximizes the information theory entropy, $S = -\sum_i p(x_i)\ln p(x_i)$. Jaynes and others71, 74, 77, 78 have shown that, when ⟨x⟩ above denotes the average energy ⟨E⟩ of a system in equilibrium, solving for the p(x_i)'s that maximize the entropy via the use of Lagrange multipliers gives the Boltzmann distribution:

$$p(x_i) = \frac{e^{-E_i/k_B T}}{\sum_j e^{-E_j/k_B T}}.$$
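A brief sketch of the constrained maximization behind this result (the standard Lagrange-multiplier argument, using the notation above):

$$\mathcal{L} = -\sum_i p(x_i)\ln p(x_i) \;-\; \lambda_0\Big(\sum_i p(x_i) - 1\Big) \;-\; \lambda_1\Big(\sum_i E_i\,p(x_i) - \langle E\rangle\Big),$$

$$\frac{\partial \mathcal{L}}{\partial p(x_i)} = -\ln p(x_i) - 1 - \lambda_0 - \lambda_1 E_i = 0 \;\;\Longrightarrow\;\; p(x_i) \propto e^{-\lambda_1 E_i},$$

and normalizing while identifying $\lambda_1 = 1/k_B T$ recovers the Boltzmann form given above.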
The principle of maximum entropy is applicable to systems that are not in equilibrium because the information theory entropy is a measure of uncertainty in any probability distribution and not limited to equilibrium systems.71, 74, 77 For a system that is evolving toward equilibrium, Jaynes extended the principle of maximum entropy to the maximum caliber principle, in which p(x_i) denotes the probability of the system's microtrajectory i (instead of microstate i) and the macroscopically measured quantities {x_1, x_2, …, x_N} are time-dependent.71, 77, 79 Thus, according to the maximum caliber principle, given a set of macroscopically measured, time-dependent quantities {x_1(t), x_2(t), …, x_N(t)} of a system and other constraints such as the average and variance of those quantities, the best probability distribution of microtrajectories that we can infer based on the given information is the one that maximizes the information theory entropy, $S = -\sum_i p(x_i(t))\ln p(x_i(t))$, where p(x_i(t)) denotes the probability of the system's microtrajectory i at time t. When there is only one constraint such that $\sum_i p(x_i(t)) = 1$, maximization of the entropy yields that all possible microtrajectories are equally likely during the system's evolution from t to t + dt. However, additional constraints, such as $\sum_i x_i(t)\,p(x_i(t)) = \langle x(t)\rangle$, where ⟨x(t)⟩ denotes the observed average value at time t, lead to a probability distribution conditioned on those constraints.77
IMPROVED SPATIAL DIRECT METHOD WITH GRADIENT-DIFFUSION WITH RECOVERY OF FLUCTUATIONS IN DIFFUSING SPECIES OF INTEREST
Our objective is to recover fluctuations in diffusing species of interest that are lost due to gradient-based diffusion while still retaining the advantage of improved simulation speed by reducing the number of time-steps needed to complete a simulation run. We accomplish our objective by first enumerating all microscopic fluctuations associated with macroscopic average flux and calculating their probabilities, and then incorporating this finding into our algorithm by a computationally efficient estimate of the degree of fluctuations. We first state the assumptions and definitions to be used. We next show how to derive and estimate fluctuations. We finally present the improved algorithm that is able to recover the desired level of fluctuations in diffusing species.
Assumptions and definitions
Assume a closed volume Ω with Neumann (zero-flux) boundary conditions on ∂Ω that contains N molecular species {S_1, …, S_N} involved in M chemical reaction channels {R_1, …, R_M} and is divided into NV cubical subvolumes {V_1, …, V_{N_V}}. Assume that the size of each subvolume is small enough that it is well-stirred. Further assume that n_d molecular species (1 ≤ n_d ≤ N) diffuse with diffusion coefficients greater than zero. Let the state vector X_g(t) = (X_{1g}(t), …, X_{Ng}(t)) denote the numbers of molecules of S_i (i = 1, 2, …, N) in subvolume V_g (g = 1, 2, …, NV) at time t. Then X(t) = (X_1(t), …, X_{N_V}(t)) denotes the NV by N state matrix of the system at time t. The N molecular species in each subvolume V_g are involved in M chemical reaction channels {R_1, …, R_M} and n_d · n_g diffusion channels {F_igf}, where n_g is the number of V_g's neighboring subvolumes; i ∈ I_d, where I_d is the set of indices of the n_d molecular species with diffusion coefficients greater than zero; and f denotes the index of each neighboring subvolume of V_g, f ∈ {1, …, NV} (f ≠ g). The dynamics of each diffusion channel F_igf is then characterized by a propensity function b_igf(x)dt = d_igf · h_igf(t) dt, and the state change matrix ν_igf gives the change in the molecular populations induced by one F_igf diffusion event, where ν_igf denotes the NV by N matrix whose entries are all zero except for the entries (g, i) = −1 and (f, i) = +1.
Finding distribution of microscopic trajectories and calculating their probabilities
Our goal is to retain the fluctuations in diffusing species of interest that the spatial direct method with gradient-based diffusion loses due to its sampling of only the average flux. Fluctuations are approximated as a Gaussian noise term in Langevin and Fokker-Planck equations that assume sufficiently large molecular populations.29, 80 However, when the number of molecules is small, the Gaussian noise approximation is not always suitable.80, 81 We use the maximum caliber principle77 to derive the full probability distribution of microtrajectories that are consistent with both the observed macroscopic values and microscopic fluctuations in the system.
Suppose there are X_ig(t) and X_if(t) molecules of S_i in V_g and V_f, respectively, at time t. Further suppose that species S_i has a nonzero diffusion coefficient, and that V_g and V_f are adjacent. Without loss of generality, we also assume that X_ig(t) ≥ X_if(t). Consider a time interval [t, t + Δt) where Δt is small enough that each S_i molecule can jump to at most one neighboring subvolume in [t, t + Δt). The average particle flux in [t, t + Δt) from V_g to V_f will be (X_ig(t) − X_if(t)) · d_igf · Δt. Microscopically, within the same time interval [t, t + Δt), there are (X_ig(t) + 1) different trajectories in which x_ig of the X_ig(t) molecules jump to the neighboring subvolume V_f, with x_ig = 0, 1, …, X_ig(t), and (X_if(t) + 1) different trajectories in which x_if of the X_if(t) molecules jump to the neighboring subvolume V_g, with x_if = 0, 1, …, X_if(t). The probability that x_ig of the X_ig(t) molecules diffuse to V_f in [t, t + Δt) is b(x_ig; X_ig(t), p_i), where p_i = d_igf · Δt and b(k; n, p) denotes the binomial probability

$$b(k; n, p) = \binom{n}{k} p^{k} (1 - p)^{n - k}.$$

Similarly, the probability that x_if of the X_if(t) molecules diffuse to V_g in the next time interval [t, t + Δt) is b(x_if; X_if(t), p_i). Note that p_i is dimensionless because d_igf and Δt are in units of [time⁻¹] and [time], respectively. The diffusive jumps of x_ig molecules from V_g and the diffusive jumps of x_if molecules from V_f are independent events since F_igf and F_ifg are unimolecular reactions involving independent reactant populations. The probability that two independent events both occur is the product of the probabilities of each event.86 Therefore, the probability that x_ig molecules diffuse from V_g to V_f and x_if molecules diffuse from V_f to V_g in [t, t + Δt) is

$$b(x_{ig}; X_{ig}(t), p_i)\; b(x_{if}; X_{if}(t), p_i). \qquad (6)$$

Because there are (X_ig(t) + 1)·(X_if(t) + 1) different pairs (x_ig, x_if) with x_ig = 0, 1, …, X_ig(t) and x_if = 0, 1, …, X_if(t), we can enumerate (X_ig(t) + 1)·(X_if(t) + 1) different ways that x_ig of the X_ig(t) molecules diffuse to V_f and x_if of the X_if(t) molecules diffuse to V_g. We are interested in the distribution of the net number of molecules, (x_ig − x_if), that move between the subvolumes rather than in all distinct permutations of (x_ig, x_if) that give rise to the same (x_ig − x_if) value. Letting j(t) = (x_ig − x_if), where (x_ig − x_if) ∈ {−X_if(t), −X_if(t) + 1, …, X_ig(t)}, we calculate p(j(t)), which is the sum of the binomial probabilities shown in Eq. 6 over all pairs (x_ig, x_if) such that j(t) = x_ig − x_if:

$$p\big(j(t)\big) = \sum_{x_{ig} - x_{if} = j(t)} b(x_{ig}; X_{ig}(t), p_i)\; b(x_{if}; X_{if}(t), p_i). \qquad (7)$$
All of p(j(t))'s then give rise to the set of probabilities of all possible microtrajectories of the diffusive flux of Si molecules between Vg and Vf in [t + Δt). Positive values of j(t) indicate microtrajectories where molecules diffuse along the concentration gradient between the two subvolumes, whereas negative values of j(t) indicate microtrajectories where molecules diffuse against the concentration gradient. The set of p(j(t))'s is also the full probability distribution that is inferred by the maximum caliber principle71 and predicts the macroscopic average flux to be digf · higf(t) · Δt, which agrees with both Fick's law77 and the diffusion propensity in our spatial direct algorithm with gradient-based diffusion.68
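To illustrate Eqs. 6 and 7, the sketch below (Java; the names Xig, Xif, and p stand for X_ig(t), X_if(t), and p_i = d_igf·Δt, and are assumptions of this example) enumerates the distribution p(j) of the net flux by accumulating the products of the two independent binomial probabilities. The binomial terms are computed by a multiplicative recurrence rather than from factorials, in line with the overflow concern discussed below.

```java
public class FluxDistribution {
    // Binomial pmf values b(k; n, p) for k = 0..n, computed by the stable recurrence
    // b(k+1) = b(k) * (n - k)/(k + 1) * p/(1 - p), avoiding factorials (assumes 0 <= p < 1).
    static double[] binomialPmf(int n, double p) {
        double[] b = new double[n + 1];
        b[0] = Math.pow(1.0 - p, n);
        for (int k = 0; k < n; k++) {
            b[k + 1] = b[k] * ((double) (n - k) / (k + 1)) * (p / (1.0 - p));
        }
        return b;
    }

    // p(j) for j = -Xif .. +Xig (Eq. 7): sum of products of the two independent
    // binomials (Eq. 6) over all pairs (xig, xif) with xig - xif = j.
    // The array index j + Xif addresses the value for net flux j.
    static double[] fluxDistribution(int Xig, int Xif, double p) {
        double[] bg = binomialPmf(Xig, p);
        double[] bf = binomialPmf(Xif, p);
        double[] pj = new double[Xig + Xif + 1];
        for (int xig = 0; xig <= Xig; xig++) {
            for (int xif = 0; xif <= Xif; xif++) {
                pj[xig - xif + Xif] += bg[xig] * bf[xif];
            }
        }
        return pj;
    }
}
```

For example, with Xig = 5, Xif = 2, and p = 0.1, summing j·p(j) over the support reproduces the average flux (X_ig(t) − X_if(t))·p_i = 0.3, as expected from the macroscopic prediction above.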
Estimating and recovering fluctuations
The probability distribution of microtrajectories obtained above assigns different probabilities to the different microtrajectories the system can follow. The spatial direct method with gradient-based diffusion only follows the macroscopic average trajectory, and consequently its simulation ensemble does not have fluctuations resulting from different microscopic trajectories. Intuitively, we expect that the degree of fluctuations in the distribution becomes smaller when the probability p(j(t) = h_igf(t)) is larger, that is, when the sum of the probabilities of those microtrajectories corresponding to the macroscopic average trajectory is larger. Similarly, we also expect that the degree of fluctuations in the distribution becomes smaller when the cumulative probability P(h_igf(t) − Δj ≤ j(t) ≤ h_igf(t) + Δj) is larger for some small Δj. The drawback in using these measures to estimate the degree of fluctuations is that we must calculate one or more p(j(t))'s as shown in Eq. 7 for different j(t)'s, and this calculation can incur a high computational overhead. Calculating the binomial probability b(k;n,p) is not always practical unless n is a small integer because n! may cause overflow of the computer's floating-point representation.85 Moreover, even with the use of logarithms and the gamma function Γ(n + 1) = n! to calculate the binomial coefficient, repeated evaluation of the exponential function can be computationally expensive.85 Incurring too much computational overhead can lead to a longer simulation time even when the number of total time-steps is reduced, as observed with the nonnegative tau-leap strategy.36, 52, 68
Instead, we use two well-known properties of probability to estimate the degree of fluctuations in the probability distribution of microtrajectories. The first property is that the mean and variance of a binomial distribution with the probability mass function b(k;n,p) are np and np(1 − p), respectively. The other describes the property of a linear combination of two independent random variables R1 and R2 such that if R1 and R2 have mean values μ1 and μ2, and variances σ1² and σ2², respectively, the mean and variance of R1 − R2 are μ1 − μ2 and σ1² + σ2², respectively.86 From these properties, the microtrajectory probability distribution of S_i between the two subvolumes V_g and V_f in [t, t + Δt) has mean μ_i(t) and variance σ_i²(t):

$$\mu_i(t) = \big(X_{ig}(t) - X_{if}(t)\big)\,p_i, \qquad \sigma_i^2(t) = \big(X_{ig}(t) + X_{if}(t)\big)\,p_i\,(1 - p_i).$$
We then estimate the degree of fluctuations by calculating the index of dispersion,87, 88 which measures how dispersed or clustered the underlying dataset is, and is defined as variance divided by the mean value:
$$\frac{\sigma_i^2(t)}{\mu_i(t)} = \frac{\big(X_{ig}(t) + X_{if}(t)\big)(1 - p_i)}{X_{ig}(t) - X_{if}(t)}. \qquad (8)$$
The reciprocal of the left-hand side of Eq. 8 appears in the flux-fluctuation theorem,89 which gives the ratio of the probabilities of fluxes along and against the concentration gradient as

$$\frac{p\big(j(t) = x(t)\big)}{p\big(j(t) = -x(t)\big)} = \exp\!\left(\frac{2\,x(t)\,\mu_i(t)}{\sigma_i^2(t)}\right), \qquad (9)$$

where x(t) > 0.
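A small sketch of the quantities entering Eq. 8 (Java; the arguments are illustrative and assume X_ig(t) > X_if(t), so that the mean is positive):

```java
public class DispersionIndex {
    // Mean and variance of the net flux j = x_ig - x_if over [t, t + dt),
    // from the binomial properties cited above: mean = n*p, variance = n*p*(1-p),
    // and variances of independent variables add under subtraction.
    static double mean(long Xig, long Xif, double p) {
        return (Xig - Xif) * p;
    }

    static double variance(long Xig, long Xif, double p) {
        return (Xig + Xif) * p * (1.0 - p);
    }

    // Index of dispersion (Eq. 8): variance divided by mean. The factor p cancels,
    // leaving (Xig + Xif)(1 - p)/(Xig - Xif), so no p(j) values need to be enumerated.
    static double indexOfDispersion(long Xig, long Xif, double p) {
        return variance(Xig, Xif, p) / mean(Xig, Xif, p);
    }
}
```

Because only the populations and p_i enter this expression, the estimate costs a few arithmetic operations per subvolume pair, in contrast to evaluating Eq. 7 directly.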
Equation 8 shows that when X_ig(t) − X_if(t) is small and X_ig(t) + X_if(t) is large, the amount of variability relative to the mean, and hence the index of dispersion, increases. When the index of dispersion increases, Eq. 9 indicates that a microtrajectory that runs against the concentration gradient becomes exponentially more probable.82, 83, 84 We can impose a lower bound β on Eq. 8 to indicate that when σ_i²(t)/μ_i(t) is greater than β, the degree of fluctuations is high. The condition σ_i²(t)/μ_i(t) > β can be rewritten as

$$X_{ig}(t) - X_{if}(t) < \frac{1 - p_i}{\beta}\,\big(X_{ig}(t) + X_{if}(t)\big). \qquad (10)$$
In addition, by the definition of p_i,

$$\frac{1 - p_i}{\beta} = \frac{1 - d_{igf}\,\Delta t}{\beta}. \qquad (11)$$
Combining Eqs. 10 and 11 and introducing a new parameter α (0 ≤ α ≤ 1) to replace $(1 - d_{igf}\,\Delta t)/\beta$, we establish the following condition to denote whether the degree of fluctuations in the probability distribution is high:

$$X_{ig}(t) - X_{if}(t) \le \alpha\,\big(X_{ig}(t) + X_{if}(t)\big). \qquad (12)$$
This condition is incorporated into the spatial direct method with gradient-based diffusion such that when the condition is satisfied for a chosen value of α, diffusion is simulated by using the exact strategy; otherwise, the gradient-based diffusion strategy is used. When α is set to zero, all diffusive transfers are simulated via the gradient-diffusion strategy whereas when α is set to one, all diffusive transfers of Si molecules are simulated via the exact strategy. In practice, we have found that setting α to be 0.4 recovers most fluctuations while slightly reducing the number of total time-steps.
Improved spatial reaction-diffusion algorithm that retains diffusive fluctuations
Based on Eq. 12, we define a new diffusion propensity function γigf(x)dt as γigf(x)dt = digf × higf(t) dt, where
- higf(t) = Xig(t), when Xig(t) − Xif(t) ≤ αi · (Xig(t) + Xif(t));
- higf(t) = Xig(t) − Xif(t), when Xig(t) − Xif(t) > αi · (Xig(t) + Xif(t)) and Xig(t) ≥ Xif(t);
- higf(t) = 0, otherwise.
The parameter αi determines the degree of fluctuations that is retained in the diffusing species Si, between the full amount given by a simulation ensemble from the exact algorithm and the reduced amount given by a simulation ensemble from the spatial direct method with gradient-based diffusion. The parameter can be specified separately for each diffusing species.
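A minimal sketch of this switching rule (Java; the per-species parameter alphaI and the other variable names are illustrative), assuming the branch structure described above:

```java
public class FluctuationGradientPropensity {
    // gamma_igf = d_igf * h_igf(t), where h_igf switches between the exact and the
    // gradient-based definition according to the fluctuation condition of Eq. (12).
    static double propensity(double Di, double u, long Xig, long Xif, double alphaI) {
        double d = Di / (u * u);                         // d_igf = D_i / u^2
        if (Xig - Xif <= alphaI * (Xig + Xif)) {
            // High-fluctuation regime: simulate diffusion exactly, h = X_ig.
            return d * Xig;
        }
        // Low-fluctuation regime: sample only the net flux down the gradient,
        // h = X_ig - X_if (the max is a formality; the condition above already
        // fails only when X_ig exceeds X_if for nonnegative alphaI).
        return d * Math.max(Xig - Xif, 0);
    }
}
```

Setting alphaI to 0 reproduces the gradient-based propensity for every pair of subvolumes, while setting it to 1 reproduces the exact propensity, matching the limits described above.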
Our new improved algorithm can be summarized as follows (a code sketch of this loop is given after the list):
- At t = t0, initialize the system's state to x = X(t0). Calculate all aj(x) and γigf(x) and their sum a0(x).
- Iterate while t < tend:
  - Compute the next time-step τ over all reaction and diffusion channels.
  - Obtain the index of the next reaction or diffusion event, and advance the system.
  - Update t and x.
  - Recalculate those aj(x) and γigf(x) whose reactant species’ populations changed, and update a0(x).
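The skeleton below sketches this loop in Java (the language used for the implementations compared in the numerical results); the Model interface and its three methods are illustrative placeholders for the per-subvolume reaction and diffusion bookkeeping, not part of the published code.

```java
import java.util.Random;

public class FluctuationGradientSSA {
    // Placeholders for the per-subvolume bookkeeping of a_j(x) and gamma_igf(x).
    interface Model {
        double[] initialPropensities();               // all a_j(x) and gamma_igf(x) at t0
        void applyEvent(int mu);                      // advance the state by event mu
        void updatePropensities(int mu, double[] a);  // recompute only the affected entries
    }

    static void run(Model model, double tEnd, Random rng) {
        double t = 0.0;
        double[] a = model.initialPropensities();
        while (t < tEnd) {
            double a0 = 0.0;
            for (double aj : a) a0 += aj;             // a0(x)
            if (a0 <= 0.0) break;                     // no event can fire
            double tau = Math.log(1.0 / (1.0 - rng.nextDouble())) / a0;   // Eq. (1)
            double r2 = rng.nextDouble();
            int mu = 0;
            double partial = a[0];
            while (mu < a.length - 1 && partial <= r2 * a0) partial += a[++mu];  // Eq. (2)
            model.applyEvent(mu);
            model.updatePropensities(mu, a);
            t += tau;
        }
    }
}
```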
NUMERICAL RESULTS
We have applied the algorithm to three different test cases, one involving 1D diffusion and two involving 2D reaction-diffusion. Results from all three test cases confirm that our new algorithm can successfully retain fluctuations in diffusing species of interest. They also show that the reduction in the number of total time-steps is inversely related to the level of fluctuations retained. We compare the accuracy of simulation results and the total average number of time-steps taken of the new algorithm (Fluctuation-Gradient) with those of the spatial direct method (Direct) and the spatial direct method with gradient-based diffusion (Gradient).
To quantify accuracy, three measures are used. The first measure is the space-averaged Kolmogorov distance90 to quantify the distance between two cumulative distributions, F1(x) and F2(x), which is defined as
$$D_K = \frac{1}{N_V}\sum_{g=1}^{N_V}\,\sup_x\big|F_{1g}(x) - F_{2g}(x)\big|, \qquad (13)$$

where F_1g(x) and F_2g(x) denote the cumulative distributions in subvolume V_g. Each distribution function F(x) is approximated by an empirical distribution function of a sample of size 1000 or 500. The Kolmogorov self-distance90 is defined as the Kolmogorov distance between two independent samples obtained from the exact simulation algorithm and is observed to be not arbitrarily small even when the sample size is as large as 10 000. It represents the amount of noise due to the natural fluctuations in the system.90 This inherent noise makes the Kolmogorov-Smirnov (K-S) test reject the null hypothesis that two samples obtained from the Direct method are drawn from the same distribution. Therefore, in lieu of the K-S test, we compare the Kolmogorov distance with the self-distance. The second measure computes the space-averaged mean number of species S:

$$S_{\mathrm{mean}}(t) = \frac{1}{N_V}\sum_{g=1}^{N_V}\big\langle X_{Sg}(t)\big\rangle, \qquad (14)$$

where ⟨X_Sg(t)⟩ denotes the number of S molecules in subvolume V_g at time t, averaged over the simulation ensemble. The third measure computes the space-averaged variance of the S population:

$$S_{\mathrm{var}}(t) = \frac{1}{N_V}\sum_{g=1}^{N_V}\mathrm{Var}\big(\{X_{Sg}(t)\}\big), \qquad (15)$$

where {X_Sg(t)} denotes the simulation ensemble of the number of S molecules in V_g at time t. We compare the values of Smean and Svar obtained from the Fluctuation-Gradient and Gradient methods with those from the Direct method. In addition to these three measures, to provide additional spatial information we also compare the cumulative distribution function (cdf) in a subset of subvolumes for the Fluctuation-Gradient, Direct, and Gradient methods.
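As an illustration of Eq. 13, the sketch below (Java; the data layout, with one row per subvolume holding the ensemble of counts, is an assumption of this example) computes the two-sample Kolmogorov distance per subvolume from empirical samples and averages it over subvolumes.

```java
import java.util.Arrays;

public class KolmogorovDistance {
    // Two-sample Kolmogorov distance sup_x |F1(x) - F2(x)| between the empirical
    // cdfs of samples s1 and s2 (e.g., molecule counts in one subvolume over an ensemble).
    static double distance(double[] s1, double[] s2) {
        double[] a = s1.clone(), b = s2.clone();
        Arrays.sort(a);
        Arrays.sort(b);
        int i = 0, j = 0;
        double d = 0.0;
        while (i < a.length && j < b.length) {
            double x = Math.min(a[i], b[j]);
            while (i < a.length && a[i] <= x) i++;   // advance both empirical cdfs past x
            while (j < b.length && b[j] <= x) j++;
            d = Math.max(d, Math.abs((double) i / a.length - (double) j / b.length));
        }
        return d;
    }

    // Space-averaged distance (Eq. 13): average the per-subvolume distances over all subvolumes.
    static double spaceAveraged(double[][] ens1, double[][] ens2) {
        double sum = 0.0;
        for (int g = 0; g < ens1.length; g++) sum += distance(ens1[g], ens2[g]);
        return sum / ens1.length;
    }
}
```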
Diffusion in 1D
The 1D diffusion problem is defined by a species A whose diffusion coefficient is 100 μm² s⁻¹ on a 5 μm × 0.5 μm × 0.5 μm volume. The simulation volume is divided into 10 × 1 × 1 subvolumes. Initially, AI molecules of A are all placed in the leftmost subvolume. Starting from this initial condition, the problem is numerically simulated for 150 ms. Simulations are run with AI = 500 and AI = 5000. Each simulation ensemble consists of 1000 runs.
Figure 1 shows the ensemble average numbers of A molecules for both AI = 500 and AI = 5000 over 10 subvolumes numbered 1–10 from left to right at three different time points: t = 10 ms, 50 ms, and 150 ms. For each AI, ensemble averages were obtained from three sets of simulations by using three different methods: Fluctuation-Gradient with α = 0.4, Direct, and Gradient. We also plotted the results obtained by a deterministic solution. Figure 1 shows an excellent agreement between all of the ensemble means and the deterministic solution.
Figure 1.
Ensemble mean numbers of A molecules over 10 subvolumes from left to right that are arranged in increasing distance from the leftmost subvolume at three different times in simulation: t = 10, 50, and 150 ms. The left panel shows results with 500 molecules located in the leftmost subvolume initially. The right panel shows results with 5000 molecules located in the leftmost subvolume initially. The ensemble means from three different methods, Fluctuation-Gradient (α = 0.4), Exact, and Gradient are plotted together with the deterministic solution. The legend is the same for all plots.
Figure 2 shows space-averaged Kolmogorov distances (top row) and space-averaged variances (bottom row) of species A with AI = 500 and AI = 5000. Space-averaged Kolmogorov distances of the pairs (1) Direct and Direct (self-distance), (2) Direct and Fluctuation-Gradient (FG with α = 0.1, 0.2, 0.3, and 0.4), and (3) Direct and Gradient are plotted. Also plotted are space-averaged variances of ensembles obtained by the Direct, Fluctuation-Gradient (FG with α = 0.1, 0.2, 0.3, and 0.4), and Gradient methods. Space-averaged variances are markedly reduced with the Gradient method, and the space-averaged Kolmogorov distances of the Direct and Gradient pair diverge from the self-distance. Different initial conditions (i.e., AI = 500 and AI = 5000) and different values of α illustrate how the condition in Eq. 12 determines the degree of desired fluctuation. As α increases from 0 (Gradient) toward 1 (Direct), the degree of fluctuations retained in the simulation ensemble increases. This effect is most visible between t = 0 and t = 50 ms, when the gradient across the volume is relatively large due to the uneven distribution of molecules, as represented in Fig. 1, top panel (t = 10 ms). After 50 ms, as molecules become almost evenly distributed across the volume and the concentration gradient becomes small, the degree of fluctuations is better retained even with smaller values of α.
Figure 2.
Quantification of accuracy for the 1D diffusion problem. Space-averaged Kolmogorov distances (top row) and space-averaged variances (bottom row) over time for species A with 500 initial molecules in the leftmost subvolume are shown on the left, and for species A with 5000 initial molecules in the leftmost subvolume are shown on the right. The Kolmogorov distances are calculated for six pairs of samples: (1) Direct and Direct (self-distance), (2) Direct and Fluctuation-Gradient (FG) with α = 0.4, (3) Direct and FG with α = 0.3, (4) Direct and FG with α = 0.2, (5) Direct and FG with α = 0.1, and (6) Direct and Gradient. Space-averaged variances are calculated for six different methods: Direct, FG with α = 0.4, FG with α = 0.3, FG with α = 0.2, FG with α = 0.1, and Gradient.
Figure 3 shows the cdf of molecule A for AI = 500 in three different subvolumes—the leftmost subvolume (subvolume 1), the middle subvolume (subvolume 5), and the rightmost subvolume (subvolume 10)—at three different time points: t = 10 ms, 25 ms, and 50 ms. The cdfs were obtained from the Direct, Fluctuation-Gradient (FG with α = 0.1, 0.2, 0.3, and 0.4), and Gradient methods. The time points and spatial locations were selected to further compare the ensemble distributions when the Kolmogorov distances and the spatial-averaged variances as shown in Fig. 2 diverge most. As α increases from 0 (Gradient) toward 0.4 (FG with α = 0.4) and the degree of fluctuations retained in the simulation ensemble increases, the cdf from the FG method overlaps the cdf from the Direct method more closely. The discrepancy between cdfs from the FG and Direct methods is most prominent when the concentration gradient across the subvolumes is relatively large as seen in subvolume 1 at 10 ms and subvolume 5 at 25 ms.
Figure 3.
Cumulative distribution functions (cdfs) for species A with 500 initial molecules in the leftmost subvolume (subvolume 1), middle subvolume (subvolume 5), and rightmost subvolume (subvolume 10) at three different time points: t = 10 ms, 25 ms, and 50 ms. The cdfs are calculated for six different methods: Direct, FG with α = 0.4, FG with α = 0.3, FG with α = 0.2, FG with α = 0.1, and Gradient.
Figure 4 shows the average total number of time-steps (and 10 times standard deviation for ease of visualization) taken by Direct, FG (with α = 0.1, 0.2, 0.3, and 0.4), and Gradient methods for both AI = 500 and AI = 5000. As α is increased to retain a higher degree of fluctuations, the average total number of time-steps also increases. The average total number of time-steps taken by the Gradient method (α = 0) is an order of magnitude smaller than that taken by the Direct or Fluctuation-Gradient methods even with α = 0.1. This indicates that the average macroscopic trajectory taken by the Gradient method averages over a large number of microtrajectories.
Figure 4.
For two initial conditions (AI = 500 and AI = 5000), average total number of time-steps and 10 times standard deviation are plotted for simulation ensembles obtained by six different methods: Direct, FG with α = 0.4, FG with α = 0.3, FG with α = 0.2, FG with α = 0.1, and Gradient. Standard deviations are multiplied by 10 for ease of visualization.
Reaction-diffusion in 2D: Cyclic adenosine monophosphate (cAMP) activation of protein kinase A (PKA)
The 2D reaction-diffusion problem describes cAMP activation of PKA, which is part of a highly prevalent mammalian signaling pathway that translates an extracellular signaling molecule into an intracellular or nuclear response.91, 92 Protein kinase type A is a tetrameric protein comprised of two regulatory subunits (PKAr) and two catalytic subunits (PKAc). Activation of PKA occurs when two cAMP molecules bind to each regulatory subunit, followed by the dissociation of the catalytic subunits. PKAc has many downstream targets, including ion channels, synaptic channels, and transcription factors.
The cAMP activation of PKA problem is defined by
on a 10 μm × 11 μm × 1 μm volume. The simulation volume is divided into 10 × 11 × 1 subvolumes. The reaction rate constants are: k1 = 8.696 × 10⁻⁵ nM⁻¹ s⁻¹, k2 = 0.02 s⁻¹, k3 = 1.154 × 10⁻⁴ nM⁻¹ s⁻¹, k4 = 0.02 s⁻¹, k5 = 0.016 s⁻¹, and k6 = 0.0017 nM⁻¹ s⁻¹. The diffusion coefficient for cAMP is 300 μm² s⁻¹; the diffusion coefficient for all other species is zero. The system is initialized with 33 000 molecules of PKA evenly distributed across the subvolumes and 33 000 molecules of cAMP placed in the lower-right subvolume. The system is numerically simulated for 200 ms, and each simulation ensemble consists of 1000 runs.
Figure 5 shows the ensemble average numbers of PKA, cAMP, and the dissociated catalytic subunit of PKA (PKAc) over space at the end of simulation. Figure 5 shows an excellent qualitative agreement between all of the ensemble means, which is further demonstrated with the space-averaged means in Fig. 6 (middle row).
Figure 5.
Ensemble average numbers of PKA, cAMP, and PKAc molecules over space for the cAMP activation of PKA problem at the end of simulation, t = 200 ms. The simulation methods are, from top to bottom: Fluctuation-Gradient, Direct, and Gradient.
Figure 6.
Quantification of accuracy for the cAMP activation of PKA problem. Space-averaged Kolmogorov distances (top row), space-averaged means (middle row), and space-averaged variances (bottom row) of PKA, cAMP, and PKAc molecules over time for the cAMP activation of PKA problem are shown. The Kolmogorov distances are calculated for three pairs of samples: (1) Direct and Direct (self-distance), (2) Direct and Fluctuation-Gradient with αcAMP = 0.4, and (3) Direct and Gradient.
Figure 6 shows space-averaged Kolmogorov distances (top row), space-averaged means (middle row), and space-averaged variances (bottom row) of PKA, cAMP, and PKAc. Kolmogorov distances and variances show good agreement among the different methods for PKA and PKAc. For cAMP, which has a nonzero diffusion coefficient, the space-averaged variance is reduced when the Gradient method is used, and consequently the space-averaged Kolmogorov distance of the Direct and Gradient pair diverges from the self-distance. The ensemble produced by the Fluctuation-Gradient (α = 0.4) method shows excellent agreement with the Direct method in terms of both variance and Kolmogorov distance, again demonstrating that the Fluctuation-Gradient method (α = 0.4) properly estimates and recovers fluctuations of diffusing species.
Figure 7 shows the cdfs of cAMP at three different subvolumes (the lower-right subvolume, the middle subvolume, and the upper-left subvolume) at three different time points: t = 20 ms, 80 ms, and 200 ms. The data at these time points and spatial locations were selected to further compare the ensemble distributions in a spatially resolved manner. The discrepancy between cdfs from the FG and Direct methods is barely noticeable, further demonstrating the ability of the FG method to accurately capture the fluctuations.
Figure 7.
Cumulative distribution functions (cdfs) of cAMP at three different subvolumes (the upper-left subvolume, the middle subvolume, and the lower-right subvolume) at three different time points: t = 20 ms, 80 ms, and 200 ms. The cdfs are calculated for three different methods: Direct, FG with αcAMP = 0.4, and Gradient.
Reaction-diffusion in 2D: cAMP activation of protein kinase A with cAMP production from adenosine triphosphate (ATP)
The third test case is a 2D reaction-diffusion problem that extends the cAMP activation of PKA problem described in Sec. 4B. The second messenger cAMP activates ion channels and protein kinases in diverse cell types.93, 94, 95 cAMP is produced from ATP by adenylyl cyclase, whose activation depends on diverse transmembrane receptors.96 Typically, cells maintain large quantities of ATP. For the purpose of this simulation, the adenylyl cyclase is assumed to be in its active form (AC*), and cAMP production is initiated by injection of ATP.
The cAMP activation of PKA with cAMP production from ATP problem is defined by
on a 5 μm × 5 μm × 1 μm volume. The simulation volume is divided into 5 × 5 × 1 subvolumes. The reaction rate constants are defined as follows: k1 = 8.696 × 10⁻⁵ nM⁻¹ s⁻¹, k2 = 0.02 s⁻¹, k3 = 1.154 × 10⁻⁴ nM⁻¹ s⁻¹, k4 = 0.02 s⁻¹, k5 = 0.016 s⁻¹, k6 = 0.0017 nM⁻¹ s⁻¹, k7 = 128 000 × 10⁻⁹ nM⁻¹ s⁻¹, k8 = 0.26 s⁻¹, k9 = 28.46 s⁻¹, and k10 = 259 200 × 10⁻⁹ nM⁻¹ s⁻¹. The diffusion coefficient is 255 μm² s⁻¹ for ATP and 300 μm² s⁻¹ for cAMP; the diffusion coefficient for all other species is zero. Initially, 7500 molecules of PKA and 301 100 molecules of AC* are evenly distributed across the subvolumes. The reactions are initiated by injection of 600 000 molecules of ATP into the lower-right corner subvolume when the simulation starts. Starting from this initial condition, the problem is numerically simulated for 50 ms. Each simulation ensemble consists of 500 runs. We set α = 0.4 for cAMP and α = 0 for ATP. The average concentration of ATP in cells is estimated to be 1–2 mM, which corresponds to about 10⁶ molecules per cubic micron of volume. Therefore, αATP was set to zero under the assumption that fluctuations in ATP populations would not lead to significant stochastic effects.
This reaction-diffusion problem also shows the accuracy of the Fluctuation-Gradient method. Figure 8 shows the ensemble average numbers of PKA, cAMP, and the dissociated catalytic subunit of PKA (PKAc) over space at the end of the simulation interval. All ensemble means of the three species show excellent qualitative agreement. Figure 9 shows space-averaged Kolmogorov distances (top row), space-averaged means (middle row), and space-averaged variances (bottom row) of PKA, cAMP, PKAc, and ATP. For each species, the space-averaged mean values from the different methods all overlap. For the species with a zero diffusion coefficient, PKA and PKAc, space-averaged variances and Kolmogorov distances also show good agreement. The Fluctuation-Gradient method was simulated with αcAMP = 0.4 and αATP = 0. For cAMP, space-averaged variances and Kolmogorov distances obtained by using the Fluctuation-Gradient method agree well with those obtained by using the Direct method, whereas the Gradient method shows reduced variance and diverging Kolmogorov distance values. For ATP with αATP set to zero, space-averaged variances and Kolmogorov distances obtained from the Fluctuation-Gradient method completely overlap those from the Gradient method, as expected. Both the Gradient and Fluctuation-Gradient methods for ATP show reduced variance and a higher Kolmogorov distance than the self-distance. However, the reduced variance in ATP still allowed the variation in cAMP to be captured, while providing a rather significant gain in simulation speed as described in Sec. 4D.
Figure 8.
Ensemble average numbers of PKA, cAMP, and PKAc molecules over space for the cAMP activation of PKA with cAMP production from ATP problem at the end of simulation, t = 50 ms. The simulation methods are, from top to bottom: Fluctuation-Gradient, Direct, and Gradient.
Figure 9.
Quantification of accuracy for the cAMP activation of PKA with cAMP production from ATP problem. Space-averaged Kolmogorov distances (top row), space-averaged means (middle row), and space-averaged variances (bottom row) of PKA, cAMP, PKAc, and ATP molecules over time are shown. The Kolmogorov distances are calculated for three pairs of samples: (1) Direct and Direct (self-distance), (2) Direct and Fluctuation-Gradient with αcAMP = 0.4 and αATP = 0, and (3) Direct and Gradient.
Simulation time-steps and simulation time
Tables 1 and 2 show the total number of time-steps and the simulation time in seconds for the given simulation interval, both averaged over the ensemble, for the two 2D reaction-diffusion test cases using the Fluctuation-Gradient, Direct, and Gradient methods. All three methods were implemented in Java, and simulations were run on an 8-processor Xeon 2.27 GHz Linux cluster. All simulations were initialized with the same set of problem description files.
Table 1.
Mean total number of time-steps and mean simulation time for the cAMP activation of PKA problem.
| | Mean number of time-steps (standard deviation) | Mean simulation time in seconds (standard deviation) |
|---|---|---|
| Direct | 19.419 × 10⁵ (9007) | 493 (66) |
| Fluctuation-Gradient (αcAMP = 0.4) | 19.244 × 10⁵ (9047) | 440 (50) |
| Gradient | 2.015 × 10⁵ (378) | 45 (9.2) |
Table 2.
Mean total number of time-steps and mean simulation time for the cAMP activation of PKA problem with cAMP production from ATP.
| | Mean number of time-steps (standard deviation) | Mean simulation time in seconds (standard deviation) |
|---|---|---|
| Direct | 230.568 × 10⁵ (6663) | 3896 (304) |
| Fluctuation-Gradient (αcAMP = 0.4 and αATP = 0) | 27.646 × 10⁵ (2079) | 320 (36) |
| Gradient | 24.636 × 10⁵ (523) | 262 (25) |
The Fluctuation-Gradient method with αcAMP = 0.4 was used for the cAMP activation of PKA problem. In this case, the Fluctuation-Gradient method decreased the average number of time-steps by only 1% when compared with the average number taken by the Direct method. However, the Fluctuation-Gradient method with the parameter value αcAMP = 0.4 produces fluctuations that are essentially equivalent to those in the simulation ensemble obtained by the Direct method, as shown in Figs. 6 and 7. The Gradient method, as we have shown previously,68 dramatically decreased the average number of time-steps by 90% when compared with the average number taken by the Direct method, but with concurrently reduced fluctuations because it samples only the net macroscopic diffusion events based on concentration gradients.
The Fluctuation-Gradient method with αcAMP = 0.4 and αATP = 0 was used for the cAMP activation of the PKA problem with cAMP production from ATP. As shown in Table 2, the Fluctuation-Gradient method decreased the average number of time-steps taken and the simulation time by about 90% compared with the Direct method. This dramatic reduction compared with the slight 1% reduction due to using the parameter value αcAMP = 0.4 in the previous problem is primarily due to choosing αATP = 0, which makes the Fluctuation-Gradient method simulate diffusive transfers of ATP via the gradient-diffusion strategy. Results in Table 2 demonstrate that our improved spatial direct method with gradient-based diffusion achieves successful retention of microscopic fluctuations of diffusive species of interest—species with low to moderate population numbers—while still being able to attain a substantial speedup in simulation time.
DISCUSSION
We have presented the improved spatial direct method based on gradient-diffusion that is able to retain the same degree of fluctuations in diffusing species as the exact algorithm. We have applied the maximum caliber principle71 to enumerate the full probability distribution of microtrajectories that underlie the time-dependent macroscopic trajectory of diffusing species, and used this probability distribution in a computationally efficient manner to estimate and recover fluctuations.
Our new algorithm makes two contributions. First, based on the maximum caliber principle,71 it provides a computationally efficient way to derive and retain microscopic fluctuations that are consistent with the macroscopically predicted values in the system under consideration. Second, with the introduction of parameter α in Eq. 12 for each diffusing species, it allows for controlling the amount of microscopic fluctuations that are of interest while ignoring fluctuations that do not affect downstream reactions in return for a substantial gain in simulation speed.
With the parameter α unique to each diffusing species, our algorithm provides a mechanism to choose the diffusing species whose fluctuations are of interest versus those whose fluctuations are not. Setting α = 0.4 for our set of example systems has produced results that are almost indistinguishable from the results obtained from the Direct method. Nonetheless, there may be cases where setting α = 0.4 for a diffusing species in a given system fails to recover fluctuations. For example, when the condition Xig(t) − Xif(t) ≤ α · (Xig(t) + Xif(t)) in Eq. 12 is constantly violated throughout the simulation interval for all neighboring subvolumes, the diffusion propensity is calculated by the Gradient strategy and the fluctuations of Si will be lost. For the above condition to be violated with α = 0.4, Xif(t) must be less than ∼42% of Xig(t). Although this situation may occur transiently during a simulation when the concentration gradient is particularly large, we consider it unlikely that such a difference in the populations of a diffusing species persists between two adjacent subvolumes throughout the simulation interval. Thus, we believe that a value of α = 0.4 will work for most systems. Diffusing species with large populations have fluctuations that are small compared to their mean value. These fluctuations are unlikely to produce significant changes in downstream reactions, as observed in Fig. 9, in which the Fluctuation-Gradient method, despite the reduced fluctuations in ATP due to using αATP = 0, produced a simulation ensemble that retained the same degree of fluctuations in cAMP as the Direct method. When retaining spatial fluctuations of all diffusing species in the system over the entire simulation interval is desired, an exact algorithm is more appropriate. In such cases, our method easily transitions to an exact method by setting all α-values to 1.
With the advent of new experimental techniques that allow measurement of cellular kinetics data at greater detail,97, 98, 99 the size of the system to be studied with the SSA and its spatial extension has been growing, with a consequent increase in computational requirement. Our algorithm offers a computationally efficient alternative to an exact algorithm for simulating large signaling pathways with the ability to retain the full molecular fluctuations of diffusing species that are inherent in the probability distribution of all possible microtrajectories.
ACKNOWLEDGMENTS
This work was supported by the CRCNS program (NSF and NIH) through NIH Grant Nos. RO1 AA18066 and AA16022 to K.T.B.
References
- Bloodgood B. L., Giessel A. J., and Sabatini B. L., PLoS Biol. 7, e1000190 (2009). 10.1371/journal.pbio.1000190 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Gasparini S., Losonczy A., Chen X., Johnston D., and Magee J. C., J. Physiol. (London) 580, 787 (2007). 10.1113/jphysiol.2006.121343 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Glade N., Demongeot J., and Tabony J., BMC Cell Biology 5, 23 (2004). 10.1186/1471-2121-5-23 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Harold F. M., Microbiol. Mol. Biol. Rev. 69, 544 (2005). 10.1128/MMBR.69.4.544-564.2005 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Harvey C. D., Yasuda R., Zhong H., and Svoboda K., Science 321, 136 (2008). 10.1126/science.1159675 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Howard M. and Rutenberg A. D., Phys. Rev. Lett. 90, 128102 (2003). 10.1103/PhysRevLett.90.128102 [DOI] [PubMed] [Google Scholar]
- Lee S.-J. R., Escobedo-Lozoya Y., Szatmari E. M., and Yasuda R., Nature (London) 458, 299 (2009). 10.1038/nature07842 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Majewska A., Brown E., Ross J., and Yuste R., J. Neurosci. 20, 1722 (2000). [DOI] [PMC free article] [PubMed] [Google Scholar]
- Mayford M., Curr. Opin. Neurobiol. 17, 313 (2007). 10.1016/j.conb.2007.05.001 [DOI] [PubMed] [Google Scholar]
- Nicolis G. and Prigogine I., Proc. Natl. Acad. Sci. U.S.A. 68, 2102 (1971). 10.1073/pnas.68.9.2102 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Rashevsky N., Bull. Math. Biol. 2, 15 (1940). 10.1007/BF02478028 [DOI] [Google Scholar]
- Shapiro L. and Losick R., Cell 100, 89 (2000). 10.1016/S0092-8674(00)81686-4 [DOI] [PubMed] [Google Scholar]
- Turing A. M., Philos. Trans. R. Soc. London, Ser. B 237, 37 (1952). 10.1098/rstb.1952.0012 [DOI] [Google Scholar]
- Zhong H., Sia G.-M., Sato T. R., Gray N. W., Mao T., Khuchua Z., Huganir R. L., and Svoboda K., Neuron 62, 363 (2009). 10.1016/j.neuron.2009.03.013 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Blake W. J., Kaern M., Cantor C. R., and Collins J. J., Nature (London) 422, 633 (2003). 10.1038/nature01546 [DOI] [PubMed] [Google Scholar]
- Elowitz M. B., Levine A. J., Siggia E. D., and Swain P. S., Science 297, 1183 (2002). 10.1126/science.1070919 [DOI] [PubMed] [Google Scholar]
- Longo D. and Hasty J., Mol. Syst. Biol. 2, 64 (2006). 10.1038/msb4100110 [DOI] [PMC free article] [PubMed] [Google Scholar]
- McAdams H. H. and Arkin A., Trends Genet. 15, 65 (1999). 10.1016/S0168-9525(98)01659-X [DOI] [PubMed] [Google Scholar]
- Ozbudak E. M., Thattai M., Kurtser I., Grossman A. D., and van Oudenaarden A., Nat. Genet. 31, 69 (2002). 10.1038/ng869 [DOI] [PubMed] [Google Scholar]
- Paulsson J., Phys. Life Rev. 2, 157 (2005). 10.1016/j.plrev.2005.03.003 [DOI] [Google Scholar]
- Raser J. M. and O’Shea E. K., Science 309, 2010 (2005). 10.1126/science.1105891 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Shahrezaei V. and Swain P. S., Curr. Opin. Biotechnol. 19, 369 (2008). 10.1016/j.copbio.2008.06.011 [DOI] [PubMed] [Google Scholar]
- Gillespie D. T., J. Comput. Phys. 22, 403 (1976). 10.1016/0021-9991(76)90041-3 [DOI] [Google Scholar]
- Gillespie D. T., J. Phys. Chem. 81, 2340 (1977). 10.1021/j100540a008 [DOI] [Google Scholar]
- Elf J. and Ehrenberg M., Syst. Biol. 1, 230 (2004). 10.1049/sb:20045021 [DOI] [PubMed] [Google Scholar]
- Erban R. and Chapman S. J., Phys. Biol. 4, 16 (2007). 10.1088/1478-3975/4/1/003 [DOI] [PubMed] [Google Scholar]
- Isaacson S. A. and Peskin C. S., SIAM J. Sci. Comput. (USA) 28, 47 (2006). 10.1137/040605060 [DOI] [Google Scholar]
- McQuarrie D. A., J. Appl. Probab. 4, 413 (1967). 10.2307/3212214 [DOI] [Google Scholar]
- Qu Z., Garfinkel A., Weiss J. N., and Nivala M., Prog. Biophys. Mol. Biol. 107, 21 (2011). 10.1016/j.pbiomolbio.2011.06.004 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Andrews S. S., Addy N. J., Brent R., and Arkin A. P., PLOS Comput. Biol. 6, e1000705 (2010). 10.1371/journal.pcbi.1000705 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Andrews S. S. and Bray D., Phys. Biol. 1, 137 (2004). 10.1088/1478-3967/1/3/001 [DOI] [PubMed] [Google Scholar]
- Byrne M., Waxham M., and Kubota Y., Neuroinformatics 8, 63 (2010). 10.1007/s12021-010-9066-x [DOI] [PMC free article] [PubMed] [Google Scholar]
- Kerr R. A., Bartol T. M., Kaminsky B., Dittrich M., Chang J.-C. J., Baden S. B., Sejnowski T. J., and Stiles J. R., SIAM J. Sci. Comput. (USA) 30, 3126 (2008). 10.1137/070692017 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Morton-Firth C. J., Shimizu T. S., and Bray D., J. Mol. Biol. 286, 1059 (1999). 10.1006/jmbi.1999.2535 [DOI] [PubMed] [Google Scholar]
- Ander M., Beltrao P., Di Ventura B., Ferkinghoff-Borg J. A. F.-B. J., Foglierini M. A. F. M., Kaplan A. A. K. A., Lemerle C. A. L. C., Tomas-Oliveira I. A. T.-O. I., and Serrano L. A. S. L., Syst. Biol. 1, 129 (2004). 10.1049/sb:20045017 [DOI] [PubMed] [Google Scholar]
- Chatterjee A. and Vlachos D., J. Comput.-Aided Mater. Des. 14, 253 (2007). 10.1007/s10820-006-9042-9
- Baras F. and Mansour M. M., Phys. Rev. E 54, 6139 (1996). 10.1103/PhysRevE.54.6139
- Gardiner C. W., Stochastic Methods: A Handbook for the Natural and Social Sciences, 4th ed. (Springer, Berlin, 2009).
- van Kampen N. G., Stochastic Processes in Physics and Chemistry, 3rd ed. (Elsevier, Amsterdam, 2007).
- Gillespie D. T., Physica A 188, 404 (1992). 10.1016/0378-4371(92)90283-V
- Erban R. and Chapman S. J., Phys. Biol. 6, 046001 (2009). 10.1088/1478-3975/6/4/046001
- Isaacson S., SIAM J. Appl. Math. 70, 77 (2009). 10.1137/070705039
- Isaacson S. A. and Isaacson D., Phys. Rev. E 80, 066106 (2009). 10.1103/PhysRevE.80.066106
- Sjoberg P., Berg O. G., and Elf J., e-print arXiv:0905.4629v1 [q-bio.QM].
- Bernstein D., Phys. Rev. E 71, 041103 (2005). 10.1103/PhysRevE.71.041103
- Cao Y., Li H., and Petzold L., J. Chem. Phys. 121, 4059 (2004). 10.1063/1.1778376
- Gibson M. A. and Bruck J., J. Phys. Chem. A 104, 1876 (2000). 10.1021/jp993732q
- Gillespie D. T., Annu. Rev. Phys. Chem. 58, 35 (2007). 10.1146/annurev.physchem.58.032806.104637
- Lok L. and Brent R., Nat. Biotechnol. 23, 131 (2005). 10.1038/nbt1054
- McCollum J. M., Peterson G. D., Cox C. D., Simpson M. L., and Samatova N. F., Comput. Biol. Chem. 30, 39 (2006). 10.1016/j.compbiolchem.2005.10.007
- Bayati B., Chatelain P., and Koumoutsakos P., J. Comput. Phys. 228, 5908 (2009). 10.1016/j.jcp.2009.05.004
- Cao Y., Gillespie D. T., and Petzold L. R., J. Chem. Phys. 124, 044109 (2006). 10.1063/1.2159468
- Chatterjee A., Mayawala K., Edwards J. S., and Vlachos D. G., Bioinformatics 21, 2136 (2005). 10.1093/bioinformatics/bti308
- Gillespie D. T., J. Chem. Phys. 115, 1716 (2001). 10.1063/1.1378322
- Gillespie D. T. and Petzold L. R., J. Chem. Phys. 119, 8229 (2003). 10.1063/1.1613254
- Harris L. A. and Clancy P., J. Chem. Phys. 125, 144107 (2006). 10.1063/1.2354085
- Haseltine E. L. and Rawlings J. B., J. Chem. Phys. 117, 6959 (2002). 10.1063/1.1505860
- Rao C. V. and Arkin A. P., J. Chem. Phys. 118, 4999 (2003). 10.1063/1.1545446
- Rathinam M. and El Samad H., J. Comput. Phys. 224, 897 (2007). 10.1016/j.jcp.2006.10.034
- Tian T. and Burrage K., J. Chem. Phys. 121, 10356 (2004). 10.1063/1.1810475
- Engblom S., Ferm L., Hellander A., and Lotstedt P., SIAM J. Sci. Comput. (USA) 31, 1774 (2009). 10.1137/080721388
- Rossinelli D., Bayati B., and Koumoutsakos P., Chem. Phys. Lett. 451, 136 (2008). 10.1016/j.cplett.2007.11.055
- Lampoudi S., Gillespie D. T., and Petzold L. R., J. Chem. Phys. 130, 094104 (2009). 10.1063/1.3074302
- Oliveira R. F., Terrin A., Di Benedetto G., Cannon R. C., Koh W., Kim M., Zaccolo M., and Blackwell K. T., PLoS ONE 5, e11725 (2010). 10.1371/journal.pone.0011725
- Rodriguez J. V., Kaandorp J. A., Dobrzynski M., and Blom J. G., Bioinformatics 22, 1895 (2006). 10.1093/bioinformatics/btl271
- Fange D., Berg O. G., Sjöberg P., and Elf J., Proc. Natl. Acad. Sci. U.S.A. 107, 19820 (2010). 10.1073/pnas.1006565107
- Iyengar K. A., Harris L. A., and Clancy P., J. Chem. Phys. 132, 094101 (2010). 10.1063/1.3310808
- Koh W. and Blackwell K. T., J. Chem. Phys. 134, 154103 (2011). 10.1063/1.3572335
- Haseltine E. L. and Rawlings J. B., J. Chem. Phys. 123, 164115 (2005). 10.1063/1.2062048
- Rathinam M., Petzold L. R., Cao Y., and Gillespie D. T., J. Chem. Phys. 119, 12784 (2003). 10.1063/1.1627296
- Jaynes E. T., Annu. Rev. Phys. Chem. 31, 579 (1980). 10.1146/annurev.pc.31.100180.003051
- Hanusse P. and Blanche A., J. Chem. Phys. 74, 6148 (1981). 10.1063/1.441005
- Stundzia A. B. and Lumsden C. J., J. Comput. Phys. 127, 196 (1996). 10.1006/jcph.1996.0168
- Dewar R. C., in Non-Equilibrium Thermodynamics and the Production of Entropy: Life, Earth, and Beyond, edited by Kleidon A. and Lorenz R. D. (Springer Verlag, Heidelberg, Germany, 2005), p. 41.
- Jaynes E. T., Phys. Rev. 108, 171 (1957). 10.1103/PhysRev.108.171
- Jaynes E. T., Phys. Rev. 106, 620 (1957). 10.1103/PhysRev.106.620
- Ghosh K., Dill K. A., Inamdar M. M., Seitaridou E., and Phillips R., Am. J. Phys. 74, 123 (2006). 10.1119/1.2142789
- Stock G., Ghosh K., and Dill K. A., J. Chem. Phys. 128, 194102 (2008). 10.1063/1.2918345
- Otten M. and Stock G., J. Chem. Phys. 133, 034119 (2010). 10.1063/1.3455333
- Gillespie D. T., J. Chem. Phys. 113, 297 (2000). 10.1063/1.481811
- Pressé S., Ghosh K., Phillips R., and Dill K. A., Phys. Rev. E 82, 031905 (2010). 10.1103/PhysRevE.82.031905
- Crooks G. E., Phys. Rev. E 60, 2721 (1999). 10.1103/PhysRevE.60.2721
- Sevick E. M., Prabhakar R., Williams S. R., and Searles D. J., Annu. Rev. Phys. Chem. 59, 603 (2008). 10.1146/annurev.physchem.58.032806.104555
- Wang G. M., Sevick E. M., Mittag E., Searles D. J., and Evans D. J., Phys. Rev. Lett. 89, 050601 (2002). 10.1103/PhysRevLett.89.050601
- Press W. H., Flannery B. P., Teukolsky S. A., and Vetterling W. T., Numerical Recipes in C: The Art of Scientific Computing, 2nd ed. (Cambridge University Press, Cambridge, 1992).
- Devore J. L., Probability and Statistics for Engineering and the Sciences, 4th ed. (Brooks/Cole, Pacific Grove, CA, 1995).
- Fano U., Phys. Rev. 72, 26 (1947). 10.1103/PhysRev.72.26
- Hoel P. G., Ann. Math. Stat. 14, 155 (1943). 10.1214/aoms/1177731457
- Seitaridou E., Inamdar M. M., Phillips R., Ghosh K., and Dill K., J. Phys. Chem. B 111, 2288 (2007). 10.1021/jp067036j
- Cao Y. and Petzold L., J. Comput. Phys. 212, 6 (2006). 10.1016/j.jcp.2005.06.012
- Das R., Esposito V., Abu-Abed M., Anand G. S., Taylor S. S., and Melacini G., Proc. Natl. Acad. Sci. U.S.A. 104, 93 (2007). 10.1073/pnas.0609033103
- Kim C., Cheng C. Y., Saldanha S. A., and Taylor S. S., Cell 130, 1032 (2007). 10.1016/j.cell.2007.07.018
- Biel M., J. Biol. Chem. 284, 9017 (2009). 10.1074/jbc.R800075200
- Constantin S. and Wray S., Endocrinology 149, 3500 (2008). 10.1210/en.2007-1508
- Sands W. A. and Palmer T. M., Cell. Signal. 20, 460 (2008). 10.1016/j.cellsig.2007.10.005
- Patel T. B., Du Z., Pierre S., Cartin L., and Scholich K., Gene 269, 13 (2001). 10.1016/S0378-1119(01)00448-6
- Giepmans B. N. G., Adams S. R., Ellisman M. H., and Tsien R. Y., Science 312, 217 (2006). 10.1126/science.1124618
- Lidke D. S. and Wilson B. S., Trends Cell Biol. 19, 566 (2009). 10.1016/j.tcb.2009.08.004
- Sung M.-H. and McNally J. G., Wiley Interdiscip. Rev.: Syst. Biol. Med. 3, 167 (2011).