Abstract
Many membrane channels and receptors exhibit adaptive, or desensitized, response to a strong sustained input stimulus. A key mechanism that underlies this response is the slow, activity-dependent removal of responding molecules to a pool which is unavailable to respond immediately to the input. This mechanism is implemented in different ways in various biological systems and has traditionally been studied separately for each. Here we highlight the common aspects of this principle, shared by many biological systems, and suggest a unifying theoretical framework. We study theoretically a class of models which describes the general mechanism and allows us to distinguish its universal from system-specific features. We show that under general conditions, regardless of the details of kinetics, molecule availability encodes an averaging over past activity and feeds back multiplicatively on the system output. The kinetics of recovery from unavailability determines the effective memory kernel inside the feedback branch, giving rise to a variety of system-specific forms of adaptive response—precise or input-dependent, exponential or power-law—as special cases of the same model.
Keywords: adaptation, feedback, signal-processing, biochemical networks
Many sensing molecules, such as membrane channels and receptors, have mechanisms of activity attenuation following exposure to strong, persistent stimulation. These responses, sometimes termed “adaptation” or “desensitization,” share a quantitative hallmark: an abrupt change in stimulus elicits a strong rapid rise in activity followed by a slower relaxation to steady state. Such responses have been studied extensively in the context of sensory systems (1) as well as cellular signaling systems (2, 3). They are thought to reflect the continuous need of a sensory system to adjust to changing external conditions while coping with limited resources, and they suggest connections to concepts such as homeostasis (4) and feedback control (5).
A widely encountered mechanism underlying adaptive response is the slow, activity-dependent modulation of the total number of molecules available to respond. This is a well-known phenomenon that characterizes a large class of biological systems and can be implemented physically in many ways; Fig. 1 illustrates voltage-gated ion channels (6, 7), bacterial chemotaxis receptors (8, 9), and G protein-coupled receptors (GPCRs) (10) as typical examples. These receptors and channels can all become temporarily unavailable to respond to the external signal, via a change of protein conformation that blocks the channel pore (ion channels), covalent modifications (chemotactic receptors), or physical removal from the cell surface (internalization of GPCRs or trafficking of some synaptic receptors) (11). In all these examples, transitions to the unavailable state depend strongly on the activity state of the molecule: they occur primarily through open channels or through ligand-bound receptors. Despite these apparent common principles, differences in morphology and context have traditionally led researchers to study these systems separately. In the present paper we highlight the common principles in the framework of a theoretical model. We study a single prototypic model unifying multiple systems and show how some of these examples emerge as special cases of the general model.
Fig. 1.

(A) Physical implementations of protein availability/unavailability transitions. Top: Voltage-gated ion channels have two functional states, conducting and non-conducting, and can rapidly switch between them in a voltage-dependent manner. In addition they can switch from the active state to non-conducting conformations in which they are unavailable to respond to voltage on the same timescale (7). Middle: Chemotactic receptors must be methylated to be available to transmit the response from the ligand to downstream processes. Demethylation, i.e. the transition to unavailability, acts only on the active receptor (8). Bottom: G protein-coupled receptors can be internalized, and thus become unavailable for binding, by a process which is sensitive to their binding state (10). (B) General 3-state kinetic scheme providing an abstract model for state-dependent (and therefore activity-dependent) inactivation. Kinetics of recovery from inactivation, denoted by the abstract term Δ, represents generally different kinetics for each physical implementation.
Viewing the ensemble of membrane proteins as a sensory envelope of the cell, and the number of active proteins as the output of this sub-system, it is intuitively clear that this mechanism can induce some form of feedback. If transitions to unavailability occur through the active state, then the higher the occupancy of this state (i.e., the higher the output), the larger the fraction of molecules that becomes unavailable (4). The unavailable population thus acts as a buffer that registers the system's past output and induces an effective feedback. Despite this understanding, many questions remain open: Are all the examples depicted in Fig. 1 quantitatively identical? While clearly sharing a common principle, they differ in the details of kinetics and timescales. One would therefore like to characterize more precisely which features are universal to all of them and which are system-specific. In terms of reverse engineering, what control circuit best describes the dynamical system representing this mechanism?
To answer these questions, we formulate a general mathematical model for activity-dependent inactivation. Special cases of this model have been introduced before, and their relation to adaptive response was discussed for specific biological systems [voltage-gated ion channels (7); bacterial chemotaxis (8)]. While admittedly oversimplified, the model captures faithfully the essence of the phenomenon. Our theoretical analysis allows a solution in the appropriate approximation and a precise mapping of the dynamic behavior onto a control circuit diagram. This enables us to identify the universal aspects of seemingly different biological systems and dynamic behaviors: the availability variable averages over past activity, obeys a bilinear control equation (12), and implements a multiplicative feedback circuit regardless of the details of its kinetics. The kinetics of recovery of unavailable molecules back to the available pool, on the other hand, determines the averaging kernel within the feedback branch and is a crucial ingredient in determining the adaptive dynamics and timescales. Exponential and power-law, exact and input-dependent adaptive responses all emerge as special cases of the same general model.
Results
Model Construction, Timescale Separation and Universality of Multiplicative Feedback.
Consider an ensemble of non-interacting molecules responding to an input signal in the following way: Each molecule can occupy one of two states, active or inactive, with transition rates between these two states, α and β, depending on the input u(t). This situation is typical of biological sensing molecules; as specific examples, one can imagine the activation of membrane receptors by binding and unbinding of external ligands or the opening and closing of ion channels in response to membrane voltage. The ensemble of input-responding molecules defines an interface of the cell with the external environment and conveys information about this environment to further downstream processes. In particular it can report this information by the concentration of active molecules x(t), which can be sensed by internal cellular mechanisms. We define this as the output of the interface sub-system; for a constant input stimulus u, the output is given as an input-dependent fraction of the total concentration of available molecules A, namely x(u) = p(u)A, with p(u) = α(u)/(α(u) + β(u)).
Adaptive response and desensitization arise when, in addition to the input-dependent states considered above, the molecules can become temporarily unavailable to respond to the input by a process which is sensitive to the input-dependent state occupancy (active/inactive). The transitions to and from availability are often, to a good approximation, independent of the input signal and depend strongly on the active/inactive occupancy; in the extreme case they occur exclusively from either the active or inactive state. For transitions exclusively from the active state, these simplifications imply the following kinetic scheme (7):
$$\text{inactive}\;\underset{\beta(u)}{\overset{\alpha(u)}{\rightleftharpoons}}\;\text{active}\;\xrightarrow{\;\gamma\;}\;\text{unavailable}\;\xrightarrow{\;\Delta\;}\;\text{inactive} \qquad [1]$$
We write the differential equations governing the dynamics in two variables, the activity x and the availability A:
$$\dot{x} = \alpha(u)\,(A-x) - \beta(u)\,x - \gamma\,x, \qquad \dot{A} = -\gamma\,x + \Delta \qquad [2]$$

where A − x is the concentration of available but inactive molecules and γ is the slow rate of transition from the active to the unavailable state.
Recovery from unavailability is denoted by the abstract term Δ, which in general can depend on the unavailable population and also on time. It will be shown below that this recovery term determines the kernel of the feedback and thus the type of adaptation in the system. Regardless of the form of Δ, however, transitions to unavailability are generally slower than the active/inactive transitions. Relying on this separation of timescales between the input-dependent and input-independent transitions, these equations can be solved in the adiabatic approximation. To leading order the slow variable A is constant and the fast variable is at equilibrium:

$$x(t) \simeq p(u(t))\,A(t), \qquad p(u) = \frac{\alpha(u)}{\alpha(u)+\beta(u)} \qquad [3]$$
The leading-order dynamics of A is obtained by inserting this solution into the second equation of Eq. 2:

$$\dot{A} = -\gamma\,p(u(t))\,A + \Delta \qquad [4]$$
Since Δ can be an arbitrary function of A or of time, but not of the input signal u(t), this equation describes a generalized bilinear control system [see supporting information (SI) Appendix] (12). The input signal, or rather an instantaneous nonlinear function of it, appears in a product term with the variable A, both to first power. In principle, after solving Eq. 4 one should insert the solution back into Eq. 3, showing that the slow variable, appropriately averaged over the fast one, multiplies the instantaneous input/output relation p(u(t)) to give the system output.
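To make the timescale-separation argument concrete, the following minimal Python sketch (ours, not part of the original analysis; the forms of α(u) and β(u), the step protocol, and all parameter values are illustrative assumptions, chosen so that p(u) = u and the slow rates match those quoted for Fig. 2 C and D) integrates the full two-variable system of Eq. 2 side by side with the reduced bilinear equation of Eqs. 3 and 4, using first-order recovery Δ = δ(1 − A):

```python
import numpy as np

# Minimal sketch (assumed, illustrative rates): full kinetics of Eq. 2 vs. the
# adiabatic/bilinear reduction of Eqs. 3-4, with first-order recovery delta*(1-A).
alpha = lambda u: 5.0 * u            # fast activation rate (assumed form)
beta  = lambda u: 5.0 * (1.0 - u)    # fast deactivation rate (assumed form)
gamma, delta = 0.4, 0.2              # slow inactivation and recovery rates
p = lambda u: alpha(u) / (alpha(u) + beta(u))   # instantaneous active fraction

u_of_t = lambda t: 0.33 if t < 50.0 else 0.79   # step input u1 -> u2 at t = 50

dt, T = 1e-3, 150.0
x, A = p(0.33) * 1.0, 1.0            # full model (Eq. 2), started at fast equilibrium
A_ad = 1.0                           # reduced model (Eq. 4)
gap = 0.0
for t in np.arange(0.0, T, dt):
    u = u_of_t(t)
    dx = alpha(u) * (A - x) - beta(u) * x - gamma * x      # Eq. 2, fast variable
    dA = -gamma * x + delta * (1.0 - A)                    # Eq. 2, slow variable
    x, A = x + dx * dt, A + dA * dt
    A_ad += (-gamma * p(u) * A_ad + delta * (1.0 - A_ad)) * dt   # Eq. 4
    gap = max(gap, abs(A - A_ad))

print(f"max |A_full - A_adiabatic| = {gap:.3g}")   # small when alpha, beta >> gamma, delta
print(f"steady-state output x      = {x:.3f}")     # input-dependent (cf. Eq. 6)
```

With fast rates roughly an order of magnitude above the slow ones, the availability computed from the reduced equation tracks the full model closely, and the adapted output depends on the input, as discussed below.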
These properties are universal to the entire class of models considered here, regardless of the form of recovery kinetics Δ. Therefore they are common to all systems depicted in Fig. 1 and many similar ones. The linearity of the input-dependent transitions in Eq. 2 is not an essential feature of the model; the only requirement is that the responding molecules equilibrate rapidly to an input-dependent proportion p(u(t)) and that transitions to unavailability occur slowly and exclusively through the active state. A parallel construction allowing transitions to unavailability only through the inactive state leads to universal behavior for positive feedback (see Appendix: Positive Feedback).
The system-specific features of this class of models are determined by the kinetics of recovery from unavailability, as represented by Δ. In the following we discuss three cases: first-order recovery with conservation of the total number of molecules in the system; zero-order recovery, where the system is replenished with a new supply of available molecules at a constant rate; and history-dependent recovery, where the escape from the unavailable state is characterized by a broad distribution of timescales. We show that they result in stimulus-dependent exponential adaptation, exact (stimulus-independent) exponential adaptation, and power-law adaptation, respectively.
Recovery Kinetics and System-Specific Adaptive Response.
Stimulus-dependent exponential adaptation.
If the total number of molecules is conserved and the recovery from unavailability is first-order, then Δ = δ(1 − A) (using normalized total concentration units). This model has been studied to describe inactivation of voltage-gated ion channels (6, 7). The equation for A is then linear, with the solution

$$A(t) = A(0)\,e^{-\Gamma(0,t)} + \delta\int_0^{t} e^{-\Gamma(t',t)}\,dt' \qquad [5]$$
where Γ(t₁,t₂) = ∫_{t₁}^{t₂} dt′/τ_s(t′), τ_s(t′)⁻¹ = δ + γp(u(t′)) is the slow (stimulus-dependent) timescale of the system, and p(u(t)) is defined as above. Eqs. 3 and 5 then provide the leading-order solution for a general input signal u(t) under the assumption of timescale separation (see SI Appendix for numerical tests of the approximation in this case). For constant u it coincides with the exact solution to give
$$A_{ss} = \frac{\delta}{\delta+\gamma\,p(u)}, \qquad x_{ss} = p(u)\,A_{ss} = \frac{\delta\,p(u)}{\delta+\gamma\,p(u)} \qquad [6]$$
showing that the steady state after adaptation reflects the input stimulus value. In response to a step at t = 0 changing the input from u₁ to u₂, the available population A relaxes smoothly to a new steady-state value, while the activity x shows a sharp transient followed by an adaptive relaxation to steady state:
$$A(t) = \frac{\delta}{\delta+\gamma p_2} + \left(\frac{\delta}{\delta+\gamma p_1}-\frac{\delta}{\delta+\gamma p_2}\right)e^{-t/\tau_2}, \qquad x(t) = p_2\,A(t) \qquad [7]$$
where p₁₍₂₎ = p(u₁₍₂₎) and τ₂⁻¹ = τ_s⁻¹(p₂). Fig. 2 shows an example of the response in terms of A(t) and x(t) to a step-function input. It is interesting to note that the response of this model to increasing and decreasing step input stimuli is asymmetric, as often observed in sensory systems. This property stems simply from the fact that the exponential decay is characterized by a timescale which depends on the local stimulus value (p₁ = p(u₁) and p₂ = p(u₂) in Eq. 7), which in turn is an increasing function of the input stimulus.
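As a quick numerical check (our arithmetic, using the steady-state expression reconstructed in Eq. 6 together with the parameter values quoted for Fig. 2 C and D: p₁ = 0.33, p₂ = 0.79, γ = 0.4, δ = 0.2), the adapted output before and after an upward step is

$$x_{ss}^{(1)} = \frac{\delta p_1}{\delta+\gamma p_1} = \frac{0.2\times 0.33}{0.2+0.4\times 0.33} \approx 0.20, \qquad x_{ss}^{(2)} = \frac{\delta p_2}{\delta+\gamma p_2} = \frac{0.2\times 0.79}{0.2+0.4\times 0.79} \approx 0.31,$$

so the post-adaptation output retains a dependence on the input, in contrast to the exact adaptation discussed below, and the relaxation times τ_s = (δ + γp)⁻¹ ≈ 3.0 and 1.9 differ between the two input levels, which is the source of the asymmetry between upward and downward steps.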
Fig. 2.
Exponential adaptive responses to increasing (left) or decreasing (right) step input. A(t), the availability (red dashed lines), and x(t), the system output (black solid lines), are shown for the two cases discussed in the text. (A and B) Input signals u(t). Parameter values: p₁ = 0.33, p₂ = 0.79. (C and D) Input-dependent exponential adaptation with γ = 0.4, Δ = 0.2(1 − A). (E and F) Exact exponential adaptation with γ = 4, Δ = 0.5.
To better understand the origin of the adaptive response, the role of the availability variable A, and its relation to the system output x, it is helpful, instead of solving the equations, to integrate directly the second equation of Eq. 2 for this case:

$$A(t) = 1 - \bigl(1-A(0)\bigr)e^{-\delta t} - \gamma\int_0^{t} e^{-\delta(t-t')}\,x(t')\,dt' \qquad [8]$$
In the absence of output (x(t) = 0), the availability A relaxes to its steady-state value of 1. Otherwise, it encodes the history of past output x(t′) for t′ < t, filtered by an exponential with a memory time determined by the recovery rate δ. This illustrates explicitly how the system output x is a function of its past history: for t ≫ 1/δ, in the adiabatic approximation,

$$x(t) \simeq p(u(t))\left[1 - \gamma\int_0^{t} e^{-\delta(t-t')}\,x(t')\,dt'\right] \qquad [9]$$
revealing the specific form of the multiplicative feedback: the input stimulus u(t) enters through the instantaneous response function p(u), and this response function is multiplied by a scale factor, A(t), which encodes an integral over the output history. This result can be represented graphically as the circuit diagram shown in Fig. 3. The input signal u(t) goes through an instantaneous, generally nonlinear response p(u(t)); this is then multiplied by the availability variable A, which is obtained from the system output x(t), creating a feedback branch with an integration over past output. We emphasize that although we have illustrated the response to a step input stimulus, Eqs. 8 and 9 hold in the adiabatic approximation for any time-dependent input u(t); it is this generality with respect to the input which enables one to map the dynamical system solution onto a circuit diagram.
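The circuit of Fig. 3 can also be written down directly as a few lines of code. The sketch below (an illustration of ours, not an implementation from the paper; the linear choice p(u) = u, the step input, and the parameter values, taken to match those quoted for Fig. 2 C and D, are assumptions) realizes Eq. 9 as a discrete-time filter in which an exponentially weighted memory of past output scales the instantaneous response:

```python
import numpy as np

def adaptive_output(u, p, gamma=0.4, delta=0.2, dt=0.01):
    """Discrete-time version of the multiplicative-feedback circuit (Eq. 9):
    x[n] = p(u[n]) * A[n], with A driven by an exponentially filtered
    memory of past output (kernel exp(-delta*t), cf. Eq. 8)."""
    A, mem, x_hist = 1.0, 0.0, []      # mem = integral of exp(-delta*(t-t')) * x(t') dt'
    for un in u:
        x = p(un) * A                  # instantaneous response scaled by availability
        mem += (-delta * mem + x) * dt # leaky integration of past output
        A = 1.0 - gamma * mem          # feedback branch of Fig. 3
        x_hist.append(x)
    return np.array(x_hist)

# Example: step input at t = 50; p(u) assumed linear for illustration
t = np.arange(0.0, 100.0, 0.01)
u = np.where(t < 50.0, 0.33, 0.79)
x = adaptive_output(u, p=lambda v: v)
print(x[4999], x[5000], x[-1])   # pre-step steady state, post-step peak, adapted value
```

Printing a value just before the step, just after it, and at the end of the run shows the hallmark of this feedback: a sharp transient followed by relaxation to an input-dependent steady state.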
Fig. 3.
Schematic diagram of circuit universally implemented by activity-dependent inactivation. The input stimulus u(t) goes through an instantaneous response function p(u(t)), generally nonlinear and increasing. The output of this function multiplies the feedback branch, which is an average over output past history with an appropriate kernel. System-specific properties are reflected by different integral operators in the feedback branch and possibly addition of initial conditions or target values.
The adaptive response described by this case is often seen in sensory systems. For example, neural firing rate generally responds strongly to abrupt changes in the input stimulus, and then relaxes to a steady state; however, the steady state response is also sensitive to the (constant) input stimulus. This type of response provides the possibility of coding slowly varying stimuli while at the same time maintaining sensitivity to transients.
Exact exponential adaptation.
In some cases of interest, the recovery Δ proceeds at a constant rate. This happens if unavailability is implemented by actual degradation of molecules, which are then replenished at a constant rate (13), or if recovery is catalyzed by an enzyme of extremely low and approximately constant concentration (8). It will be shown below that in the framework of our general model, such recovery kinetics result in exact exponential adaptation, i.e., relaxation to a steady-state which is input independent.
Exact adaptation was studied extensively in the context of bacterial chemotaxis, where it was observed experimentally. Theoretically, different mechanisms can lead to an exact adaptive response (see SI Appendix). The idea of activity-dependent kinetics as a mechanism for implementing effective feedback in chemotaxis was highlighted in ref. 8 and implemented in a model of the chemotactic receptor. In its simplified version, the model consists of a receptor that can occupy a bound or unbound state and, independently, a modified (methylated) or unmodified state. Only methylated receptors can become active, defining them as the available population, while the demethylation enzyme acts only on the active receptor, providing the exclusively state-dependent transition to unavailability. Assuming that the reverse reaction is zero-order (catalyzed by a rare enzyme), this model is equivalent to Eq. 2 with a zero-order recovery from unavailability, i.e., Δ = δ. The explicit solutions of these equations are Eqs. 3 and 5, with the slightly different definition τ_s(t′)⁻¹ = γp(u(t′)). The functional nature of the system is, however, different: its steady-state solutions are independent of the input stimulus. One easily finds
$$A_{ss} = \frac{\delta}{\gamma\,p(u)}, \qquad x_{ss} = p(u)\,A_{ss} = \frac{\delta}{\gamma} \qquad [10]$$
showing that exact adaptation arises because the adaptive variable, the availability A(t), representing the concentration of receptors available to respond to ligand, exactly compensates the input dependence of x for any value of u. This property in turn follows from the zero-order kinetics of the recovery from unavailability. In response to a step-function input from u₁ to u₂, once again A(t) relaxes smoothly to its new steady state while x(t) displays exact adaptation:
$$A(t) = \frac{\delta}{\gamma p_2} + \left(\frac{\delta}{\gamma p_1}-\frac{\delta}{\gamma p_2}\right)e^{-t/\tau_2}, \qquad x(t) = p_2\,A(t) = \frac{\delta}{\gamma} + \frac{\delta}{\gamma}\left(\frac{p_2}{p_1}-1\right)e^{-t/\tau_2} \qquad [11]$$
where p₁₍₂₎ = p(u₁₍₂₎) and τ₂⁻¹ = τ_s⁻¹(p₂). As before, the exponential decay is asymmetric, with a faster decay for an increasing step, as observed in experiments (14) and in more elaborate models of chemotaxis. These properties are illustrated in Fig. 2 E and F.
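As a concrete check (our arithmetic, based on the reconstructed Eq. 10 and the parameter values quoted for Fig. 2 E and F: γ = 4, δ = 0.5, p₁ = 0.33, p₂ = 0.79), the adapted output is the same before and after the step,

$$x_{ss} = \frac{\delta}{\gamma} = \frac{0.5}{4} = 0.125,$$

while the availability shifts from A_ss = δ/(γp₁) ≈ 0.38 to δ/(γp₂) ≈ 0.16 to compensate, and the relaxation times τ₂ = (γp)⁻¹ ≈ 0.76 and 0.32 again make the response to an upward step faster than to a downward one.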
To map the dynamical system onto a control circuit, we write the integral equation relating the activity and the availability, in analogy to Eq. 8:

$$A(t) = A(0) - \gamma\int_0^{t}\left(x(t') - \frac{\delta}{\gamma}\right)dt' \qquad [12]$$
Here the availability variable encodes an error signal: the integrated deviation of the output from its steady-state value δ/γ, with the integral having infinite memory (no decaying kernel). Previous work identified integral control in models of bacterial chemotaxis through the proportionality between Ȧ and the deviation of the output from its steady state (15). Here we can go beyond this relation to a full and explicit expression of the output in terms of the system's past output:

$$x(t) \simeq p(u(t))\left[A(0) - \gamma\int_0^{t}\left(x(t') - \frac{\delta}{\gamma}\right)dt'\right] \qquad [13]$$
This enables us also to draw the entire equivalent circuit, which is a special case of the universal circuit of Fig. 3 with a unit integration kernel. Note that in this case an additional input, the initial condition A(0), is also required, because the memory in this system is infinite and the initial conditions do not decay.
In closing this section we note that the bacterial chemotactic system is the focus of much theoretical research (for a review, see ref. 9). Exact adaptation is only one of its characteristics; others include a remarkable combination of sensitivity and dynamic range of response. Our simplified model is designed to describe the essence of state-dependent inactivation and adaptive response within a comparative framework and does not necessarily reproduce other properties of particular systems. We show in the SI Appendix how the concept of availability, here modeled as a binary state (available/unavailable), can be extended to a cascade of states with a graded degree of availability. This extension, corresponding to multiple methylation states of chemotactic receptors, endows the model with the wide dynamic range of response seen in reality.
Power-law adaptation.
While a three-state model can often adequately approximate state-dependent inactivation, this is not always the case. The unavailable “state” in reality often represents a more complex set of states that are degenerate with respect to their functionality. In some types of ion channels, for example, prolonged experiments have revealed a broad range of timescales characterizing the recovery of channels from unavailability, suggesting a complex internal structure of the unavailable manifold of states (7, 16). The activity-dependent regulation of AMPA receptors in a post-synaptic neuron, as another example, involves several processes by which the receptor can move between availability and unavailability at the synapse, such as lateral diffusion and receptor internalization (11). Recovery from several different states with different timescales will generally display nonexponential relaxation (17).
Nonexponential adaptive responses have been observed and studied from a signal-processing point of view (18–20). This phenomenon is expected to be related to nonexponential relaxation processes at the molecular level. The relationship between these two levels has been investigated in the context of particular models, for example for ion channel kinetics describing an internal structure of the unavailable manifold (21, 22). In this context an effective nonlinear low-dimensional model was suggested to account for molecular memory and adaptive response (23). In an alternative approach the complex internal state-space structure can be represented effectively by a single state with non-Markovian kinetics (24, 25).
Here we use the latter approach to show how a power-law adaptive response arises as an additional special case of our general model, with non-Markovian recovery from unavailability. We characterize the unavailable manifold by a non-exponential residence time distribution (RTD), ψ(t), without specifying the manifold's internal structure that gives rise to this distribution. This provides a different special case of the recovery term Δ, leaving the basic structure of state-dependent inactivation the same as in the previous cases. The unavailable pool still serves as a register of past output, but with a longer memory.
The dynamics of the system is described by Eq. 2 with Δ = ∫₀ᵗ K(t − t′)(1 − A(t′)) dt′, reflecting the dependence of the recovery kinetics not only on the current value of A(t) but also on its history. The memory kernel K(t) is related to ψ(t) through their Laplace transforms (24)
$$\tilde{K}(s) = \frac{s\,\tilde{\psi}(s)}{1-\tilde{\psi}(s)} \qquad [14]$$
K(t) thus accounts for general kinetics of return from the unavailable manifold; the special case of Markovian kinetics is recovered with K(t) = δ·δ(t) and ψ(t) = δ exp(−δt) [δ(t) is Dirac's delta function].
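The Markovian limit of Eq. 14 can be verified symbolically; the short check below (ours; the Mittag-Leffler-type form used as a heavy-tailed example is an assumption for illustration, not the distribution analyzed in the SI Appendix) shows that an exponential RTD collapses the kernel to a constant rate, whereas a heavy-tailed RTD leaves a long-memory, power-law kernel:

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
d, nu = sp.symbols('delta nu', positive=True)

# Exponential RTD: psi(t) = delta*exp(-delta*t)  ->  the kernel of Eq. 14 should reduce to delta
psi_exp = sp.laplace_transform(d * sp.exp(-d * t), t, s, noconds=True)
K_exp = sp.simplify(s * psi_exp / (1 - psi_exp))       # Eq. 14
print(K_exp)                                           # -> delta (Markovian limit)

# Heavy-tailed choice (assumed, for illustration): Mittag-Leffler-type RTD with
# Laplace transform psi(s) = 1/(1 + (s/delta)**nu), 0 < nu < 1
psi_ml = 1 / (1 + (s / d)**nu)
K_ml = sp.simplify(s * psi_ml / (1 - psi_ml))
print(K_ml)                                            # equivalent to delta**nu * s**(1-nu): long memory
```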
Under the conditions of timescale separation, to leading order the population of molecules in the active state, x, still follows the availability variable A instantaneously: x(t) ≈ p(u(t))·A(t). Substituting this solution into the slower equation should in principle provide the leading-order solution for A. In the general non-Markovian case, however, the resulting equation is not solvable for a general time-dependent input signal. Focusing on the response to a step stimulus from u₁ to u₂, we find the solution by Laplace transform:
$$\tilde{A}(s) = \frac{A(0) + \tilde{K}(s)/s}{s + \gamma p_2 + \tilde{K}(s)} \qquad [15]$$
where Ã(s) is the Laplace transform of A(t) and A(0) is its initial condition in the time domain. For Markovian kinetics, K̃(s) = δ, and an inverse Laplace transform of Eq. 15 reduces it to Eq. 8.
To illustrate the effect of a nonexponential RTD, we analyzed a special case in which the tail of the distribution is a power law (see SI Appendix). The resulting step response is depicted in Fig. 4, showing that the adaptive response is indeed nonexponential and depends on the details of the RTD. The steady-state value in this case is independent of the input value (see SI Appendix for details). This is a nonexponential form of exact adaptation, with the time to reach steady state generally much longer and with no characteristic timescale. We remark that the exponential exact adaptation that stems from zero-order kinetics is not encompassed in the general description developed in this section, because in that case the residence time distribution ψ(t) is not well-defined.
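The same behavior can be illustrated at the population level with a short Monte Carlo sketch (ours, not the calculation behind Fig. 4; the Pareto residence-time distribution and all parameter values are assumptions chosen for illustration). Each of N molecules inactivates from the active state at rate γp(u) and, on entering the unavailable manifold, draws a residence time from a power-law-tailed distribution; the output fraction x(t) = p(u)·A(t) then relaxes slowly, with no single characteristic timescale:

```python
import numpy as np

rng = np.random.default_rng(0)
N, gamma, p2 = 5000, 1.0, 0.5       # population size, inactivation rate, post-step p(u) (assumed)
nu, t_min = 0.5, 0.1                # power-law tail exponent and scale of residence times (assumed)
dt, T = 0.01, 120.0

t_unavail = np.zeros(N)             # time remaining in the unavailable manifold (<= 0 means available)
ts = np.arange(0.0, T, dt)
x_trace = np.empty(len(ts))

for i, t in enumerate(ts):
    avail = t_unavail <= 0.0
    # available molecules inactivate (through the active state) with probability gamma*p2*dt
    hit = avail & (rng.random(N) < gamma * p2 * dt)
    # heavy-tailed residence times: t_min * U**(-1/nu) has a Pareto tail with exponent nu
    t_unavail[hit] = t_min * (1.0 - rng.random(hit.sum())) ** (-1.0 / nu)
    t_unavail[~avail] -= dt
    x_trace[i] = p2 * avail.mean()  # output x(t) = p(u) * A(t)

for t_probe in (1.0, 10.0, 100.0):
    print(f"x(t = {t_probe:5.1f}) = {x_trace[int(t_probe / dt)]:.3f}")   # slow, scale-free decay
```

With a tail exponent ν < 1 the printed values keep creeping downward over two decades in time, in contrast to the exponential cases above, which settle within a few multiples of τ_s.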
Fig. 4.
(A) Generalization of the 3-state kinetic scheme to multiple unavailable states. (B) Power-law adaptive response to an increasing step input stimulus, derived for a residence time distribution with a power-law tail. The parameter ν varies from ν = 1, representing the purely exponential case, to ν < 1, giving a power-law decrease. Parameter values: A(0) = 0.7, p₁ = 0.1, p₂ = 0.5 (see SI Appendix for details of this calculation).
Discussion
Membrane proteins commonly exhibit adaptive response (desensitization) to sustained stimuli, a widely encountered and well-studied phenomenon. Underlying this response one often finds, in addition to the fast stimulus-responding property of the proteins, a slow modulation in the total number of proteins available to respond rapidly to the stimulus. These dynamics are implemented in different biological systems in a variety of ways, ranging from additional protein conformations to changes in receptor location, internalization, degradation, and more. Our aim in this work was to construct a unified theoretical framework for this “activity-dependent inactivation” which, while admittedly abstract, captures the essence of the phenomenon and by virtue of its simplicity enables a deeper understanding. In particular we sought to make a more quantitative and precise connection to feedback circuit design, and to identify universal features and distinguish them from system-specific ones.
We studied a class of kinetic models with three states—active, inactive and unavailable—and two types of transition: stimulus-dependent transitions between active/inactive are rapid and reflect the instantaneous stimulus value. Stimulus-independent transitions to and from unavailability are slow and occur exclusively through either the active or inactive state, providing an effective integration over the system's past output with an appropriate temporal kernel. Using separation of timescales we showed that a universal feature of this class of models is the emergence of multiplicative feedback: system output is composed of an instantaneous input/output relation multiplied by the slowly modulated availability variable A(t). This, in turn, is the solution of a bilinear control equation, whose details depend on recovery kinetics.
The general description in terms of bilinear control and multiplicative feedback calls for further study of these special engineering design principles in the biological context (12, 26). It suggests that the study of the signal-processing properties of such biological systems should go beyond the frequency response obtained from linear systems analysis (27). Systems that combine instantaneous nonlinear response functions (“static nonlinearities”) with multiplicative feedback have been studied as models for retinal gain control (28).
How past activity is averaged and encoded by the slow availability variable is a system-specific property that depends on the kinetics of recovery from unavailability. This, in turn, determines the functional form of adaptation to steady state following an abrupt change in input. The transient adaptive response is exponential in time if the unavailable state is truly a single state with a single rate of recovery. More complex, generally nonexponential adaptation follows if the unavailable state is a coarse-grained representation of a manifold of many sub-states; the structural principle that induces the feedback property in the system remains in this case as well. Using an extension of the Master equation applicable also to non-Markovian kinetics, we derived an explicit relation between the residence time distribution in the unavailable manifold and the form of adaptation to a step signal. In one example, we showed how power-law adaptation is obtained from a broad distribution of recovery times from unavailability. The steady-state response is also a system-specific property that can be either input-dependent, as in the case of first-order kinetics, or input-independent (“exact adaptation”), as in the case of zero-order recovery or in the example of power-law kinetics we have considered.
Taken together, these results show how different functional forms of adaptation can all be implemented as special cases of the same fundamental mechanism. This observation points to the relative ease with which different forms of adaptive response can be evolved from one another. The flexible functionality of biochemical signal transduction has been emphasized and suggested to have evolutionary advantage in the context of photoreceptor amplification (29). Our results indicate that adaptive response, too, can flexibly change between different forms, for example by a change in the concentration of an enzyme catalyzing a reaction, while retaining the basic feature of feedback in its state-space structure.
While biological feedback is most often discussed in the strict sense of direct interaction between upstream and downstream components of a pathway (30, 31), in the wide sense it is a flow of information from a system output back to its input. Such a flow can be an emergent property of system dynamics without physical interaction between output and input. The model analyzed here provides a simple and applicable explicit example of such emergent feedback. The implicit encoding of a feedback circuit by a dynamical system is a topic of much theoretical interest (32). In light of the results presented here, it would be interesting to consider the application of this theory to a broader class of systems and, in particular, to systems with multiplicative feedback and various forms of adaptive response.
Acknowledgments.
We thank Shimon Marom, Erez Braun, and Ron Meir for many discussions and for remarks on the manuscript and Alexander Iomin, Nahum Shimkin, Yuval Elhanati, and Avner Wallach for discussions. This research was supported by the Yeshaya Horowitz Association through the Center for Complexity Science.
Appendix: Positive Feedback
State-dependent inactivation can also support positive feedback. If transitions to the unavailable state occur selectively through the inactive state, high activity tips the balance towards recovery of unavailable molecules over a longer timescale, thus inducing further activity. These dynamics are described by the following three-state scheme:
$$\text{active}\;\underset{\alpha(u)}{\overset{\beta(u)}{\rightleftharpoons}}\;\text{inactive}\;\xrightarrow{\;\gamma\;}\;\text{unavailable}\;\xrightarrow{\;\Delta\;}\;\text{inactive} \qquad [16]$$
The dynamical equations then read
$$\dot{x} = \alpha(u)\,(A-x) - \beta(u)\,x, \qquad \dot{A} = -\gamma\,(A-x) + \Delta \qquad [17]$$
Assuming Δ = δ(1 − A), a similar analysis yields in this case

$$A(t) = \frac{\delta}{\delta+\gamma} + \left(A(0)-\frac{\delta}{\delta+\gamma}\right)e^{-(\delta+\gamma)t} + \gamma\int_0^{t} e^{-(\delta+\gamma)(t-t')}\,x(t')\,dt' \qquad [18]$$
The form of Eq. 18 is reminiscent of Eq. 8 but, as expected, differs in the sign of the feedback. In Eq. 8 the activity-weighted integral is subtracted from the availability variable, such that the long-term response of the system is opposite in sign to its immediate response, forming negative feedback. Eq. 18 shows the opposite behavior: the feedback branch is added to the availability variable, resulting in an enhancement of the system's initial response*. This form of adaptation is obtained from the general control scheme of Fig. 3 with an exponential integration kernel, with the integral term added rather than subtracted.
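A short numerical sketch (ours; the rates and the step protocol are assumed values) makes the sign of the feedback visible: integrating the slow equation of Eq. 17 with Δ = δ(1 − A) in the adiabatic limit, a step increase of the input produces an immediate jump in the output that is then amplified further as the availability grows, instead of adapting back down:

```python
import numpy as np

gamma, delta = 0.4, 0.2          # assumed slow rates
p1, p2 = 0.33, 0.79              # instantaneous active fractions before/after the step (assumed)
dt, T_step, T = 0.01, 50.0, 150.0

A, trace = delta / (delta + gamma * (1.0 - p1)), []   # start at the pre-step steady state
for t in np.arange(0.0, T, dt):
    p = p1 if t < T_step else p2
    x = p * A                                         # adiabatic output, x = p(u) * A
    A += (-gamma * (A - x) + delta * (1.0 - A)) * dt  # Eq. 17 slow variable, with recovery delta*(1-A)
    trace.append(x)

trace = np.array(trace)
i0 = int(T_step / dt)
print(f"pre-step output     = {trace[i0 - 1]:.3f}")
print(f"just after the step = {trace[i0 + 1]:.3f}")   # immediate jump from the p(u) factor
print(f"long-time output    = {trace[-1]:.3f}")       # grows further: positive feedback
```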
Footnotes
The authors declare no conflict of interest.
This paper is a PNAS Direct Submission. W.B. is a guest editor invited by the Editorial Board.
This article contains supporting information online at www.pnas.org/cgi/content/full/0902146106/DCSupplemental.
*Note that although positive feedback can potentially cause system instability, this system is stable because the variables represent population fractions and are naturally bounded by 1.
References
1. Wark B, Lundstrom BN, Fairhall A. Sensory adaptation. Curr Opin Neurobiol. 2007;17:423–429. doi: 10.1016/j.conb.2007.07.001.
2. Tyson JJ, Chen KC, Novak B. Sniffers, buzzers, toggles and blinkers: Dynamics of regulatory and signaling pathways in the cell. Curr Opin Cell Biol. 2003;15:221–231. doi: 10.1016/s0955-0674(03)00017-6.
3. Brandman O, Meyer T. Feedback loops shape cellular signals in space and time. Science. 2008;322:390–395. doi: 10.1126/science.1160617.
4. Davis GW. Homeostatic control of neural activity: From phenomenology to molecular design. Annu Rev Neurosci. 2006;29:307–323. doi: 10.1146/annurev.neuro.28.061604.135751.
5. Csete ME, Doyle JC. Reverse engineering of biological complexity. Science. 2002;295:1664–1669. doi: 10.1126/science.1069981.
6. Marom S, Levitan IB. State-dependent inactivation of Kv3 potassium channels. Biophys J. 1994;67:579–589. doi: 10.1016/S0006-3495(94)80517-X.
7. Marom S. Slow changes in the availability of voltage-gated ion channels: Effects on the dynamics of excitable membranes. J Membr Biol. 1998;161(2):105–113. doi: 10.1007/s002329900318.
8. Barkai N, Leibler S. Robustness in simple biochemical networks. Nature. 1997;387:913–917. doi: 10.1038/43199.
9. Tindall MJ, Porter SL, Maini PK, Gaglia G, Armitage JP. Overview of mathematical approaches used to model bacterial chemotaxis I: The single cell. Bull Math Biol. 2008;70:1525–1569. doi: 10.1007/s11538-008-9321-6.
10. Ferguson SSG, Caron MG. G protein-coupled receptor adaptation mechanisms. Semin Cell Dev Biol. 1998;9:119–127. doi: 10.1006/scdb.1997.0216.
11. Triller A, Choquet D. Surface trafficking of receptors between synaptic and extrasynaptic membranes: And yet they do move! Trends Neurosci. 2005;28:133–139. doi: 10.1016/j.tins.2005.01.001.
12. Mohler RR. Bilinear Control Processes. Mathematics in Science and Engineering, Vol. 106. New York: Academic; 1973.
13. Csikász-Nagy A, Soyer OS. Adaptive dynamics with a single two-state protein. J R Soc Interface. 2008;5:S41–S47. doi: 10.1098/rsif.2008.0099.focus.
14. Springer MS, Goy MF, Adler J. Protein methylation in behavioral control mechanisms and in signal transduction. Nature. 1979;280:279–284. doi: 10.1038/280279a0.
15. Yi TM, Huang Y, Simon MI, Doyle J. Robust perfect adaptation in bacterial chemotaxis through integral feedback control. Proc Natl Acad Sci USA. 2000;97:4649–4653. doi: 10.1073/pnas.97.9.4649.
16. Toib A, Lyakhov V, Marom S. Interaction between duration of activity and time course of recovery from slow inactivation in mammalian brain Na+ channels. J Neurosci. 1998;18:1893–1903. doi: 10.1523/JNEUROSCI.18-05-01893.1998.
17. Nadler W, Huang T, Stein DL. Random walks on random partitions in one dimension. Phys Rev E. 1996;54:4037–4047. doi: 10.1103/physreve.54.4037.
18. Thorson J, Biederman-Thorson M. Distributed relaxation processes in sensory adaptation. Science. 1974;183:161–172. doi: 10.1126/science.183.4121.161.
19. Drew PJ, Abbott LF. Models and properties of power-law adaptation in neural systems. J Neurophysiol. 2006;96:823–833. doi: 10.1152/jn.00134.2006.
20. Lundstrom BN, Higgs MH, Spain WJ, Fairhall AL. Fractional differentiation by neocortical pyramidal neurons. Nat Neurosci. 2008;11:1335–1342. doi: 10.1038/nn.2212.
21. Millhauser GL, Salpeter EE, Oswald RE. Diffusion models of ion-channel gating and the origin of power-law distributions. Proc Natl Acad Sci USA. 1988;85:1503–1507. doi: 10.1073/pnas.85.5.1503.
22. Gilboa G, Chen R, Brenner N. History-dependent multiple-time-scale dynamics in a single-neuron model. J Neurosci. 2005;25:6479–6489. doi: 10.1523/JNEUROSCI.0763-05.2005.
23. Marom S. Adaptive transition rates in excitable membranes. Front Comput Neurosci. 2009. doi: 10.3389/neuro.10.002.2009.
24. Goychuk I, Hänggi P. Fractional diffusion modeling of ion channel gating. Phys Rev E. 2004;70:051915. doi: 10.1103/PhysRevE.70.051915.
25. Bassingthwaighte JB, Liebovitch LS, West BJ. Fractal Physiology. Oxford, UK: Oxford Univ Press; 1994.
26. Snippe HP, van Hateren H. Dynamics of nonlinear feedback control. Neural Comput. 2007;19:1179–1214. doi: 10.1162/neco.2007.19.5.1179.
27. Shankaran H, Resat H, Wiley HS. Cell surface receptors for signal transduction and ligand transport: A design principles study. PLoS Comput Biol. 2007;3:986–999. doi: 10.1371/journal.pcbi.0030101.
28. van Hateren H. A cellular and molecular model of response kinetics and adaptation in primate cones and horizontal cells. J Vision. 2005;5:331–347. doi: 10.1167/5.4.5.
29. Detwiler PB, Ramanathan S, Sengupta A, Shraiman B. Engineering aspects of enzymatic signal transduction: Photoreceptors in the retina. Biophys J. 2000;79:2801–2817. doi: 10.1016/S0006-3495(00)76519-2.
30. Kholodenko BN. Cell signalling dynamics in time and space. Nat Rev Mol Cell Biol. 2006;7(3):165–176. doi: 10.1038/nrm1838.
31. Behar M, Hao N, Dohlman HG, Elston TC. Mathematical and computational analysis of adaptation via feedback inhibition in signal transduction pathways. Biophys J. 2007;93:806–821. doi: 10.1529/biophysj.107.107516.
32. Sontag ED. Adaptation and regulation with signal detection implies internal model. Syst Control Lett. 2003;50:119–126.