Frontiers in Computational Neuroscience
2026 Apr 20;20:1762692. doi: 10.3389/fncom.2026.1762692

Deterministic, stochastic, and mean-field PDE models in neuroscience

Coşkun Çetin 1,*, Jose Roberto Castilho Piqueira 2, Burhaneddin İzgi 3, Ayse Peker-Dobie 3,*, Semra Ahmetolan 3, Murat Özkaya 4
PMCID: PMC13136274  PMID: 42088462

Abstract

Large neuronal networks demonstrate complex dynamics across multiple scales, ranging from single-neuron excitability and spike-train variability to mesoscopic rhythms and whole-brain activity. Different types of differential equation models have been developed to explain these phenomena, connecting deterministic, stochastic, and mean-field descriptions. At the deterministic level, ordinary differential equation (ODE) models, including conductance-based neuron models, neural-mass systems, and whole-brain networks, summarize neural behavior through a reduced set of macroscopic variables. At the population level, mean-field partial differential equation (PDE) models such as Fokker-Planck, age-structured, kinetic, and neural field equations describe the evolution of probability or population densities over membrane-potentials, synaptic states, and other kinetic variables. These PDEs link single-neuron mechanisms to population-level activity and allow one to analyze bifurcations, oscillations and other collective patterns. Stochastic differential equation (SDE) models and their extensions that include jump-diffusion processes and stochastic PDEs (SPDEs) are widely used to describe random membrane fluctuations, irregular spike trains, synaptic plasticity and large-scale variability in neural activity. These stochastic models are also applied to neural data analysis, for example to quantify noise in electro-physiological recordings and to infer latent neural dynamics. Because variability and noise are central in neural systems, we devote more space to stochastic models but always relate them back to the surrounding ODE and PDE frameworks. This hierarchy of ODE, PDE, and SDE-SPDE models illustrates the versatility of differential-equation-based approaches in neuroscience, offering unified tools for multiscale modeling, neural signal processing, cognitive modeling, and the analysis of noisy neural systems.
We also discuss some known numerical and computational approaches, especially for stochastic models and conclude by outlining open challenges, such as multiscale inference, control-oriented formulations and the integration of differential-equation models with modern machine-learning methods.

Keywords: differential equation models, stochastic differential equations, computational neuroscience, stochastic neural dynamics, mean-field partial differential equations, Fokker-Planck equations, numerical methods

1. Introduction

Large neuronal networks show collective behavior that spans asynchronous irregular spiking, coherent oscillations and spatially structured activity patterns. Theoretical frameworks developed to explain these phenomena are mainly based on differential equations and can, on different scales, be grouped into three broad classes: deterministic ordinary differential equation (ODE) models, stochastic process-based descriptions, and mean-field partial differential equation (PDE) models arising in the large-network limit (Breakspear, 2017; Brunel, 2000; Carrillo and Roux, 2025; Gerstner et al., 2014). Our aim is to show how these three levels fit together into a multiscale mathematical picture that links single-neuron dynamics to whole-brain activity.

At the level of single-neurons and neural populations, ODE models form the classical backbone of computational neuroscience (Dayan and Abbott, 2001; Gerstner et al., 2014). Conductance-based models of Hodgkin-Huxley type describe the evolution of the membrane-potential together with gating variables that characterize the kinetics of the ion-channel, while reduced systems such as the FitzHugh-Nagumo and Morris-Lecar models capture key features of excitability and firing patterns in low-dimensional settings, making them suitable for phase-plane and bifurcation analysis (FitzHugh, 1961; Hodgkin and Huxley, 1990; Morris and Lecar, 1981; Nagumo et al., 1962). At the population scale, neural-mass models of Wilson-Cowan type, and their extensions to whole-brain settings, describe the interactions among excitatory and inhibitory populations, multistability, and large-scale oscillations in terms of a small number of macroscopic variables, such as firing rate or mean membrane-potential (Breakspear, 2017; Jansen and Rit, 1995; Wilson and Cowan, 1972). For broader reviews of mean-field and neural-mass approaches in computational neuroscience, see (Carrillo and Roux 2025); (Deco et al. 2008). For a complementary textbook treatment from an applied-mathematics perspective, including delays and stochasticity, see (Coombes and Wedgwood 2023). However, this deterministic skeleton is frequently extended to explicitly account for the stochastic nature of spike generation, synaptic noise, and network heterogeneity (Laing and Lord, 2009; Schwalger et al., 2017; Wallace et al., 2011).

This motivates a second layer of modeling in which ODE-based dynamics are generalized to stochastic processes. Noisy integrate-and-fire equations, stochastic conductance-based models, random spike-time models, Hawkes-type point-processes, and Markovian descriptions of synaptic and neuronal states allow one to incorporate both intrinsic fluctuations (such as conductance fluctuation of ion-channels) and extrinsic randomness (e.g., from the synaptic activity of other cells) as a natural generalization of deterministic models (Brunel, 2000; Carrillo and Roux, 2025; Daley and Vere-Jones, 2003; Gerstner and Kistler, 2002; Hawkes, 1971; Wallace et al., 2011). Stochastic differential equation (SDE) models as well as their extensions that include delay equations and jump-diffusion processes provide a flexible framework to model single-neuron and network dynamics in continuous time, bridging deterministic drift functions and random perturbations. They have been used to describe spike trains and membrane-potential fluctuations, to model synaptic plasticity and learning, and to capture large-scale neural variability in data such as electroencephalography (EEG) and related electrophysiological recordings (Ghorbanian et al., 2013; Tajmirriahi and Amini, 2021). More recently, the combination of SDE-based models with machine-learning tools and experimental data has contributed to our understanding of neural oscillations, decision making, and learning (ElGazzar and van Gerven, 2024; Holmes et al., 2005; René et al., 2020). At the same time, these approaches raise challenges in parameter estimation, model selection, and interpretation of fitted SDEs in biological terms (René et al., 2020). Such stochastic frameworks provide a microscopic description at the level of single-neurons or small networks and at the same time offer a natural starting point for the derivation of mean-field PDE models that describe the behavior of very large networks.

The third layer consists of mean-field PDE models, which do not track individual neurons but rather the evolution of population densities over appropriate state variables (Cáceres et al., 2011; Laing and Lord, 2009). Parabolic Fokker-Planck equations, such as network noisy leaky integrate-and-fire (NNLIF) models (sometimes called nonlinear noisy leaky integrate-and-fire models), describe how the distribution of membrane-potentials evolves under synaptic input and noise, and have been used to study phenomena such as blow-up, synchronization, and the stability of stationary states in recurrent networks (Cáceres et al., 2011; Cáceres and Perthame, 2014). Hyperbolic age- or time-elapsed-structured equations organize the population according to the time since the last spike and provide a natural framework to analyze the effects of refractoriness, adaptation, and delay (Kang et al., 2015; Pakdaman et al., 2013, 2014; Schwalger and Chizhov, 2019; Schwalger and Lindner, 2015). These age or time-elapsed structured equations are also widely known in the computational neuroscience literature as refractory-density equations (Aviel and Gerstner, 2006; Chizhov and Graham, 2007, 2008; Gerstner, 2000, 2001; Schwalger and Chizhov, 2019). Kinetic Fokker-Planck systems of voltage-conductance or FitzHugh-Nagumo type retain additional variables such as synaptic conductances or recovery variables, yielding mesoscopic models that preserve important biophysical detail (Mischler et al., 2016; Perthame and Salort, 2013). Stochastic partial differential equations (SPDEs), in particular neural field PDEs and spatially extended formulations, play a central role in the study of cortical waves, pattern formation, spatial aspects of brain dynamics as well as stochastic versions of time-elapsed PDEs (Carrillo et al., 2023, 2024; Schmutz et al., 2023; Schwalger et al., 2017). One can also check chapters 11–12 of (Laing and Lord 2009) for their numerical solutions.

The aim of this review is to present these three modeling layers not as separate topics but as parts of a coherent chain that runs from deterministic ODE models of single neurons, through stochastic descriptions of spike generation and synaptic variability, to mean-field PDE models for large interacting populations. Because the stochastic layer provides the key bridge between microscopic neural variability and mean-field PDE descriptions, the stochastic material is developed in greater detail than the deterministic ODE and PDE sections. The remainder of the paper is organized as follows. Section 2 provides a concise overview of deterministic ODE models, summarizing the basic structure of single-neuron, neural-mass, and whole-brain dynamical systems. Section 3 introduces a brief theoretical background and some numerical solution methods for SDEs, then surveys stochastic models, including SDE-based neuron and network descriptions, point-process and renewal models, and Markovian synaptic state models. Section 4 presents the main principles used to derive mean-field PDEs from microscopic stochastic network models. We also discuss Fokker-Planck, age-structured, kinetic, and neural field equations, combining modeling aspects, mathematical theory, and representative numerical results in this section. Finally, Section 5 highlights open problems and future research directions related to multiscale modeling, data assimilation, control, and the interplay between these mechanistic models and modern machine-learning approaches.

2. Deterministic ODE models in neuroscience: a brief overview

This section reviews the main classes of deterministic ODE models that have shaped theoretical and computational neuroscience. The goal is not to provide an exhaustive historical survey, but to illustrate the types of dynamics that arise at different scales and to highlight those structural features that will reappear in stochastic and PDE-based formulations.

2.1. Theoretical background

Deterministic neuronal models are naturally formulated as finite-dimensional dynamical systems. Given a state vector x(t) ∈ ℝn (for example, membrane-potential together with gating or recovery variables), a general ODE is given by

ẋ(t) = F(x(t), t),

where F:ℝn×ℝ → ℝn is a vector field. In many neuroscience applications, F is autonomous F(x, t) = F(x), so the dynamics are fully determined by the current state. Under standard regularity assumptions on F (e.g., local Lipschitz continuity), the initial value problem x(0) = x0 defines a unique solution on some time interval and thus a flow Φt(x0) on state space (Ermentrout and Terman, 2010; Izhikevich, 2007).

An equilibrium (or fixed point) is a state x* such that F(x*) = 0. Linearizing the dynamics around x* yields

ẏ(t) = J(x*) y(t),  J(x*) = DF(x*),

where J(x*) is the Jacobian matrix of F at x*. The eigenvalues of J(x*) determine the local behavior of the nonlinear system: if all eigenvalues have strictly negative real parts, x* is (asymptotically) stable; if at least one eigenvalue has a positive real part, it is unstable. In low dimensions, phase-plane analysis provides a geometric picture of trajectories, allows one to classify excitable and oscillatory regimes, and facilitates the construction of bifurcation diagrams as the parameters vary (Ermentrout and Terman, 2010; Izhikevich, 2007). These tools will be used implicitly in our discussion of conductance-based and reduced single-neuron models.
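The linear stability test described above is straightforward to carry out numerically. The following Python sketch (the planar vector field and its coefficients are illustrative assumptions, not a specific neuronal model) approximates the Jacobian J(x*) by central finite differences and classifies the equilibrium from the real parts of its eigenvalues:

```python
import numpy as np

# A hypothetical planar vector field with a known equilibrium at the origin
# (illustrative coefficients; not a specific neural model).
def F(x):
    return np.array([-x[0] + x[1], -0.5 * x[0] - 2.0 * x[1]])

def numerical_jacobian(F, x_star, eps=1e-6):
    """Central finite-difference approximation of DF(x*)."""
    n = x_star.size
    J = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = eps
        J[:, j] = (F(x_star + e) - F(x_star - e)) / (2.0 * eps)
    return J

x_star = np.array([0.0, 0.0])          # equilibrium: F(x*) = 0
J = numerical_jacobian(F, x_star)
eigvals = np.linalg.eigvals(J)
stable = np.all(eigvals.real < 0)      # asymptotically stable iff all Re(λ) < 0
```

For conductance-based models the same recipe applies unchanged; only the dimension of the state vector and the vector field F change.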

Beyond fixed points, neuronal ODEs can exhibit limit cycles (periodic orbits), mixed-mode oscillations, bursting, and more complex attractors. Such behaviors often arise through local or global bifurcations (Hopf, saddle-node, homoclinic, etc.), in which qualitative and quantitative changes in long-time dynamics occur when parameters such as applied current or synaptic strength cross critical values. At population and whole-brain scales, these bifurcations underlie phenomena such as multistability between resting and active states, or transitions from asynchronous activity to rhythmic oscillations.

In many neural models, finite-transmission and processing-speed introduce explicit delays. Delay differential equations (DDEs) extend general ODEs by allowing the vector field to depend on past states,

ẋ(t) = F(x(t), x(t − τ1), …, x(t − τm)),

representing, for example, synaptic or axonal conduction delays. Such systems arise naturally in neural-mass and whole-brain models with long-range connectivity and can support rich dynamics, including delay-induced oscillations and complex transient behavior. In the present review, however, we do not pursue DDEs in detail and focus instead on ODE models without explicit delays, their stochastic extensions, and the corresponding mean-field PDE limits.

2.2. Conductance-based single-neuron models

At the level of a single-neuron, deterministic models describe the membrane-potential together with auxiliary variables representing ion-channel kinetics. The classical example is the Hodgkin-Huxley formalism, in which the membrane-potential V(t) satisfies a current-balance equation of the form

C V̇(t) = −∑k Ik(V(t), x(t)) + Isyn(t) + Iext(t), (1)

where C is the membrane capacitance, Ik denotes ionic currents (e.g., sodium, potassium, leak), Isyn is the total synaptic current and Iext represents externally applied input (Dayan and Abbott, 2001; Gerstner et al., 2014; Hodgkin and Huxley, 1990). Each ionic current is typically written as

Ik(V, x) = ḡk mk^{pk} hk^{qk} (V − Ek),

where ḡk is a maximal conductance, Ek is the reversal potential, and mk, hk are gating variables that represent the fraction of open activation and inactivation gates for the channel type k. These gating variables satisfy first-order kinetics of the form

ṁk = αk(V)(1 − mk) − βk(V) mk = (mk,∞(V) − mk)/τk(V),

with analogous expressions for hk. The vector x(t) collects all such gating variables. Together, Equation 1 and the gating equations define a nonlinear ODE system in which action potentials arise as emergent transients or oscillations.
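To make the structure of Equation 1 concrete, the following Python sketch integrates the classical Hodgkin-Huxley system with a simple forward-Euler scheme; the rate functions and parameter values follow common textbook conventions and are assumptions made here for illustration, not taken from this review:

```python
import numpy as np

# Classical Hodgkin-Huxley parameters (common textbook convention, mV/ms units).
C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3          # µF/cm², mS/cm²
ENa, EK, EL = 50.0, -77.0, -54.387              # reversal potentials, mV

def a_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def b_m(V): return 4.0 * np.exp(-(V + 65.0) / 18.0)
def a_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def b_h(V): return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def a_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def b_n(V): return 0.125 * np.exp(-(V + 65.0) / 80.0)

dt, T, I_ext = 0.01, 50.0, 10.0                 # ms, ms, µA/cm²
V = -65.0
m = a_m(V) / (a_m(V) + b_m(V))                  # gating variables at rest
h = a_h(V) / (a_h(V) + b_h(V))
n = a_n(V) / (a_n(V) + b_n(V))

spike_count, above = 0, False
for _ in range(int(T / dt)):
    INa = gNa * m**3 * h * (V - ENa)            # ionic currents I_k(V, x)
    IK = gK * n**4 * (V - EK)
    IL = gL * (V - EL)
    V += dt * (-(INa + IK + IL) + I_ext) / C    # current-balance Equation 1
    m += dt * (a_m(V) * (1 - m) - b_m(V) * m)   # first-order gating kinetics
    h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
    n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
    if V > 0.0 and not above:                   # upward crossing of 0 mV = spike
        spike_count += 1
        above = True
    elif V < -20.0:
        above = False
```

With the constant drive used here the model fires repetitively; counting upward crossings of 0 mV gives a crude spike count that can be compared across input levels.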

Conductance-based models can be extended to include additional ion-channels, calcium and other intracellular variables, dendritic compartments, and detailed synaptic mechanisms. From a dynamical-systems viewpoint, they are relatively high-dimensional nonlinear ODE systems that can exhibit a variety of dynamical behaviors under parameter changes (Ermentrout and Terman, 2010; Izhikevich, 2007). These features will reappear, in aggregate form, at the population level.

Beyond reproducing action potentials, conductance-based models have been used to systematically classify different types of neuronal excitability and firing patterns. By varying maximal conductances, reversal potentials, or time constants, one can induce transitions between quiescence, tonic spiking, bursting and mixed-mode oscillations organized by local and global bifurcations in the underlying ODE system (Ermentrout and Terman, 2010; Izhikevich, 2007). This has led to robust classifications of neurons into, for example, type I and type II excitability according to the nature of the spike-onset bifurcation. Type I neurons can begin firing at arbitrarily low frequencies as the input current just crosses threshold (as in a saddle-node on invariant circle bifurcation), whereas type II neurons start firing at a finite, nonzero minimum frequency, typically through a Hopf bifurcation; these differences have direct implications for how they synchronize in networks. Conductance-based models also serve as testbeds for parameter-estimation and model-reduction techniques, linking detailed ionic mechanisms to simplified descriptions that can be embedded into population and whole-brain models.

2.3. Reduced single-neuron models

To obtain simpler but dynamically faithful descriptions, reduced single-neuron models often approximate fast gating variables by their steady-state values and combine slower ones into a single recovery variable, leading to planar systems of the form

V̇ = F(V) − w + Iext,  ẇ = G(V, w),

where V denotes the membrane-potential, w is a recovery variable, and F, G are nonlinear functions. The FitzHugh-Nagumo model (FitzHugh, 1961; Nagumo et al., 1962) and Morris-Lecar model (Morris and Lecar, 1981) are prototypical examples. Their low-dimensionality makes them ideal for phase-plane analysis: one can characterize excitability types, identify stable and unstable manifolds, and track qualitative changes in firing patterns via bifurcation diagrams (Ermentrout and Terman, 2010; Izhikevich, 2007). Classical dynamical systems analyses of neuronal excitability and bursting, especially phase-plane and fast-slow geometric methods developed by Rinzel and co-workers, provide important mathematical background for such reduced models (Rinzel and Huguet, 2013; Rinzel and Lee, 1987).
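The oscillatory regime of such planar models can be demonstrated in a few lines of code. The following Python sketch integrates the FitzHugh-Nagumo equations with forward Euler; the parameter values are a standard illustrative choice for the limit-cycle regime, not taken from the references above:

```python
import numpy as np

# FitzHugh-Nagumo in its oscillatory regime (illustrative standard parameters).
a, b, eps, I_ext = 0.7, 0.8, 0.08, 0.5
dt, T = 0.01, 200.0
steps = int(T / dt)

V, w = -1.0, -0.5
V_trace = np.empty(steps)
for i in range(steps):
    dV = V - V**3 / 3.0 - w + I_ext     # fast voltage-like variable
    dw = eps * (V + a - b * w)          # slow recovery variable
    V, w = V + dt * dV, w + dt * dw
    V_trace[i] = V

# Discard the transient; a limit cycle shows up as a large sustained V excursion.
late = V_trace[steps // 2:]
amplitude = late.max() - late.min()
```

The large sustained voltage excursion after the transient is the numerical signature of the limit cycle that phase-plane analysis predicts.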

Another important family is provided by integrate-and-fire models. In their simplest form, the subthreshold dynamics of V(t) follow a linear ODE such as

V̇(t) = −(1/τm)(V(t) − Vrest) + (1/C) Isyn(t), (2)

where τm is the membrane time constant and Vrest is the resting potential. When V(t) reaches a fixed threshold Vth, a spike is said to occur and the potential is instantaneously reset to a value Vreset (often followed by a refractory period). Mathematically, these are hybrid dynamical systems that combine continuous ODE flows with discrete events (Gerstner and Kistler, 2002). In later sections, noisy and networked versions of Equation 2 will serve as starting points for the derivation of Fokker-Planck and age-structured PDE models describing the evolution of membrane-potential or time-since-last-spike densities.
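The hybrid flow-plus-reset structure of Equation 2 is easy to simulate, and for constant input the inter-spike interval is known in closed form, which gives a useful consistency check on the integration scheme. The following Python sketch (parameter values are illustrative) compares the simulated interval with the analytic one:

```python
import numpy as np

# Leaky integrate-and-fire neuron (Equation 2) with a threshold-reset rule,
# driven by a constant current; parameter values are illustrative.
tau_m, C = 10.0, 1.0                    # membrane time constant (ms), capacitance
V_rest, V_th, V_reset = -65.0, -50.0, -65.0
I_syn = 2.0                             # constant input current
dt, T = 0.01, 100.0

V, spike_times = V_rest, []
for i in range(int(T / dt)):
    V += dt * (-(V - V_rest) / tau_m + I_syn / C)   # subthreshold ODE flow
    if V >= V_th:                                    # discrete spike event
        spike_times.append(i * dt)
        V = V_reset                                  # instantaneous reset

# Analytic inter-spike interval for constant drive:
#   T_isi = tau_m * ln((V_inf - V_reset) / (V_inf - V_th)),
# with V_inf = V_rest + tau_m * I_syn / C the would-be steady state.
V_inf = V_rest + tau_m * I_syn / C
T_isi = tau_m * np.log((V_inf - V_reset) / (V_inf - V_th))
sim_isi = np.diff(spike_times).mean()
```

Agreement between the simulated and analytic intervals confirms that the time step resolves the threshold crossing adequately.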

Reduced neuron models are frequently used as building blocks in network studies where analytic tractability is paramount. Planar FitzHugh-Nagumo or Morris-Lecar units coupled on lattices, random graphs, or small-world networks have been employed to investigate how local excitability interacts with connectivity to generate waves, synchrony, and cluster states. In the case of integrate-and-fire populations, their simplicity allows for large-scale simulations exploring how recurrent structure, heterogeneity, and synaptic delays shape emergent phenomena such as asynchronous irregular activity, oscillations, and related collective phenomena (Augustin et al., 2017; Brunel, 2000; Gerstner et al., 2014). Phase reductions of limit-cycle oscillator models provide yet another viewpoint, in which each neuron is reduced to a phase variable and coupling is encoded via phase response curves; this connects neuronal modeling to the broader theory of coupled oscillators and Kuramoto-type systems (Breakspear et al., 2010).

2.4. Neural-mass and rate-based ODE models

At a mesoscopic level, it is often convenient to aggregate the activity of many neurons into homogeneous populations and to describe their dynamics in terms of macroscopic variables such as average firing rate, average membrane-potential, or synaptic activity (Dayan and Abbott, 2001; Deco et al., 2008; Gerstner et al., 2014; Montbrió et al., 2015). This leads to neural-mass and rate-based models.

A paradigmatic example is the Wilson-Cowan system for interacting excitatory and inhibitory populations (Wilson and Cowan, 1972). Let E(t) and I(t) denote the activities (for example, firing rates) of the excitatory and inhibitory populations. Their evolution is given by

τE Ė(t) = −E(t) + ϕE(wEE E(t) − wEI I(t) + IE(t)),  τI İ(t) = −I(t) + ϕI(wIE E(t) − wII I(t) + II(t)),

where τE, τI are time constants, wab are coupling strengths between populations, IE, II are external inputs, and ϕE, ϕI are typically sigmoidal gain functions. Variants of this model and related rate-based systems have been used extensively to study the emergence of oscillations, multistability, and pattern formation in cortical circuits (Ermentrout and Terman, 2010; Izhikevich, 2007; Wallace et al., 2011; Wilson and Cowan, 1972).
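A minimal numerical experiment with the Wilson-Cowan system is sketched below; the weak-coupling parameter values are our own illustrative choice, selected so that the sigmoidal gains make the dynamics contractive and the populations settle to a unique fixed point:

```python
import numpy as np

# Forward-Euler simulation of a Wilson-Cowan excitatory-inhibitory pair
# (illustrative weak-coupling parameters chosen for a stable fixed point).
tau_E, tau_I = 1.0, 1.0
w_EE, w_EI, w_IE, w_II = 2.0, 1.5, 2.0, 1.0
I_E, I_I = 1.0, 0.5

def phi(x):
    return 1.0 / (1.0 + np.exp(-x))    # sigmoidal gain function

dt, T = 0.01, 100.0
E, I = 0.1, 0.1
for _ in range(int(T / dt)):
    dE = (-E + phi(w_EE * E - w_EI * I + I_E)) / tau_E
    dI = (-I + phi(w_IE * E - w_II * I + I_I)) / tau_I
    E, I = E + dt * dE, I + dt * dI

# At a stable equilibrium both time derivatives vanish.
residual = max(abs(-E + phi(w_EE * E - w_EI * I + I_E)),
               abs(-I + phi(w_IE * E - w_II * I + I_I)))
```

Increasing the recurrent excitation w_EE (or the external drives) moves the system toward the oscillatory and multistable regimes discussed above.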

More elaborate neural-mass formulations, such as the Jansen-Rit model and its descendants, introduce additional state variables to represent postsynaptic potentials and synaptic currents (Jansen and Rit, 1995). Each population is then described by a small system of coupled ODEs with biophysically motivated parameters.

A notable recent development in mesoscopic population modeling is the emergence of exact mean-field, or next-generation, neural-mass descriptions for heterogeneous spiking networks (Coombes, 2023; Coombes and Wedgwood, 2023). In the quadratic integrate-and-fire/θ-neuron setting, these reductions connect microscopic spiking dynamics to closed macroscopic equations for quantities such as the firing rate, mean membrane potential, and, through Kuramoto/Ott-Antonsen-type descriptions, population synchrony (Devalle et al., 2017; Luke et al., 2013; Montbrió et al., 2015; Ott and Antonsen, 2008). This is conceptually important because firing rate is no longer treated as a purely phenomenological variable, but becomes dynamically linked to within-population synchrony. Extensions of this framework have also been used to describe fluctuation-driven population dynamics and synaptic shot noise in sparse balanced networks (Goldobin et al., 2025, 2021).

Neural-mass and rate-based models are often fitted directly to mesoscopic recordings, for example, EEG, MEG or local field potentials, to infer effective connectivity and to interpret pathological rhythms, thereby linking microscopic mechanisms to experimentally observed dynamics (Breakspear, 2017; Jansen and Rit, 1995). They have also been used to model cognitive operations such as working memory, attention, decision making and winner-take-all dynamics by exploiting multistability and nonlinear gain functions; more biophysically grounded recurrent circuit formulations have been especially influential in studies of working memory and related cognitive functions (Brunel and Wang, 2001; Compte et al., 2000). In this sense, neural-mass models occupy a central position between microscopic spiking descriptions and macroscopic imaging data: they incorporate biophysically interpretable parameters while remaining low-dimensional enough to permit systematic bifurcation analysis and parameter exploration (Augustin et al., 2017; Breakspear, 2017; Deco et al., 2008; Greven et al., 2026).

2.5. Whole-brain ODE networks

On the macroscopic, whole-brain scale, neural-mass or mean-field ODE models are often assigned to anatomically defined brain regions and coupled through structural connectivity matrices derived from diffusion-weighted MRI or tract-tracing data (Breakspear, 2017; Ritter et al., 2013). Denoting by Xi(t) the state vector associated with the region i (e.g., local firing rates, mean membrane-potentials, and synaptic variables), one obtains a high-dimensional ODE network of the form

Ẋi(t) = F(Xi(t)) + ∑j Kij G(Xj(t − dij)) + Ii(t), (3)

where F encapsulates the local neural-mass dynamics, Kij is the structural connectivity from region j to region i, dij is an effective transmission delay and Ii(t) denotes an external or noise-like input. Depending on the choice of local model and coupling, Equation 3 can reproduce key resting-state features, transitions between metastable patterns, and pathological regimes such as seizure-like activity (Breakspear, 2017; Deco et al., 2011; Naze et al., 2015).
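A stripped-down version of Equation 3 can be simulated in a few lines. The sketch below omits transmission delays (dij = 0) and uses deliberately simple ingredients, linear local decay for F, a random connectivity matrix K, and a tanh coupling function G, all of which are illustrative assumptions rather than a calibrated whole-brain model:

```python
import numpy as np

# Minimal sketch of Equation 3 with delays omitted (d_ij = 0).
rng = np.random.default_rng(0)
N = 10                                  # number of brain regions
K = 0.5 * rng.random((N, N))            # illustrative "structural connectivity"
np.fill_diagonal(K, 0.0)                # no self-coupling

def G(x):
    return np.tanh(x)                   # bounded coupling function

dt, T = 0.01, 50.0
X = rng.standard_normal(N)              # one scalar state per region
for _ in range(int(T / dt)):
    # local decay F(X) = -X, plus coupling and a small constant input
    X = X + dt * (-X + K @ G(X) + 0.1)

bounded = np.all(np.abs(X) < 10.0)      # leak + bounded coupling => bounded states
```

Replacing the local dynamics with a neural-mass model and K with an empirical structural connectivity matrix yields the whole-brain models discussed in the text.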

From a mathematical perspective, Equation 3 defines a large system of coupled nonlinear ODEs, sometimes with delays, on a network. Questions of interest include the existence and stability of fixed points and limit cycles, the possibility of multistable regimes, the conditions for synchronization and partial synchrony, and the influence of network topology and delays on emergent dynamics (Breakspear, 2017).

Whole-brain models are typically constrained by structural connectivity matrices obtained from diffusion MRI or tract-tracing data, and their parameters are tuned to reproduce empirical measures such as functional connectivity, power spectra, or resting-state networks (Breakspear, 2017; Deco et al., 2011). Bifurcation analysis and numerical continuation have been used to identify parameter regimes that support realistic levels of metastability and dynamically switching network configurations, and to relate changes in coupling strength, conduction delays or regional excitability to shifts between healthy and pathological regimes, including seizure-like dynamics. These models thus provide a bridge between structural connectivity, dynamical systems theory, and clinical neuro-imaging, and they are natural candidates for control and data-assimilation approaches discussed later in the review.

2.6. Numerical exploration and bifurcation analysis

The deterministic ODE models reviewed so far are typically analyzed in combination with numerical simulation and bifurcation tools. For single-neuron models, direct time-stepping schemes such as fixed-step or adaptive Runge-Kutta methods allow one to visualize trajectories, firing patterns, and responses to time-dependent inputs. In reduced planar systems of FitzHugh-Nagumo or Morris-Lecar type, phase-plane plots with nullclines, vector fields, and trajectories give a geometric picture of excitability and spike generation. They also provide a natural way to identify homoclinic orbits, limit cycles, and separatrices between different dynamical regimes (Ermentrout and Terman, 2010; Izhikevich, 2007).

In higher-dimensional conductance-based models, numerical continuation software (such as XPPAUT, AUTO, or MatCont) is routinely used to track equilibria and limit cycles as parameters vary, to locate Hopf, saddle-node, or homoclinic bifurcations, and to construct bifurcation diagrams in one- or two-parameter planes. These methods have been instrumental for classifying excitability and bursting mechanisms in Hodgkin-Huxley-type systems.

At the population and whole-brain scale, similar tools are applied to neural-mass and large-scale network models. The Wilson-Cowan and Jansen-Rit systems, and their whole-brain generalizations, are typically explored by combining direct numerical integration of coupled ODEs with continuation of steady states and periodic orbits. This allows one to chart regions of multistability, to identify bifurcation-induced transitions between asynchronous and rhythmic regimes, and to relate parameter changes (for example, in coupling strength or external input) to experimentally observed shifts between resting-state, oscillatory, and seizure-like activity (Breakspear, 2017; Jansen and Rit, 1995). In this sense, numerical bifurcation analysis forms a bridge between the abstract dynamical-systems framework introduced above and the phenomenology of realistic biophysical and whole-brain models.

3. Stochastic models of neural activity

Neural systems operate in an environment where information processing is affected by the electro-physiological properties of neurons and their bifurcation dynamics (Ermentrout and Terman, 2010; Izhikevich, 2007; Wallace et al., 2011) as well as the variability and noise of intrinsic cellular mechanisms and external input (Dumont et al., 2024; Fasoli, 2013; Laing and Lord, 2009; Pisarchik and Hramov, 2023; Saarinen et al., 2008). Although deterministic models provide valuable insight into the biophysical and network-level principles of neuronal dynamics, they often fail to capture the randomness observed in real neural data. Stochastic models offer a convenient framework for integrating these sources of randomness to model fluctuations in membrane-potentials, probabilistic spike generation, synaptic transmission variability, noisy limit cycles and large-scale network irregularities. As in many other applications of science and engineering, the simplest procedure to integrate randomness into deterministic models has been adding a Gaussian (or similar) white noise process to a deterministic dynamical system with a suitable intensity or standard deviation. For more complex models with autocorrelation functions, “colored noise” or other variations are incorporated into such models rather heuristically (Chizhov and Graham, 2007, 2008; Laing and Lord, 2009; Mao, 2011; Oksendal, 2013; Schwalger and Lindner, 2015). Both the microscopic origins and mesoscopic/macroscopic manifestations of neural variability can be described more rigorously via probabilistic models based on SDEs as well as their jump-diffusion, delay and SPDE versions (Cáceres et al., 2011; Carrillo et al., 2023, 2024; Dumont et al., 2024; Pietras et al., 2020; Schmutz et al., 2023). In this section, first a brief theoretical background and a review of commonly used numerical solution methods for SDEs are given. 
Then, several stochastic models used in neuroscience are discussed, with emphasis on how they give meaning to noisy experimental recordings and on the role of randomness in neural computation, population dynamics, and parameter estimation.

3.1. Theoretical background

In this subsection, we introduce some definitions, notations, and known results for SDEs. We assume that the reader is familiar with probability distributions of random variables, conditional probability, covariance/variance, and probability spaces. A stochastic process {Xt : t ∈ T} is an indexed family of random variables on a given probability space, where each Xt represents the state of the system at time t.

Markov property: A stochastic process Xt is said to satisfy the Markov property (or to be a Markovian process) if the conditional distribution of the process at any future time depends only on the most recently observed state, and not on the history before that time. Formally, for any measurable subset A ⊆ S of the state space and any times t0 < t1 < ... < tn < t,

P(Xt ∈ A | Xt0 = x0, Xt1 = x1, ..., Xtn = xn) = P(Xt ∈ A | Xtn = xn),

where Xt0, Xt1, ..., Xtn are the past states of the process up to time tn, and Xt is the state of the process at a future time t > tn.

Wiener process: A Wiener process (also called Brownian motion) is a Markovian stochastic process {Wt = W(t), t ≥ 0} that satisfies the following conditions:

  • W(0) = 0 and W(t) − W(s) ~ N(0, t − s) for t > s, which means that the increments are Gaussian random variables with mean 0 and variance t − s. In particular, W(t) ~ N(0, t) for all t > 0

  • For any sequence of times 0 ≤ t1 ≤ t2 ≤ ... ≤ tn, the increments W(t2) − W(t1), W(t3) − W(t2), ..., W(tn) − W(tn−1) are independent random variables, which means that future increments are independent of the previous increments

  • The process {W(t)} has continuous paths with probability one.
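These defining properties can be verified empirically by Monte Carlo simulation. The following Python sketch builds discretized Wiener paths as cumulative sums of independent N(0, Δt) increments and checks that W(t) has mean approximately 0 and variance approximately t:

```python
import numpy as np

# Monte Carlo check of the Wiener-process properties listed above.
rng = np.random.default_rng(42)
n_paths, n_steps, dt = 20000, 100, 0.01
t_final = n_steps * dt                  # = 1.0

# Each path is a cumulative sum of independent N(0, dt) increments,
# so independence of increments holds by construction.
increments = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
W = np.cumsum(increments, axis=1)

mean_W = W[:, -1].mean()                # should be close to 0
var_W = W[:, -1].var()                  # should be close to t_final
```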

Stochastic differential equations: A stochastic differential equation (SDE) is a more general form of an ODE that involves a stochastic process to account for randomness in a dynamic variable Xt. Here, our focus is on Ito-type SDEs, which have the Markovian form

dXt=f(t,Xt)dt+g(t,Xt)dWt, (4)

where Xt is the random process at time t, f(t, Xt) is the drift term representing the deterministic part of the system, g(t, Xt) is the diffusion term denoting the stochastic part of the system, and dWt is the "differential" term for a Wiener process Wt. Equation 4 is usually written in integral form, which provides a more rigorous probabilistic representation. Moreover, its vector form can be used for higher-dimensional state variables and several sources of noise (based on a multidimensional Brownian motion process); in that case, the drift term is also a vector function and the diffusion term is a matrix function of suitable size. However, such scalar or vector SDEs may not capture sudden or discontinuous changes/impulses directly. In such cases, the resulting discontinuous behavior of the random process can be modeled by adding a jump term to Equation 4 (Platen and Bruti-Liberati, 2010). These generalized processes are called jump-diffusion processes. Some examples and applications are given in Subsection 3.3.

Remark: Note that a perturbed version of an ODE (e.g., the examples introduced in Section 2) using a white noise process with a suitable deterministic intensity σ(t) can be described as an Ito-type SDE with an additive "noise" or diffusion term of the form σ(t)dWt. Then, the corresponding stochastic integral, ∫₀ᵗ σ(s) dWs, is a Gaussian process with mean zero and variance ∫₀ᵗ σ²(s) ds. The differential rules for Ito-type SDEs and their differentiable functions are governed by Ito stochastic calculus, more explicitly by a change-of-variable formula called the Ito rule (Kloeden and Platen, 1992; Mao, 2011; Oksendal, 2013). Additionally, if jump-diffusion models are considered in the analyses, the jump-adapted approximation versions of these methods and their applications would take an important role (see İzgi, 2015, Chapter 4, Platen and Bruti-Liberati, 2010 and references therein).

3.2. Numerical solution methods for stochastic differential equations

Even though linear and some special nonlinear SDEs have explicit solutions, most nonlinear SDEs are solved using suitable numerical methods, including the Euler-Maruyama method and its variants, Milstein-type methods, and other generalizations such as Runge-Kutta versions. We present below some common methods that are used in (or have the potential to improve) neuroscience models.

Euler-Maruyama method: (Kloeden and Platen, 1992; Laing and Lord, 2009) The Euler-Maruyama (E-M) method approximates the solution to the SDE in discrete time steps. It is analogous to the Euler method for solving ODEs numerically, but this version also includes a stochastic component. If X(t) = Xt solves an SDE of the form

dXt=f(t,Xt)dt+g(t,Xt)dWt,

on an interval [0, T], we consider a partition of [0, T], 0 = t0<t1 < ... < tn = T, and let xi denote an approximation to X(ti) for each i = 0, 1, 2, ..., n. Then, starting with the initial condition X(0) = x0, which can be a constant or another random variable, the explicit (or forward) E-M method approximates this continuous-time process X(t) by its discrete-time E-M counterpart xi via iterations of the form

xi+1=xi+f(ti,xi)Δt+g(ti,xi)ΔWi,

where ΔWi is the random increment of the Wiener process with a Gaussian distribution. In particular, if a uniform partition is used with mesh size Δt, then ΔWi ~ √Δt N(0, 1), so its paths can be simulated via random numbers from the standard normal distribution. It is well known that the E-M procedure converges (strongly) with rate 0.5 as Δt converges to zero if the drift coefficient satisfies a uniform Lipschitz condition and both the drift and diffusion terms have linear growth in the state variable x (Kloeden and Platen, 1992). Despite being simple and practical, this method is not suitable for more general equations, e.g., when a coefficient is only locally Lipschitz or has superlinear growth, due to the positive probability of sample-path blow-ups and instability issues (İzgi and Çetin, 2018).
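A minimal implementation of the explicit E-M iteration above, applied to an Ornstein-Uhlenbeck process as a simple test case (parameter values illustrative):

```python
import numpy as np

def euler_maruyama(f, g, x0, T, n, rng):
    """Explicit Euler-Maruyama on a uniform grid: x_{i+1} = x_i + f*dt + g*dW."""
    dt = T / n
    x = np.empty(n + 1)
    x[0] = x0
    t = 0.0
    for i in range(n):
        dW = rng.normal(0.0, np.sqrt(dt))   # dW ~ sqrt(dt) * N(0, 1)
        x[i + 1] = x[i] + f(t, x[i]) * dt + g(t, x[i]) * dW
        t += dt
    return x

# Ornstein-Uhlenbeck example: dX = -theta*X dt + sigma dW (illustrative parameters).
rng = np.random.default_rng(1)
theta, sigma = 2.0, 0.5
path = euler_maruyama(lambda t, x: -theta * x, lambda t, x: sigma,
                      x0=1.0, T=5.0, n=5000, rng=rng)
print(path[-1])  # a sample near the stationary distribution N(0, sigma^2/(2*theta))
```
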

Milstein method: (İzgi and Çetin, 2017; Kloeden and Platen, 1992) The Milstein method is another numerical scheme for solving SDEs, providing an improvement over the Euler-Maruyama method in terms of accuracy (raising the strong convergence rate to 1 under additional smoothness assumptions on the diffusion coefficient). It is suitable for SDEs whose (differentiable) diffusion term is easy to differentiate with respect to the state variable. Given an SDE

dXt=f(t,Xt)dt+g(t,Xt)dWt,

the Milstein method improves upon the E-M method by adding an extra term involving the first partial derivative of the diffusion function g(t, xt). Then, a refined version of explicit E-M method that is called the explicit Milstein method has the following iterations at the discretized mesh points ti with approximate solutions xi to X(ti):

xi+1 = xi + f(ti, xi)Δt + g(ti, xi)ΔWi + (1/2) g(ti, xi) g′(ti, xi) ((ΔWi)² − Δt),

where g′(ti, xi) is the derivative of the diffusion term with respect to the state variable x. Even though the explicit Milstein method is an improvement over the explicit E-M method, it suffers from the same divergence and instability phenomena when applied to only locally Lipschitz coefficients with superlinear growth, as in stochastic FitzHugh-Nagumo models.
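The Milstein update above can be sketched as follows, using geometric Brownian motion as a test equation since its diffusion derivative is immediate (parameters illustrative):

```python
import numpy as np

def milstein(f, g, dg, x0, T, n, rng):
    """Milstein scheme: adds 0.5*g*g'*((dW)^2 - dt) to the Euler-Maruyama update."""
    dt = T / n
    x = np.empty(n + 1)
    x[0] = x0
    t = 0.0
    for i in range(n):
        dW = rng.normal(0.0, np.sqrt(dt))
        x[i + 1] = (x[i] + f(t, x[i]) * dt + g(t, x[i]) * dW
                    + 0.5 * g(t, x[i]) * dg(t, x[i]) * (dW * dW - dt))
        t += dt
    return x

# Geometric Brownian motion dX = mu*X dt + sigma*X dW, for which the diffusion
# derivative is simply g'(x) = sigma and an exact solution exists for checking
# strong convergence (illustrative parameters).
rng = np.random.default_rng(2)
mu, sigma = 0.1, 0.4
x = milstein(lambda t, x: mu * x, lambda t, x: sigma * x, lambda t, x: sigma,
             x0=1.0, T=1.0, n=1000, rng=rng)
print(x[-1])  # an approximate sample of X(1)
```
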

Alternate Euler-type methods: (İzgi and Çetin, 2018; Kloeden and Platen, 1992) In order to ensure convergence and stability of discretized numerical solutions, several alternate versions of E-M methods have been introduced. Some of them are modified to capture desirable properties of the actual solution (e.g., positivity, ergodicity etc.). The implicit E-M method for SDEs is a stochastic variant of the implicit Euler method that provides enhanced stability compared to the classical E-M and Milstein methods, especially for stiff problems with nonlinear drift terms over longer time periods [0, T]. It allows for larger time steps without numerical blow-up with the same strong convergence order of 0.5 (and weak convergence order of 1). For a given stochastic differential equation of the form in Equation 4, the (fully) implicit E-M method is given by the update rule below:

xi+1=xi+f(xi+1,ti+1)Δt+g(xi+1,ti+1)ΔWi+1.

A more practical version is the drift-implicit E-M method, sometimes called a partially implicit method, although the terms "partially implicit" and "semi-implicit" are also used for several other types of numerical schemes, including stochastic theta methods (which additively combine the explicit and drift-implicit versions) and schemes that multiplicatively combine forward and backward values (xi and xi+1) in approximating the drift term. For an SDE of the form in Equation 4, the iterations of the drift-implicit E-M method are given by

xi+1=xi+f(xi+1,ti+1)Δt+g(xi,ti)ΔWi+1

where the drift term f(xi+1, ti+1) is implicit, while the diffusion term g(xi, ti) is explicit.
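For a linear drift the implicit equation at each step can be solved in closed form, which makes the scheme easy to illustrate. A sketch for the hypothetical stiff test problem dX = −λX dt + σ dW (parameters illustrative):

```python
import numpy as np

def drift_implicit_em_linear(lmbda, sigma, x0, T, n, rng):
    """Drift-implicit Euler-Maruyama for dX = -lmbda*X dt + sigma dW.
    The implicit equation x_{i+1} = x_i - lmbda*x_{i+1}*dt + sigma*dW
    solves in closed form: x_{i+1} = (x_i + sigma*dW) / (1 + lmbda*dt)."""
    dt = T / n
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        dW = rng.normal(0.0, np.sqrt(dt))
        x[i + 1] = (x[i] + sigma * dW) / (1.0 + lmbda * dt)
    return x

# A stiff decay rate: the explicit scheme would need dt < 2/lmbda = 0.02 to stay
# stable, while the implicit update remains stable for any dt (illustrative values).
rng = np.random.default_rng(3)
x = drift_implicit_em_linear(lmbda=100.0, sigma=0.1, x0=5.0, T=1.0, n=10, rng=rng)
print(x)  # decays toward the noise floor despite dt = 0.1 >> 2/lmbda
```
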

The drift-implicit method provides superior stability and better computational efficiency for stiff SDEs compared with the explicit Euler and Milstein methods (which require small Δt steps and stronger assumptions on the drift term). Split-step methods also possess desirable convergence, stability, and long-term ergodicity properties when solving stiff SDEs. Other methods that are known to converge under suitable monotonicity and polynomial growth conditions include the truncated and tamed E-M methods. Each method has certain advantages depending on the type of nonlinearity, the monotonicity and growth conditions, the step-size requirement, ease of implementation, computational efficiency, and empirical convergence behavior. For example, fully implicit, drift-implicit, and split-step methods have very strong stability properties, but they may require solving a nonlinear equation (e.g., via Newton's method) at each iteration (İzgi and Çetin, 2018; Kloeden and Platen, 1992; Wang and Li, 2010). The truncated E-M algorithm restricts the growth of the coefficients to prevent blow-up of the solution processes, or to keep them positive when the solution is known to be positive. It provides improved stability and guarantees strong convergence despite introducing a slight algorithmic bias in the numerical solution (Mao, 2015).

Split-Step Euler method: (İzgi and Çetin, 2018; Wang and Li, 2010) The Split-Step Euler (SSE) method approximates the solution of an SDE by separating its components, providing superior stability for stiff SDEs compared to explicit schemes. The SSE method first handles the stiff, deterministic drift part implicitly and then treats the stochastic diffusion part explicitly in a separate stage. This approach provides stability for the stiff components of the SDE by allowing larger time steps for the solution process, with the same convergence order of 0.5.

For SDE given in Equation 4, the drifting split-step Euler method is

x̄i = xi + f(x̄i, ti)Δt,    xi+1 = x̄i + g(x̄i, ti)ΔWi.

Moreover, if the first step is considered for the diffusion term instead of the drift, then the diffused version of the SSE method can be obtained, similarly. One drawback of SSE methods is that a nonlinear equation may need to be solved in the first stage at each iteration. One way to overcome this issue is to consider their semi-implicit versions for suitable drift functions (İzgi and Çetin, 2017, 2018). For example, consider a scalar logistic-type SDE of the form

dX(t) = (AX(t) − δX^r(t)) dt + σ(X(t)) dW(t),    0 < t ≤ T,

with the initial condition X(0) = x0, where r = 2 or an odd positive integer r ≥ 3, δ ≥ 0, A ∈ ℝ, and σ(·) satisfies a linear growth condition.

Then, the semi-implicit split-step (SISS) method for this scalar SDE focuses on approximating the drift term a(y) = Ay − δy^r with ρ1(x, y) = Ay − δyx^(r−1), or with ρ2(x, y) = Ax − δyx^(r−1). Starting with the former choice, if we solve y = x + ρ1(x, y)Δ for y, then we obtain the expressions

fΔ(x) = x / (1 + Δ(δx^(r−1) − A)),    aΔ(x) = a(x) / (1 − Δ(A − δx^(r−1))),

where fΔ(x) = x + ΔaΔ(x). These functions are well-defined if δx^(r−1) − A ≥ 0 or if Δ is sufficiently small. It is easy to verify that aΔ(x) converges uniformly to a(x) on compact sets as Δ → 0 (see İzgi and Çetin, 2018, 2021).

A similar result is obtained if the latter approximation ρ2(x, y) is used. Then, several alternative versions of the SISS methods can be constructed (İzgi and Çetin, 2018). A simple version uses the following (now explicit) iterations: xi+1 = fΔ(xi) + σ(xi)ΔWi+1, where fΔ(x) and aΔ(x) are as above. Thus, this SISS version corresponds to the E-M discretization applied to an SDE with the new drift term aΔ(x) instead of a(x). On the other hand, one may consider the Milstein-type SISS methods in İzgi and Çetin (2019), whose order of strong convergence is 1, as an extension of the Euler-type SISS methods.
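The explicit SISS iteration can be sketched directly for a logistic-type SDE with r = 3, assuming for illustration a linear noise intensity σ(x) = 0.2x (all parameters illustrative):

```python
import numpy as np

def siss_logistic(A, delta, r, sigma, x0, T, n, rng):
    """Semi-implicit split-step (SISS) iteration for dX = (A*X - delta*X^r) dt + sigma(X) dW.
    The drift step uses f_Delta(x) = x / (1 + dt*(delta*x^(r-1) - A)), which is
    well-defined for small dt, followed by an explicit noise step."""
    dt = T / n
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        f_delta = x[i] / (1.0 + dt * (delta * x[i] ** (r - 1) - A))
        x[i + 1] = f_delta + sigma(x[i]) * rng.normal(0.0, np.sqrt(dt))
    return x

# Stochastic Verhulst-type example with r = 3: the cubic term tames large
# excursions, and the path fluctuates near the deterministic equilibrium
# sqrt(A/delta) = 1 (illustrative parameters).
rng = np.random.default_rng(4)
x = siss_logistic(A=1.0, delta=1.0, r=3, sigma=lambda v: 0.2 * v, x0=0.5,
                  T=10.0, n=10000, rng=rng)
print(x[-1])
```
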

3.3. Existing work on SDEs in mathematical neuroscience

In this section, we provide an integrated overview of stochastic models in neuroscience over the last few decades, highlighting general trends, contributions, strengths, and limitations of existing research in mathematical neuroscience. The incorporation of randomness into neural models in the literature has involved Brownian motion (from white noise), the Ornstein-Uhlenbeck process or Langevin equation (from colored noise), Poisson processes (from jumps and arrival times of inputs), Boltzmann machines (for binary neuron models), and Markov chain models (with suitable transition probabilities and discretized states), among others (Burkitt, 2001; Dayan and Abbott, 2001; Fasoli, 2013; Greven et al., 2026; Pietras et al., 2020; Schwalger et al., 2017; Wallace et al., 2011; Wegman and Habib, 1992; Yasue et al., 1988). However, our focus in this review is mainly on models based on Ito-type SDEs, Poisson processes, and jump-diffusion models.

In order to model the kinetics of the relative membrane-potential X(t) of N neurons in a nerve system with mutual electric interactions via neural connections, an N-dimensional Wiener process W(t) and a vector SDE of the form

dX(t)=A(t)dt+vdW(t),

are introduced in (Yasue et al. 1988). In this model, each component xi(t) has a drift term, Ai(t), which represents the mean forward current of the neuron i, as the total electric current flowing into its membrane. They use the least action principle and variational methods to derive a fundamental equation, which is a complex neural wave equation. The proposed model captures fundamental perceptual processes and memory mechanisms.

The sources, forms, and functionality of noise/uncertainty terms in neurodynamics have been extensively discussed in the literature. The noisy input from presynaptic neurons may regulate firing, explain subthreshold vs. suprathreshold activities, break up phase locking (or produce a stochastic phase locking called skipping), and improve encoding of incoming signals via stochastic resonance. Noise is present at all stages of sensory processing and significantly affects variability in both neural and behavioral responses. It may also contribute to balancing energy cost and reaction times in neural design. Thus, noise is not just a nuisance term, but a helpful part of how the brain processes and encodes information and makes decisions (Fasoli, 2013; Laing and Lord, 2009; Pisarchik and Hramov, 2023). The (constructive and more general) regulation effect of noise to enhance firing is known as coherence resonance (Deco and Schürmann, 1998; Laing and Lord, 2009; Pisarchik and Hramov, 2023), while anticoherence resonance refers to the diminishing regularity effects of noise intensity (Pisarchik and Hramov, 2023).

In order to study how noise can enhance information transfer in central neurons through stochastic resonance, a stochastic integrate-and-fire model for the subthreshold membrane-potential and its jump-diffusion version is adopted in (Deco and Schürmann 1998). This approach combines an Ornstein-Uhlenbeck SDE with a Poisson-distributed input spike train (from somatic synapses) as a jump-diffusion model:

dV(t) = (−V(t)/τ + μ) dt + σ dW(t) + ω dS(t),

where τ denotes the decay constant of the membrane-potential in the absence of the input signal, ω represents the soma-synaptic strength, and S(t) is a homogeneous Poisson process whose formal derivative is a train of Dirac delta functions. More explicitly, S(t) satisfies dS(t)/dt = Σi δ(t − ti) over all spike times ti, which are governed by a Poisson process with fixed intensity λ (Deco and Schürmann, 1998). Jump-diffusion models also appear in applications of synaptic plasticity. For example, (Robert and Vignoud 2021) used an SDE for the time evolution of the post-synaptic membrane-potential, with the neuron spikes modeled by an inhomogeneous Poisson process.
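On a small time step, such a Poisson jump term can be simulated by drawing dS ~ Poisson(λΔt). A minimal Euler-type sketch of a jump-diffusion integrate-and-fire dynamics of this kind (parameters illustrative, not those of the cited study):

```python
import numpy as np

def jump_diffusion_if(tau, mu, sigma, omega, lam, v0, T, n, rng):
    """Euler-type scheme for dV = (-V/tau + mu) dt + sigma dW + omega dS,
    where S is a homogeneous Poisson process with intensity lam: over a small
    step, the jump count dS ~ Poisson(lam*dt)."""
    dt = T / n
    v = np.empty(n + 1)
    v[0] = v0
    for i in range(n):
        dW = rng.normal(0.0, np.sqrt(dt))
        dS = rng.poisson(lam * dt)        # Poisson input spikes arriving in [t, t+dt)
        v[i + 1] = v[i] + (-v[i] / tau + mu) * dt + sigma * dW + omega * dS
    return v

# Subthreshold membrane fluctuations driven by diffusive noise plus sparse
# synaptic jumps (all parameters illustrative).
rng = np.random.default_rng(5)
v = jump_diffusion_if(tau=10.0, mu=0.1, sigma=0.3, omega=0.2, lam=2.0,
                      v0=0.0, T=100.0, n=100000, rng=rng)
print(v.mean())  # fluctuates around the stationary mean tau*(mu + omega*lam) = 5
```
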

When the average overall synaptic input is large, the corresponding Poisson process can be approximated by a suitable Gaussian process. Then, the synaptic current dynamics with “white noise” would follow an Ornstein-Uhlenbeck process. Consequently, this uncertainty appears on the voltage dynamics in the form of “colored noise”. An application to the stochastic FitzHugh-Nagumo model with additive and periodic stochastic forcing can be written in the following SDE form (with a slight change of notation):

dV(t) = (bV(V − 0.5)(1 − V) − cw(t) + d sin(ft) + I(t)) dt + σ dZ(t),

where w(t) is a slow recovery variable (which depends on V in a linear ODE), I(t) is the bias current, f is the forcing frequency and Z(t) is an Ornstein-Uhlenbeck process. For a summary of these ideas and references, one can check (Laing and Lord, 2009, Chapter 4). Note that such a system of nonlinear equations can be efficiently solved via the SISS methods described in subsection 3.2 of this manuscript.
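As a rough illustration of such a model, the following sketch integrates a stochastic FitzHugh-Nagumo system with Ornstein-Uhlenbeck forcing using plain Euler-Maruyama rather than the SISS schemes of Subsection 3.2; the specific linear recovery equation, the OU form for Z(t), and all parameters are assumptions made for this example:

```python
import numpy as np

def stochastic_fhn(b, c, d, f, I, eps, gamma, sigma, tau_z, T, n, rng):
    """Euler-Maruyama sketch of a stochastic FitzHugh-Nagumo system with
    colored (Ornstein-Uhlenbeck) forcing:
      dV = (b*V*(V-0.5)*(1-V) - c*w + d*sin(f*t) + I) dt + sigma*dZ
      dw = eps*(V - gamma*w) dt            (assumed linear recovery equation)
      dZ = -(Z/tau_z) dt + dW              (assumed OU noise form)"""
    dt = T / n
    V = np.empty(n + 1); w = np.empty(n + 1); Z = np.empty(n + 1)
    V[0] = w[0] = Z[0] = 0.0
    t = 0.0
    for i in range(n):
        dW = rng.normal(0.0, np.sqrt(dt))
        dZ = -(Z[i] / tau_z) * dt + dW
        V[i + 1] = V[i] + (b * V[i] * (V[i] - 0.5) * (1 - V[i])
                           - c * w[i] + d * np.sin(f * t) + I) * dt + sigma * dZ
        w[i + 1] = w[i] + eps * (V[i] - gamma * w[i]) * dt
        Z[i + 1] = Z[i] + dZ
        t += dt
    return V, w

rng = np.random.default_rng(6)
V, w = stochastic_fhn(b=4.0, c=1.0, d=0.1, f=1.0, I=0.3, eps=0.05, gamma=1.0,
                      sigma=0.05, tau_z=1.0, T=100.0, n=100000, rng=rng)
print(V.min(), V.max())  # excursions stay bounded by the cubic nullcline
```
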

By focusing on two independent Poisson processes (for excitatory and inhibitory postsynaptic potentials, respectively), the behavior of balanced neurons with reversal potential is analyzed in (Burkitt 2001) using a stochastic version of the leaky integrate-and-fire (LIF) model. This stochastic line also connects to classical first-passage-time and diffusion-based analyses of leaky integrate-and-fire models (Ditlevsen and Lansky, 2007; Lansky and Ditlevsen, 2008). Related analytical work has further developed fluctuation-dissipation and cross-correlation-response relations for spiking systems under diffusive and shot-noise drive (Lindner, 2022; Stubenrauch and Lindner, 2024). For balanced neurons that receive periodic input, the spiking rate is modeled with an inhomogeneous Poisson process, and the relationships between the timing of these periodic synaptic inputs and the output spikes are studied. Using an Ito-type SDE model for LIF neurons, rapid random fluctuations (RRF) in neurons are studied in (Hong et al. 2016) to explain how brain signals propagate. The model captures RRF in neuronal membrane potentials and offers analytical support for the presence of a force that drives signal movement in the brain.

By adding Brownian noise to the Hodgkin-Huxley model, an Ito-type SDE is used to model voltage-gated ion-channel behavior in (Saarinen et al. 2008) with a focus on cerebellar granule cells (which are small excitatory neurons). They form an SDE model with the activation and inactivation of six different ionic conductances:

dX(t)=(αx(Vm)(1-X(t))-βx(Vm)X(t))dt+σdW,

where X(t) represents the gating variable for the ion-channel type in question, αx and βx are the rate functions of activation or inactivation processes, respectively, and W is the Brownian motion. They emphasize the role of the SDE approach for more accurately and efficiently modeling the random behavior of neurons and ion-channels, compared to Markov chain and deterministic models. They reproduce irregular firing observations of cerebellar granule cells more realistically and efficiently than previous stochastic methods by also using the expectation-maximization algorithm. This method has the potential to investigate intricate neural network dynamics in the cerebellum and other neuron types as well (Saarinen et al., 2008).

Another application of stochastic models to improve deterministic methods is modeling cerebrospinal fluid (CSF) flow and intracranial pressure fluctuations observed in clinical data. For example, a stochastic version of the Marmarou model is developed in (Raman 2011) by introducing the following Ito-type SDE for the dynamic flow of CSF:

dp = {E p I(t) − E p (p − pb)/R} dt + σ E p dW,

where p is the CSF pressure, pb is the baseline pressure, E is the cerebral elasticity, and I is the rate of external volume addition. This SDE has an explicit analytical solution (as a stochastic Verhulst equation), but its more complex versions and similar quadratic models [like the quadratic integrate-and-fire models in (Laing and Lord 2009)] can be solved using the stable numerical methods discussed in Subsection 3.2. One of the key findings of this study is that when noise reaches a certain level, the probabilities change quickly based on how easily CSF flows and how strong the noise is (Raman, 2011).

By utilizing perturbative field-theoretic and path integral methods, the moments of commonly used SDEs such as FitzHugh-Nagumo and Ornstein-Uhlenbeck models are studied in (Chow and Buice 2015). While these methods are complex, they are powerful tools in mathematical neuroscience and can also be applied to study larger systems such as networks of interacting neurons or systems with fixed disorder.

Recent literature and dissertations in stochastic neural dynamics also include models with multiple variables usually coupled together to explain how one or more brain regions interact, how to predict spiking behavior of neurons, or how the membrane potential dynamics are related to intracellular movement. For example, the following (reduced) collection of coupled nonlinear SDEs is proposed in (Deco et al. 2014) for the global brain dynamics for the excitatory (E) and inhibitory (I) variables:

Ii(E) = WE I0 + w+ JNMDA Si(E) + G JNMDA Σj Cij Sj(E) − Ji Si(I) + Iexternal
Ii(I) = WI I0 + JNMDA Si(E) − Si(I) + λ G JNMDA Σj Cij Sj(E)
ri(I) = H(I)(Ii(I)) = (aI Ii(I) − bI) / (1 − exp(−dI (aI Ii(I) − bI)))
dSi(E)(t)/dt = −Si(E)/τE + (1 − Si(E)) γ ri(E) + σ vi(t)
dSi(I)(t)/dt = −Si(I)/τI + ri(I) + σ vi(t),

where ri(E,I) denotes the firing rate of the excitatory (E) or inhibitory (I) population in brain area i, Si(E,I) is the average excitatory or inhibitory synaptic gating variable at area i, and Ii(E,I) represents the corresponding input current to the excitatory or inhibitory variable, respectively. Their approach simplifies a high-dimensional system of neurons interacting randomly by treating groups of elements as independent entities in a dynamic-mean-field model (Deco et al., 2014).

By expanding the Hodgkin-Huxley neuron model to a system of four SDEs, (Tierney 2024) investigates the spiking behavior of a neuron to provide a more realistic understanding of the dynamics of membrane potential (voltage, V) as well as the gating variables (m, n, h) at a given time in the ion channel:

dV = [(I − (INa + IK + IL))/C] dt + σV dW1(t)
dm = [αm(V)(1 − m) − βm(V) m] dt + σm dW2(t)
dh = [αh(V)(1 − h) − βh(V) h] dt + σh dW3(t)
dn = [αn(V)(1 − n) − βn(V) n] dt + σn dW4(t),

where m and n represent the proportions of open sodium and potassium activation gates, respectively, and h is the sodium inactivation gating variable. Moreover, σV, σm, σh, and σn denote the diffusion coefficients of the four state variables of the model. The model is solved via the E-M method, and an ensemble Kalman filter is implemented for data assimilation to describe how stochasticity influences a neuron's behavior (Tierney, 2024).

A non-linear stochastic model with nine variables is introduced in Ðorđević et al. (2023) for dopamine dynamics in the rat striatum. The model captures synthesis, storage, release, uptake, and metabolism, with noise terms driven by a nine-dimensional Brownian motion process. The article also proves the existence, uniqueness, and positivity of solutions and presents bounds for the moments of the SDEs. The authors use a positivity-preserving and convergent implicit numerical method to simulate the coordinate processes and provide visualizations.

A multidimensional stochastic model is considered in (Ditlevsen and Samson 2016) to estimate biophysical parameters of neuronal membrane-potential dynamics from intracellular recordings. The model uses an m-dimensional Wiener process Wt, a p-dimensional parameter vector θ, and a d-dimensional process of the form Xt = (Vt, Yt) in the Ito-type SDE

dXt=b(Xt;θ)dt+Σ(Xt;θ)dWt, (5)

where Vt denotes the membrane-potential, and Yt represents the d − 1 dimensional vector of unobserved variables, including gating variables and the proportion of open ion channels (for a specific ion) or specific synaptic input. In Equation 5, the expressions b and Σ are the corresponding vector drift and matrix diffusion functions, respectively. The main challenge is parameter estimation, as only the membrane-potential is directly observed, making it harder to infer hidden states from these partially observed complex models.

Many research papers study brain damage/injuries and diseases, including seizures and Alzheimer's disease, via stochastic models and their numerical solutions, occasionally with the aid of machine learning methods. For example, a stochastic brain network model is presented in Jirsa et al. (2017) for brain interventions by incorporating individual structural data, epileptogenic zones, and MRI lesions. Using high-performance computing and simulations of SDEs via the E-M method, they conduct systematic parameter exploration and model validation for the development of tailored therapeutic strategies. A high-order linear SDE dynamic system in a d-dimensional space is used in (Chen et al. 2017) to model and optimize the movement trajectory (selected by the central nervous system) of the human arm. By extending the integrate-and-fire model (for stochastic neuron activity) and employing a stochastic control method, analytical solutions for the optimal trajectory, velocity, and variance are explicitly derived in this linear model, and a three-dimensional example is provided.

By utilizing advanced connectometry and a stochastic model, Sinha et al. (2019) studies significant white-matter changes in patients with idiopathic generalized epilepsy (IGE), including damage to the cingulum, fornix, and superior longitudinal fasciculus, as well as stronger thalamo-cortical tract integrity. Incorporating these findings into a computational model, the study suggests that enhanced cortico-reticular and impaired cortico-cortical connections drive seizure-like dynamics, highlighting impaired cortical-to-thalamic connectivity as a potential therapeutic target. Seizure transitions were simulated using an SDE model with a noise term, numerically solved via the E-M method with a very small step size.

A new EEG signal modeling approach using SDEs with self-similar fractional Lévy stable processes (the innovation model) is introduced in (Tajmirriahi and Amini 2021) to extract discriminative features for seizure detection. By using a scale-invariant fractional derivative and fitting a probability distribution to the derived signals, the features are then used to train a support vector machine (SVM) classifier. This approach achieves very high classification accuracy (more than 99 percent) in distinguishing between healthy and epileptic EEG segments. The simple and accurate method is also computationally efficient, as it does not require signal decomposition. Another study that employs an innovation process and machine learning methods (SVM, K-nearest neighbor, and random forest) for high classification accuracy is the SDE model of (Raisi-Nafchi et al. 2025b), which is applied to post-contrast T1-weighted MRI images for grading astrocytomas (a type of brain tumor) before surgery. The model utilizes a fractional Laplacian filter to capture specific properties of the MRI data and then feeds the resulting features into the classifier for grading. The proposed model achieves high accuracy (98.49%) using a two-dimensional SVM and shows great promise for improving tumor grading (Raisi-Nafchi et al., 2025b). These authors also applied SDEs along with a statistical modeling framework to analyze MRI images and predict the isocitrate dehydrogenase mutation status in grade IV gliomas (Raisi-Nafchi et al., 2025a).

A novel stochastic model is introduced in (MacIver and Shaheen 2024) by adding Brownian diffusion to a deterministic reaction-diffusion model used in biological invasion applications. By integrating ideas from graph theory and Bayesian statistics, this model is used to simulate the propagation of misfolded proteins in Alzheimer's disease, a neuro-degenerative disorder characterized by cognitive decline. The corresponding SDE formulation is given by

dci = (−Σj=1N Lij cj + α ci[1 − ci]) dt + σ√(2ci) dW(t),

where ci denotes the misfolded protein concentration in node i, and α represents the conversion rate. This stochastic version contributes to more realistic simulations of the disease and captures the variability in misfolded protein concentration across different brain regions over time.
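A rough simulation sketch of a network SDE of this type, assuming a graph-Laplacian coupling −Lc and a multiplicative noise intensity σ√(2cᵢ) (both are modeling assumptions made for this example, not necessarily the cited formulation):

```python
import numpy as np

def protein_spread(L_mat, alpha, sigma, c0, T, n, rng):
    """Euler-Maruyama sketch of a network reaction-diffusion SDE of the form
    dc_i = (-sum_j L_ij c_j + alpha*c_i*(1 - c_i)) dt + sigma*sqrt(2*c_i) dW_i.
    Concentrations are clipped to [0, 1] after each step so the sqrt stays defined."""
    dt = T / n
    c = c0.copy()
    N = len(c)
    for _ in range(n):
        drift = -L_mat @ c + alpha * c * (1.0 - c)
        noise = sigma * np.sqrt(2.0 * c) * rng.normal(0.0, np.sqrt(dt), size=N)
        c = np.clip(c + drift * dt + noise, 0.0, 1.0)
    return c

# Path graph on 5 nodes, seeded with misfolded protein at one end
# (illustrative parameters).
A = np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)   # adjacency of a path graph
L_mat = np.diag(A.sum(axis=1)) - A                     # graph Laplacian L = D - A
c0 = np.array([0.3, 0.0, 0.0, 0.0, 0.0])
rng = np.random.default_rng(8)
c = protein_spread(L_mat, alpha=0.5, sigma=0.05, c0=c0, T=40.0, n=40000, rng=rng)
print(c)  # the misfolded fraction spreads along the path and grows logistically
```
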

By adding Brownian noise to a biologically plausible deterministic (Izhikevich) model and utilizing machine-learning techniques, a neural SDE is proposed by (Sato et al. 2025) for modeling heterogeneous spike patterns, designing external current and controlling spike-timing. External currents are iteratively trained to minimize both firing mismatches and timing errors using stochastic gradient descent and back-propagation techniques. Simulations indicate successful spike control including regular spiking, bursting, and fast spiking patterns.

A stochastic model based on Langevin equations and Fokker-Planck equation is introduced in (Staii 2025) to describe random axonal motion, connect the drift to a biophysical actin-myosin-clutch model, and investigate the stability for estimating the fluctuation spectra and oscillatory regimes.

Starting from a diffusive approximation of the mean-field limit of a system of SDEs that model the collective behavior of an ensemble of neurons, (Carrillo et al. 2013) examined the global existence of classical solutions to the initial boundary-value problem of a nonlinear parabolic equation. The study derives a nonlocal Fokker-Planck equation whose coefficients depend nonlinearly on the solution through the probability flux across the boundary. The Fokker-Planck equation with nonlinear boundary conditions is then transformed into a Stefan-like free-boundary problem with a Dirac-delta source term.

Some other multidimensional stochastic (integrate-and-fire) neuron models discuss spike-triggered adaptation in non-equilibrium settings (Schwalger and Lindner, 2015), subthreshold resonance under weak and high noise settings (Brunel et al., 2003), colored noise for temporally correlated spike inputs (Schwalger et al., 2015), noisy limit cycles and quasi-cycles (Wallace et al., 2011), and dimension reduction techniques that include spectral decomposition of the corresponding Fokker-Planck operator (Augustin et al., 2017).

In conclusion, these studies highlight significant recent advances for stochastic models in the neurosciences. The integration of stochasticity in neuroscience research provides a more realistic approach, enabling models to better explain the behavior of neurons and brain activity.

Modeling the interaction of a large population of neurons becomes challenging as the complexity of the neural system grows. Although SDEs provide better insight into the dynamics of a single-neuron or a neural network, it is computationally difficult to use a very large system of SDEs in modeling the entire brain. To overcome this issue, the mean-field PDEs are utilized. The mean-field PDEs model large-scale neural interactions and provide a mesoscopic/macroscopic description of them. These equations connect the gaps between the models for individual neurons and those for collective groups of neurons observed in neural systems. Their details are discussed in the next section.

4. Mean-field PDE models in neuroscience

Mean-field PDE models provide mesoscopic/macroscopic descriptions of large neuronal networks by following probability or population densities over suitable state variables instead of tracking individual neurons. From a modeling viewpoint, they arise as limits of large interacting stochastic systems and can often be interpreted as nonlinear Fokker-Planck or transport equations of McKean-Vlasov type (Fasoli, 2013); see also (Carrillo and Roux 2025) for a recent comprehensive survey of such models. Related mesoscopic reductions linking spiking-network dynamics to population-level descriptions include the dynamic mean-field approach of Mattia and Del Giudice, spike-to-rate reductions such as those of Ostojic and Brunel, and fluctuation-driven population reductions developed more recently by (Goldobin et al. 2021); (Mattia and Del Giudice 2002, 2004); (Ostojic and Brunel 2011).

In this section, we first recall the mean-field construction in a unified framework, and then discuss four PDE families that have emerged in neuroscience: parabolic Fokker-Planck equations for membrane-potential or rate variables (Section 4.2), hyperbolic age- or time-elapsed structured equations (Section 4.3), kinetic Fokker-Planck systems for voltage-conductance and FitzHugh-Nagumo-type dynamics (Section 4.4), and stochastic neural field PDEs for spatially extended cortical activity (Section 4.5).

4.1. Theoretical background: from microscopic networks to mean-field PDEs

Consider a network of N neurons. For each neuron i ∈ {1, …, N} we denote by Zi(t) its state at time t, taking values in a space E (e.g., E = ℝ for a membrane-potential, E = ℝ2 for a voltage-recovery pair, or E = ℝ+×ℝ for an age-potential pair). A broad class of interacting stochastic models can be written as

dZi(t) = b(Zi(t), μtN) dt + σ(Zi(t), μtN) dWi(t) + dJi(t),    i = 1, …, N,

where μtN = (1/N) Σj=1N δZj(t) is the empirical measure of the network, b and σ are drift and diffusion coefficients, the Wi's are independent Brownian motions (or other noise sources), and Ji represents jump terms such as spikes and resets. In point-process formulations, the dynamics may instead be specified through stochastic intensities or hazard functions driven by μtN.

In the mean-field regime of a large, weakly coupled network, one expects a law-of-large-numbers behavior as N → ∞: the empirical measure μtN converges to a deterministic probability measure μt on E, and the dynamics of a typical neuron converges to a nonlinear (McKean-Vlasov) process Z(t) with law μt,

$$dZ(t) = b\big(Z(t), \mu_t\big)\,dt + \sigma\big(Z(t), \mu_t\big)\,dW(t) + dJ(t), \qquad \mu_t = \mathcal{L}(Z(t)).$$

When Z(t) has continuous paths and a density p(t, z), its law solves a nonlinear Fokker-Planck or transport equation of McKean-Vlasov type. If some coordinates of Z are ages (time since the last spike), or if jumps and reset mechanisms are present, transport terms and renewal-type boundary conditions appear, leading to age-structured or kinetic PDEs. Rigorous derivations rely on the theory of interacting particle systems and the propagation of chaos (Carrillo and Roux, 2025; Fasoli, 2013).
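As a minimal illustration of this limit, the following sketch simulates a toy McKean-Vlasov system in which each particle is attracted to the empirical mean of the population. All parameter values and the function name `simulate_mckean_vlasov` are our illustrative choices, not taken from the cited works; in the large-N limit the centered deviations behave approximately like an Ornstein-Uhlenbeck process with stationary variance σ²/2.

```python
import numpy as np

def simulate_mckean_vlasov(n_particles=2000, t_end=10.0, dt=0.01,
                           sigma=1.0, seed=0):
    """Euler-Maruyama for the toy McKean-Vlasov system
    dX_i = -(X_i - <mu_t^N>) dt + sigma dW_i,
    where <mu_t^N> is the empirical mean (the simplest mean-field coupling)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, size=n_particles)
    for _ in range(int(t_end / dt)):
        drift = -(x - x.mean())   # interaction through the empirical measure
        x = x + drift * dt + sigma * np.sqrt(dt) * rng.normal(size=n_particles)
    return x

x = simulate_mckean_vlasov()
# Deviations from the empirical mean are approximately OU with stationary
# variance sigma^2 / 2 = 0.5 in the large-N limit.
print(np.var(x - x.mean()))
```

The empirical measure of the final particle positions approximates the law $\mu_t$ of the limiting nonlinear process; refining dt and increasing N improves the approximation at the usual Monte Carlo rate.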

4.2. Fokker-Planck models

Fokker-Planck equations arise as forward Kolmogorov equations for stochastic neuron models and noisy neural-mass systems. They describe the time evolution of a density p(t, x) for a process

$$dX_t = b(X_t, t)\,dt + \sigma(X_t, t)\,dW_t,$$

with $X_t \in \mathbb{R}^d$ encoding, for example, a membrane-potential or a low-dimensional rate/decision variable. If $X_t$ admits a density, then p satisfies

$$\partial_t p(t,x) = -\nabla_x \cdot \big(b(x,t)\,p(t,x)\big) + \frac{1}{2}\,\nabla_x \cdot \big(a(x,t)\,\nabla_x p(t,x)\big), \qquad a = \sigma\sigma^{\top}. \tag{6}$$

When b or a depend on p through a self-consistent firing rate or mean field, Equation 6 becomes a nonlinear Fokker-Planck equation of McKean-Vlasov type.

An important example outside the classical spiking setting is provided by Kuramoto-type oscillator networks, where population-level nonlinear Fokker-Planck equations have been used to describe synchronization phenomena in cortical oscillations (Breakspear et al., 2010).

4.2.1. Network noisy leaky integrate-and-fire (NNLIF) models

In a homogeneous population of NNLIF neurons, let $I_{\mathrm{syn},i}(t)$ denote the incoming synaptic current for neuron i at time t. Then the subthreshold voltage dynamics of neuron i is governed by an SDE of the form

$$dV_i(t) = \Big(-\frac{1}{\tau_m}\big(V_i(t) - V_{\mathrm{rest}}\big) + I_{\mathrm{syn},i}(t)\Big)\,dt + \sigma\,dW_i(t),$$

with threshold and reset at $V_{\mathrm{th}}$ and $V_{\mathrm{reset}}$, respectively. In a mean-field approximation, $I_{\mathrm{syn},i}(t)$ is replaced by an effective input depending on the population firing rate S(t) and a connectivity parameter J, possibly with synaptic delay. In the limit as N → ∞, the membrane-potential density p(t, v) satisfies a nonlinear Fokker-Planck equation on $(V_{\min}, V_{\mathrm{th}})$ of the form

$$\partial_t p(t,v) + \partial_v\Big(\Big[-\frac{1}{\tau_m}(v - V_{\mathrm{rest}}) + \mu(t)\Big]\,p(t,v)\Big) - \frac{\sigma^2}{2}\,\partial_{vv}^2 p(t,v) = 0,$$

where the drift μ(t) is determined by the network activity S(t), and boundary conditions encode threshold crossing, a flux-based definition of the firing rate S(t), and reset at $V_{\mathrm{reset}}$.
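At the particle level, this construction can be sketched with an Euler-Maruyama discretization, a flux-based estimate of S(t), and the threshold-reset rule. The parameter values, the linear coupling form J·S(t), and the function name below are illustrative assumptions, not the specific setting of the papers cited in this subsection.

```python
import numpy as np

def simulate_nnlif(n=1000, t_end=20.0, dt=0.005, tau_m=1.0,
                   v_rest=0.0, v_th=1.0, v_reset=0.0,
                   sigma=0.8, coupling=0.1, seed=1):
    """Monte Carlo (Euler-Maruyama) simulation of a homogeneous NNLIF
    population; the mean-field input is the instantaneous population
    firing rate S(t) scaled by a coupling constant J (illustrative)."""
    rng = np.random.default_rng(seed)
    v = rng.uniform(v_reset, 0.5 * v_th, size=n)
    rate_trace = []
    s = 0.0                                # population firing rate S(t)
    for _ in range(int(t_end / dt)):
        drift = -(v - v_rest) / tau_m + coupling * s
        v = v + drift * dt + sigma * np.sqrt(dt) * rng.normal(size=n)
        spiked = v >= v_th
        s = spiked.sum() / (n * dt)        # flux-based estimate of S(t)
        v[spiked] = v_reset                # reset rule
        rate_trace.append(s)
    return v, np.array(rate_trace)

v, rates = simulate_nnlif()
print(rates[len(rates) // 2:].mean())      # time-averaged firing rate
```

A histogram of the final voltages approximates the stationary density p(v) on $(V_{\min}, V_{\mathrm{th}})$; with weak excitatory coupling as here the simulation settles into an asynchronous state rather than the blow-up regime discussed below.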

An early analytical study of sparse inhibitory integrate-and-fire networks identified a sharp transition from stationary to oscillatory global activity in the low-firing-rate regime, providing a classical reference point for later population-level analyses of oscillations (Brunel and Hakim, 1999). Subsequent analytical studies of NNLIF equations have clarified several additional key regimes. The basic model admits zero, one, or two stationary states depending on connectivity, and in fully excitatory networks, one can prove finite-time blow-up of the firing rate under suitable conditions on the initial data, highlighting the delicate balance between excitation and noise (Cáceres et al., 2011). Extensions, including a refractory state and randomness in the firing potential, show that the blow-up scenario persists and that spontaneous activity can arise in parameter regimes where the deterministic-threshold model would blow up (Cáceres and Perthame, 2014). For inhibitory or weakly connected networks, and in particular for delayed NNLIF systems, one can construct global-in-time solutions and prove convergence toward the stationary distributions by exploiting a discrete sequence of pseudo-equilibria that captures the long-time behavior for large transmission delays (Cáceres et al., 2024). These results link the nonlinear Fokker-Planck structure of NNLIF models to phenomena such as plateau states and delay-induced oscillations in population activity.

4.2.2. Fokker-Planck models for decision, rate, and evoked population dynamics

Fokker-Planck equations also arise for low-dimensional decision and rate models derived from stochastic neural-mass systems. In two-choice decision models, a stochastic Wilson-Cowan-type system with a drift field F and noise amplitude ε leads, at the level of probability density p(t, x) of the two population activities x ∈ Ω⊂ℝ2, to a Fokker-Planck equation of the form

$$\partial_t p(t,x) = -\nabla_x \cdot \big(F(x)\,p(t,x)\big) + \varepsilon^2\,\Delta_x p(t,x), \qquad x \in \Omega \subset \mathbb{R}^2,$$

supplemented with no-flux boundary conditions on ∂Ω that enforce bounded firing rates and conservation of total probability (Carrillo et al., 2011). For small noise, the dynamics exhibit metastability: solutions rapidly concentrate along a slow manifold connecting an unstable “spontaneous” state to two stable “decision” states, and only on much longer time scales relax toward equilibrium. Under suitable inward-pointing assumptions on the drift field F, one can prove the existence, uniqueness, and positivity of a stationary density and convergence of solutions toward this equilibrium by combining Krein-Rutman theory with generalized relative entropy methods (Carrillo et al., 2011). The stationary distribution encodes decision probabilities through the mass near the different attractors, and the observed concentration along a slow manifold suggests reduced one-dimensional drift-diffusion approximations for the decision variable (Carrillo et al., 2011).
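The suggested one-dimensional drift-diffusion reduction can be caricatured by a double-well SDE, dx = (x − x³) dt + ε dW, with the unstable "spontaneous" state at x = 0 and the two "decision" states at x = ±1. This toy potential is our own illustrative choice for exposition, not the reduction derived in Carrillo et al. (2011).

```python
import numpy as np

def decision_paths(n_trials=500, t_end=20.0, dt=0.01, eps=0.3, seed=2):
    """1D drift-diffusion caricature of the slow-manifold reduction:
    dx = (x - x**3) dt + eps dW, started at the unstable spontaneous
    state x = 0, with stable decision states at x = +/-1 (illustrative)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_trials)
    for _ in range(int(t_end / dt)):
        x += (x - x**3) * dt + eps * np.sqrt(dt) * rng.normal(size=n_trials)
    return x

x = decision_paths()
# Fractions of trials ending near each decision attractor
print((x > 0.5).mean(), (x < -0.5).mean())
```

For small ε, almost all trajectories end near one of the two attractors, and the split between them plays the role of the decision probabilities encoded in the stationary density.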

Furthermore, Harrison et al. (2005) propose a population-density approach to model the evoked response potential with random arrivals at peak times, and analyze the dispersive effects of stochastic forces to compare equilibrium response rates of stochastic versus deterministic models. Taking expectations over the probability density that represents an ensemble of trajectories, the mean membrane potential and firing rates are studied. Model parameters are estimated via a Bayesian approach, and the corresponding Fokker-Planck equations are solved numerically by discretization.

4.3. Age- and time-elapsed structured PDEs

In computational neuroscience, age-structured and time-elapsed formulations, often referred to as refractory-density equations, provide an alternative mean-field description in which the state variable is the time elapsed since the last spike rather than the membrane-potential (Chizhov and Graham, 2007, 2008; Gerstner, 2000, 2001; Schwalger and Chizhov, 2019). At the microscopic level, each neuron carries an age variable $A_i(t) \ge 0$ that increases at unit speed between spikes and is reset to zero at spike times,

$$\dot{A}_i(t) = 1,$$

with spike intensity (hazard rate) $\lambda(A_i(t), \mu_t^N)$ depending on age and population activity. In the mean-field limit, the age distribution n(t, a) satisfies

$$\partial_t n(t,a) + \partial_a n(t,a) + \lambda\big(a, F[n(t,\cdot)]\big)\,n(t,a) = 0, \qquad a > 0,$$

together with the renewal boundary condition

$$n(t,0) = \int_0^{\infty} \lambda\big(a, F[n(t,\cdot)]\big)\,n(t,a)\,da.$$

These equations naturally encode refractoriness through the age variable and can be extended to incorporate adaptation and fatigue through fragmentation-type terms, as well as transmission delays through suitable coupling kernels (Pakdaman et al., 2009, 2014; Perthame et al., 2025). Analytical studies of age- and time-elapsed structured models have shown that, depending on the connectivity regime and on heterogeneity or adaptation mechanisms, one can observe rigorous desynchronization and exponential relaxation to steady states in weakly coupled or heterogeneous networks (Kang et al., 2015; Pakdaman et al., 2009, 2014), as well as self-sustained or delay-induced periodic solutions in strongly nonlinear and delayed regimes (Pakdaman et al., 2009; Perthame et al., 2025).
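A minimal discretization of the time-elapsed equation illustrates how the renewal boundary condition conserves total mass: choosing dt = Δa makes the age transport an exact shift, and the mass removed by the hazard re-enters at age zero. The sigmoidal hazard (uncoupled, i.e., without the feedback F[n]), the crude treatment of the last age bin, and the function name are all illustrative assumptions.

```python
import numpy as np

def step_age_density(n, lam, da):
    """One step of the time-elapsed (renewal) equation with dt = da,
    so the age transport is an exact shift: mass that fires (hazard
    lam) is removed from its age bin and re-injected at age 0."""
    fired = lam * da * n          # fraction firing in this step (per bin)
    fired = np.minimum(fired, n)  # keep the update positivity-preserving
    survivors = n - fired
    new = np.empty_like(n)
    new[1:] = survivors[:-1]      # aging: shift by one bin
    new[-1] += survivors[-1]      # crude absorbing last bin (illustrative)
    new[0] = fired.sum()          # renewal boundary condition
    return new

da = 0.01
ages = np.arange(0.0, 5.0, da)
lam = 1.0 / (1.0 + np.exp(-(ages - 1.0) / 0.1))   # refractory-style hazard
n = np.exp(-ages)
n /= n.sum() * da                                 # initial age density
for _ in range(2000):
    n = step_age_density(n, lam, da)
print(n.sum() * da)    # total mass, conserved by construction
```

By construction, the mass injected at a = 0 exactly equals the mass removed by firing, which is the discrete counterpart of the renewal boundary condition above.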

4.4. Kinetic PDEs for voltage-conductance and FitzHugh-Nagumo dynamics

Kinetic mean-field models retain several continuous state variables per neuron, such as a membrane-potential and one or more conductance or recovery variables, together with noise and mean-field coupling. They aim to bridge biophysically detailed single-neuron dynamics with more aggregated population models.

A paradigmatic example is the voltage-conductance kinetic equation introduced by Perthame and Salort for integrate-and-fire networks with slow synaptic conductances (Perthame and Salort, 2013). In the mean-field limit the joint density p(t, v, g) of voltage v and conductance g satisfies a kinetic Fokker-Planck equation of the form

$$\partial_t p(t,v,g) + \partial_v\big(f(v,g)\,p\big) + \partial_g\big(h(v,g,\Phi[p])\,p\big) - \frac{\sigma_V^2}{2}\,\partial_{vv}^2 p - \frac{\sigma_G^2}{2}\,\partial_{gg}^2 p = 0,$$

where f and h encode the passive and synaptic currents, Φ[p] is a nonlocal functional of p linked to the population firing rate, and in the original Perthame-Salort model the diffusion acts only on the conductance variable g (i.e., $\sigma_V = 0$), yielding a hypoelliptic structure. The equation is complemented by threshold-reset conditions in v and suitable boundary behavior as g → 0 and g → ∞. Perthame and Salort established global a priori bounds on the density and the firing rate, analyzed the existence of stationary solutions in weakly connected regimes, and highlighted a “paradox” with respect to parabolic NNLIF equations, which can exhibit finite-time blow-up whereas the kinetic model remains globally bounded (Perthame and Salort, 2013).

Another prominent example is the kinetic FitzHugh-Nagumo model, in which the probability density of the state (V, W) satisfies a nonlinear Fokker-Planck equation associated with mean-field-coupled FitzHugh-Nagumo dynamics with noise (Mischler et al., 2016). The associated density p(t, v, w) satisfies a hypoelliptic nonlocal kinetic Fokker-Planck equation on ℝ2, where the drift combines the cubic FitzHugh-Nagumo vector field with a McKean-Vlasov term depending on the average voltage. Mischler et al. proved the global well-posedness of this equation, the existence of non-trivial stationary probability densities, and uniqueness together with exponential stability of the stationary state in a weakly nonlinear (small connectivity) regime (Mischler et al., 2016). Moreover, their spectral analysis and numerical simulations show that increasing connectivity can induce symmetry breaking and give rise to time-periodic collective oscillations, illustrating how kinetic PDEs capture transitions between asynchronous and oscillatory network activity (Mischler et al., 2016).
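The kinetic FitzHugh-Nagumo equation can be explored numerically through its particle approximation: N noisy FitzHugh-Nagumo units coupled via the population-average voltage. The sketch below uses an electrical-type McKean-Vlasov coupling J(⟨v⟩ − v_i) and textbook parameter values; both are our illustrative assumptions rather than the precise setting of Mischler et al. (2016).

```python
import numpy as np

def simulate_mf_fhn(n=500, t_end=50.0, dt=0.01, eps=0.08, I=0.5,
                    coupling=0.5, sigma=0.2, seed=3):
    """Particle approximation of mean-field FitzHugh-Nagumo dynamics:
    each unit feels the population-average voltage through an
    illustrative McKean-Vlasov coupling J*(<v> - v_i)."""
    rng = np.random.default_rng(seed)
    v = rng.normal(0.0, 0.5, n)
    w = rng.normal(0.0, 0.5, n)
    mean_v = []
    for _ in range(int(t_end / dt)):
        vbar = v.mean()
        dv = v - v**3 / 3 - w + I + coupling * (vbar - v)
        dw = eps * (v + 0.7 - 0.8 * w)        # slow recovery variable
        v = v + dv * dt + sigma * np.sqrt(dt) * rng.normal(size=n)
        w = w + dw * dt
        mean_v.append(vbar)
    return np.array(mean_v)

mean_v = simulate_mf_fhn()
print(mean_v.min(), mean_v.max())
```

Tracking the average voltage over time is the particle-level analogue of following the McKean-Vlasov term in the kinetic equation; scanning the coupling strength in such simulations is one way to probe the asynchronous-to-oscillatory transition discussed above.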

4.5. Stochastic neural field PDEs

Neural field models describe coarse-grained activity in spatially distributed neuronal populations, typically cortical sheets or grid-cell networks. Classical formulations are integro-differential equations for a field u(t, x) defined on a spatial domain, with nonlocal coupling via a synaptic kernel. A classic review of deterministic Wilson-Cowan/Amari-type neural field theories, including their mathematical analysis and pattern-forming mechanisms, is given by Coombes (2005).

Building on these classical formulations, stochastic neural field models incorporate noise either directly at the field level or via fluctuations in local populations, and exhibit spatial patterns such as traveling waves, bumps, and grid-like tessellations. From a mean-field PDE perspective, stochastic neural fields can often be recast as nonlinear Fokker-Planck systems for the local probability density of activity variables. Recent work by Carrillo et al. (2023, 2024) develops this framework for spatially coupled neural populations with stochastic dynamics.

Carrillo et al. (2023) analyze a four-population model for grid cells in the medial entorhinal cortex on a toroidal domain, in which spatially periodic patterns emerge from a homogeneous state through a noise-driven bifurcation: small but finite noise destabilizes the spatially homogeneous equilibrium and selects hexagonal grid-like patterns of activity, in line with experimental observations. The analysis combines neural-field pattern-formation mechanisms with a nonlinear, nonlocal Fokker-Planck structure and uses equivariant bifurcation theory on the torus to classify branches of patterned solutions and their linear stability (Carrillo et al., 2023).

Building on this model, Carrillo et al. (2024) develop a rigorous PDE theory for the associated nonlinear Fokker-Planck system. By reformulating the equation as a Stefan-type free boundary problem, the authors prove local and global existence of classical solutions under suitable decay and Lipschitz assumptions, derive representation formulas, and establish nonlinear asymptotic stability of stationary states via a generalized relative entropy functional. These results complement the bifurcation analysis of Carrillo et al. (2023) by showing that the spatially structured grid-cell patterns that emerge at the PDE level are dynamically stable over long times.

Beyond neural field Fokker-Planck systems, SPDEs have also been used to model cortical activity more directly. For example, Kramer et al. (2005) use Steyn-Ross-type SPDE models to predict oscillations and traveling waves in response to increased subcortical excitation, with a simplified ODE version exhibiting bifurcations and oscillatory behavior. Comparison with human seizure data reveals agreement in peak frequency and propagation speed, suggesting that seizures may represent the formation of a pathological pattern in the brain (Kramer et al., 2005). Another example is the feedback control model of Lopour and Szeri (2010), an SPDE model of the human cortex designed to stop seizures; it ensures accurate simulations by handling sharp changes in cortical activity near electrodes. A new electrode measurement model and control algorithm are shown to suppress seizure-like activity effectively while minimizing the risk of cortical damage (Lopour and Szeri, 2010).

A numerical solution method for stochastic FitzHugh-Nagumo PDEs is developed by Uma et al. (2025) for modeling neurobiological dynamics of the form

$$u_t = u_{xx} + u(u - a)(1 - u) + \sigma u\,\dot{W}_t,$$

with initial and boundary conditions

$$u(0,x) = u_0(x), \quad 0 \le x \le 1; \qquad u(t,0) = g_0(t), \quad u(t,1) = g_1(t), \quad 0 \le t \le T,$$

where $W_t$ is a Wiener process and $\dot{W}_t$ is the corresponding white noise process.
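A generic explicit scheme for such stochastic FitzHugh-Nagumo PDEs combines central finite differences in space with an Euler-Maruyama step in time, where the space-time white-noise increment on a cell of width Δx has variance Δt/Δx. The sketch below uses homogeneous Dirichlet data g₀ = g₁ = 0 and a bump initial condition; all parameter values are our assumptions, and this is a generic sketch of such schemes rather than the specific method of Uma et al. (2025).

```python
import numpy as np

def sfhn_spde(nx=21, t_end=1.0, dt=1e-3, a=0.25, sigma=0.1, seed=4):
    """Explicit finite-difference / Euler-Maruyama scheme for
    u_t = u_xx + u(u-a)(1-u) + sigma*u*dW on [0,1], with homogeneous
    Dirichlet data g0 = g1 = 0 and a bump initial condition."""
    rng = np.random.default_rng(seed)
    dx = 1.0 / (nx - 1)                     # note: dt <= dx^2/2 for stability
    x = np.linspace(0.0, 1.0, nx)
    u = np.exp(-100 * (x - 0.5) ** 2)       # u0(x): localized excitation
    for _ in range(int(t_end / dt)):
        lap = np.zeros_like(u)
        lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
        # space-time white-noise increment: variance dt/dx per cell
        noise = rng.normal(size=nx) * np.sqrt(dt / dx)
        u = u + (lap + u * (u - a) * (1 - u)) * dt + sigma * u * noise
        u[0] = 0.0                          # boundary condition g0(t) = 0
        u[-1] = 0.0                         # boundary condition g1(t) = 0
    return x, u

x, u = sfhn_spde()
print(u.min(), u.max())
```

The multiplicative noise σuẆ vanishes at u = 0, so the resting state remains invariant at the discrete level as well; the explicit diffusion step requires the usual CFL restriction dt ≤ dx²/2.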

To summarize, mean-field PDE models in neuroscience encompass parabolic, hyperbolic, and kinetic equations that systematically connect microscopic stochastic descriptions of neural networks to mesoscopic and macroscopic dynamics and provide a flexible framework for analyzing population-level neural phenomena.

4.6. Numerical methods for mean-field PDE models

The mean-field PDE models described in this section are rarely solvable in closed form and are typically investigated using numerical schemes tailored to their specific structure. Two broad approaches can be distinguished: particle-based methods, which approximate the underlying stochastic processes, and deterministic solvers for the PDEs themselves.

A first class of methods relies on Monte Carlo simulation of the microscopic SDE or point-process description and empirical estimation of the associated density (René et al., 2020; Schwalger et al., 2017). This is natural for NNLIF and kinetic models, where the law of a McKean-Vlasov process is approximated by a large ensemble of interacting particles. Such schemes are straightforward to implement but can be computationally expensive when rare events or fine spatial resolution are required, and are less convenient for systematic bifurcation analysis.

Deterministic schemes approximate the PDEs directly. For the NNLIF equation and its variants, finite-difference and finite-volume methods based on high-order upwind or WENO discretizations have been used to resolve sharp gradients and blow-up-like phenomena in voltage or conductance distributions, often combined with strong-stability-preserving Runge-Kutta time integrators. In addition, structure-preserving discretizations of Fokker-Planck type, such as the Chang-Cooper and Scharfetter-Gummel schemes, have been adapted to these models in order to guarantee discrete conservation of mass and non-negativity of the solution and, in the linear case, to reproduce the correct equilibrium state and dissipation of relative entropy (Carrillo and Roux, 2025; Chang and Cooper, 1970).
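As an illustration of such structure-preserving discretizations, the following sketch applies Chang-Cooper-type interface weights to the linear Ornstein-Uhlenbeck Fokker-Planck equation p_t = ∂_x(x p + D p_x) with no-flux boundaries; the weights δ = 1/w − 1/(e^w − 1), with w = B Δx/D, make the discrete equilibrium coincide with the sampled Gaussian. The toy setting (linear drift B(x) = x, constant D, all parameter values) is our illustrative choice.

```python
import numpy as np

def chang_cooper_ou(xmax=4.0, nx=81, D=0.5, t_end=10.0, dt=0.005):
    """Chang-Cooper finite-volume scheme for p_t = d/dx [ x p + D p_x ]
    in flux form with no-flux boundaries: mass is conserved exactly and
    the discrete equilibrium matches the Gaussian exp(-x^2 / (2D))."""
    x = np.linspace(-xmax, xmax, nx)
    dx = x[1] - x[0]
    p = np.where(np.abs(x + 2.0) < 0.5, 1.0, 0.0)   # off-center initial data
    p /= p.sum() * dx
    xf = 0.5 * (x[1:] + x[:-1])                     # interface midpoints
    w = xf * dx / D
    delta = np.where(np.abs(w) < 1e-8, 0.5,
                     1.0 / w - 1.0 / np.expm1(w))   # Chang-Cooper weights
    for _ in range(int(t_end / dt)):
        flux = (xf * ((1 - delta) * p[1:] + delta * p[:-1])
                + D * (p[1:] - p[:-1]) / dx)
        div = np.zeros_like(p)
        div[:-1] += flux / dx       # flux entering/leaving each cell
        div[1:] -= flux / dx        # no-flux at the domain boundaries
        p = p + dt * div
    return x, p, dx

x, p, dx = chang_cooper_ou()
p_eq = np.exp(-x**2 / (2 * 0.5))            # Gaussian equilibrium, D = 0.5
p_eq /= p_eq.sum() * dx
print(p.sum() * dx, np.abs(p - p_eq).sum() * dx)
```

Because each interface flux appears with opposite signs in its two neighboring cells and the boundary fluxes are zero, the discrete total mass is conserved to machine precision, while the exponential-fitting weights drive the solution to the correct Gaussian equilibrium.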

For age-structured and time-elapsed equations, upwind finite-volume methods on the age variable, sometimes coupled with splitting strategies in time, provide efficient approximations of transport and renewal terms. In kinetic voltage-conductance and FitzHugh-Nagumo systems, semi-implicit and operator-splitting schemes are often employed to handle stiffness and degenerate diffusion, while maintaining stability in regimes with strong coupling or large delays.

More recently, spectral and discontinuous Galerkin methods, as well as multiscale hybrid schemes combining particle and PDE solvers, have been proposed for stochastic neural field PDEs and related Fokker-Planck systems. These approaches aim to accurately capture both the spatial pattern formation and the long-time dynamics (including noise-driven bifurcations) in high-dimensional settings. Developing robust, structure-preserving, and computationally efficient numerical methods for these PDE models remains an active area of research and is crucial for connecting the mathematical theory to realistic network sizes and experimental data.

5. Discussion and open problems

In this review, we have organized differential-equation-based models in neuroscience into three interconnected layers: deterministic ODE models at the single-neuron, population and whole-brain scales; stochastic models that extend these ODE frameworks to incorporate intrinsic and synaptic variability; and mean-field PDE models that describe the evolution of probability or population densities in large networks (Carrillo and Roux, 2025; Dayan and Abbott, 2001; Ermentrout and Terman, 2010; Gerstner et al., 2014). This hierarchy links microscopic spiking to mesoscopic and macroscopic behavior and reveals common mathematical structures across modeling approaches.

At the deterministic level, conductance-based and reduced single-neuron models, neural-mass systems and whole-brain ODE networks provide mechanistic templates from which both stochastic and mean-field descriptions can be derived (Breakspear, 2017; Deco et al., 2008; FitzHugh, 1961; Hodgkin and Huxley, 1990; Ritter et al., 2013; Wilson and Cowan, 1972). They allow one to analyze excitability types, multistability and oscillations in relatively low-dimensional state spaces, to connect biophysical parameters to qualitative changes in dynamics, and to interpret mesoscopic and macroscopic recordings via fitted population models. Stochastic extensions (noisy ODEs, SDEs, point-process and renewal models) enrich these templates by capturing randomness in spike-timing, synaptic input and network connectivity, and they serve as microscopic starting points for mean-field limits. Because variability and noise are central to neural function and cognition, the stochastic layer receives particular attention in this review. SDE-based models have been successfully used to reproduce neural firing patterns, to study the impact of fluctuating inputs and synaptic plasticity, and to capture large-scale neural variability in electro-physiological (Ditlevsen and Samson, 2016; Jansen and Rit, 1995; René et al., 2020; Schwalger et al., 2017) and imaging data (Jirsa et al., 2017; Ritter et al., 2013).

On the other hand, mean-field PDE models (Fokker-Planck, age-structured, kinetic, and stochastic neural field equations) encode the collective dynamics of large populations in terms of densities over membrane-potentials, elapsed times or conductance and recovery variables (Cáceres et al., 2011; Carrillo and Roux, 2025; Chizhov and Graham, 2007; Gerstner, 2000; Mischler et al., 2016; Pakdaman et al., 2013; Schwalger et al., 2017). Across these levels, similar dynamical phenomena emerge: multistability, oscillations, synchronization and pattern formation, as well as regimes of pathological activity such as seizure-like events, blow-up and delay-induced instabilities. Taken together, ODE, SDE and PDE models have made significant contributions to our understanding of neural dynamics and remain promising tools for advancing our understanding of brain function, neural disorders and the development of novel computational methods in neuroscience.

Although the basic mathematical picture is well established for several canonical models, many questions remain open. At the stochastic level, network-scale SDE models are often high-dimensional and driven by complex, state-dependent and non-Gaussian noise sources, yet most existing work still relies on relatively simple white-noise approximations. Developing statistically tractable and computationally efficient methods to infer realistic noise structures and parameters from limited, noisy data (e.g. subsampled spikes or imaging signals) remains a major challenge. Recent work on neural ODE and SDE architectures has also emphasized open problems concerning their expressive power, identifiability and numerical robustness: it is not yet fully understood which classes of neural dynamics these models can represent, how to ensure stable training in stiff and multiscale regimes, or how to interpret the learned continuous-time vector fields in biological terms (Worsham and Kalita, 2025). From a methodological perspective, it would be valuable to design parameter-estimation and model-selection frameworks that combine mechanistic differential equation models with modern machine-learning tools (e.g. amortized inference, variational autoencoders, or neural surrogate models), in a way that is robust to measurement noise, yields uncertainty quantification, and respects biophysical constraints.

Similar challenges arise when combining differential equation models with machine-learning tools such as physics-informed neural networks (PINNs). PINN-based methods have been proposed for forward and inverse problems in Fokker-Planck and mean-field PDEs, but rigorous error estimates, convergence guarantees and identifiability results are still largely restricted to low-dimensional or linear settings. Extending these theoretical foundations to nonlinear, high-dimensional neural PDEs, and understanding how much data are needed to learn reliable models from particle or spike-train observations, are important open questions. More broadly, there is considerable scope for work on multiscale integration (connecting ODE, SDE and PDE descriptions within unified inference pipelines), on control and optimization of high-dimensional stochastic and mean-field models, and on incorporating more realistic biophysical constraints and noise statistics into differential-equation-based models of neural systems.

At this point, we believe that artificial intelligence and machine-learning techniques may enable more precise and efficient parameter estimation for stochastic neural models. Since these approaches can learn complex probabilistic structure directly from data, they have the potential to substantially accelerate parameter estimation, especially when combined with adaptive optimization and uncertainty quantification. However, a key challenge is to ensure that the inferred parameters remain biologically interpretable and robust in real-data settings.

5.1. Multiscale integration and model comparison

Although ODE-/SDE-/PDE-based models can often be derived from one another in suitable limiting regimes, systematic cross-scale comparisons within the same study are not yet widespread, despite influential examples that provide extensive benchmarks between derived mesoscopic/macroscopic dynamics and simulations of the underlying microscopic models (Greven et al., 2026; René et al., 2020; Schwalger et al., 2017). Beyond these benchmark studies, many correspondences are known at the level of formal derivations: for instance, NNLIF Fokker-Planck equations and time-elapsed age-structured models can both be obtained from networks of integrate-and-fire neurons under different scalings or choices of state variables (Cáceres et al., 2011, 2024; Carrillo et al., 2011); kinetic FitzHugh-Nagumo equations can be linked to both microscopic excitable neurons and low-dimensional rate models (Mischler et al., 2016; Montbrió et al., 2015). A more systematic analysis of these correspondences could clarify when different reduced descriptions are genuinely equivalent, when they capture complementary aspects of the same dynamics, and when they diverge.

From a practical point of view, such comparisons are also important for model selection: given a particular dataset (e.g., spike trains, local field potentials, EEG/MEG or fMRI time series), under what conditions should one prefer a neural-mass ODE, a Fokker-Planck equation, an age-structured model or a kinetic PDE? Developing quantitative criteria and robust procedures for comparing these models, either through likelihood-based approaches or via carefully designed summary statistics, remains an open challenge.

5.2. Data assimilation and parameter inference in PDE-based models

While there is a wide variety of well-established methods for parameter inference in ODE and SDE models, robust and identifiable inference is still nontrivial in multiscale, partially observed, and strongly nonlinear neural systems (Ditlevsen and Samson, 2016; Moye and Diekman, 2018; René et al., 2020). For PDE-based neural models, corresponding inference and data-assimilation frameworks are comparatively less standardized and less routinely applied in neuroscience (Harrison et al., 2005). In principle, population densities in Fokker-Planck, age-structured or kinetic equations can be linked to observable quantities such as firing rates, power spectra or spatial patterns; in practice, the inverse problem of estimating parameters (connectivity strengths, delays, noise levels, adaptation time scales) from such observables remains highly challenging (Ditlevsen and Samson, 2016; Harrison et al., 2005; René et al., 2020).

Several methodological avenues are available, such as variational data assimilation, adjoint-based gradient methods, Bayesian inference, and particle-based approximations of PDE dynamics, but they have not yet been systematically adapted to neural PDE models. Developing scalable, well-posed parameter-estimation frameworks for these equations and quantifying the identifiability of key parameters is an important direction for making mean-field PDEs more directly usable in data-driven neuroscience.

5.3. Control, modulation and optimal interventions

Most of the work reviewed here focuses on the autonomous dynamics of neural systems. However, from both theoretical and applied perspectives, questions of control and modulation are increasingly relevant, for example, in the context of neuromodulation therapies, seizure suppression, or targeted stimulation (Acharya et al., 2022; Chouzouris et al., 2021; Schiff, 2011).

At the ODE level, there is a rich literature on control of neural-mass and whole-brain models (Basu et al., 2018; Chouzouris et al., 2021; Wang et al., 2016). Moreover, stochastic optimal control techniques (based on the dynamic programming principle or martingale duality techniques) are well established (Chen et al., 2017). Even though there may still be some room to extend currently known methods to interventions in neurodynamics, there is an increasing trend to integrate theoretical results with both machine-learning and Monte Carlo methods. However, control questions for PDE-based mean-field models are only beginning to be explored. Natural problems include: suppressing blow-up in NNLIF equations by appropriately shaping external input or connectivity; steering age-structured networks between asynchronous and oscillatory regimes; and controlling kinetic equations to stabilize or destabilize particular patterns. These tasks naturally lead to optimal control or feedback control problems governed by nonlinear Fokker-Planck, transport, or kinetic equations, where control theory and numerical analysis still have much to contribute.

5.4. Analytical challenges and model generalizations

On the analytical side, several structural questions remain open even for relatively classical models (Carrillo and Roux, 2025). For NNLIF equations, a complete characterization of all possible long-time behaviors (stationary states, periodic or more complex attractors) as connectivity, delay, and noise vary is still incomplete, especially in multi-population or spatially extended settings (Cáceres et al., 2011; Cáceres and Perthame, 2014; Cáceres et al., 2024). For age-structured models, strongly nonlinear firing-rate functions and distributed delay kernels can generate intricate dynamics whose rigorous understanding is only partial (Pakdaman et al., 2009, 2013, 2014; Perthame et al., 2025; Schwalger and Chizhov, 2019). For kinetic equations, hypoellipticity, degenerate diffusion, and unbounded coefficients pose technical difficulties, and the study of bifurcations and pattern formation is still in an early stage (Mischler et al., 2016; Perthame and Salort, 2013).

At the same time, several natural model extensions have not yet been fully incorporated into the mean-field PDE framework. These include synaptic plasticity (both short- and long-term), structural connectivity changes, heterogeneous populations with distributed parameter sets, and more realistic dendritic and network geometries (Carrillo et al., 2023; Ritter et al., 2013; Robert and Vignoud, 2021; Schmutz et al., 2023; Schwalger et al., 2017). Extending existing analytical results to such generalizations or identifying reduced PDE descriptions that remain tractable in their presence, represents a substantial but important challenge.

5.5. Links to machine-learning and neural differential equations

Finally, there is growing interest in combining mechanistic differential equation models with machine-learning techniques. At the ODE and SDE levels, neural ODEs, neural SDEs and universal differential equations offer flexible frameworks in which parts of the dynamics are specified mechanistically while others are learned from data. However, combining these models with Bayesian and statistical methods, such as (conditional) maximum-likelihood estimation and EM-type approaches within hidden Markov frameworks, becomes challenging, especially for mesoscopic inference when some variables are latent and only partially observed (Ditlevsen and Samson, 2016; Laing and Lord, 2009; René et al., 2020). Extending these ideas to PDE-based neural models could enable hybrid approaches where, for example, the structure of a Fokker-Planck or age-structured equation is retained but certain closure terms, effective drift components, or kernel functions are learned from data.

Physics-informed neural networks and related mesh-free methods provide another route to approximate solutions of neural PDEs and to solve inverse problems, potentially reducing the cost of classical discretizations in high-dimensional or parameter-rich settings. However, questions of expressivity, training stability, and error control for such methods in the context of nonlinear Fokker-Planck, transport, and kinetic equations are far from settled and deserve careful investigation.

5.6. Outlook

Differential equation models have played a central role in theoretical neuroscience for decades, and the landscape is now broad enough to cover deterministic dynamics, stochastic processes, computational methods and mean-field PDEs in a unified way. The next phase is likely to be driven by tighter integration of these modeling layers with experimental data and by closer interaction between theoretical analysis, numerics/simulations, optimization/control, and machine-learning components. Progress along these lines could yield not only a deeper mathematical understanding of neural dynamics across scales, but also more effective tools for interpreting data, designing interventions, and linking microscopic mechanisms to mesoscopic and macroscopic brain activity.

Funding Statement

The author(s) declared that financial support was not received for this work and/or its publication.

Footnotes

Edited by: Maurizio Mattia, Italian National Institute of Health (ISS), Italy

Reviewed by: Magnus Richardson, University of Warwick, United Kingdom

Tilo Schwalger, Technical University of Berlin, Germany

Author contributions

CÇ: Conceptualization, Formal analysis, Investigation, Methodology, Project administration, Resources, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing. JP: Conceptualization, Funding acquisition, Investigation, Project administration, Supervision, Visualization, Writing – review & editing, Methodology. Bİ: Conceptualization, Formal analysis, Investigation, Methodology, Resources, Validation, Visualization, Writing – original draft, Writing – review & editing. AP-D: Conceptualization, Formal analysis, Investigation, Methodology, Resources, Validation, Visualization, Writing – original draft, Writing – review & editing. SA: Conceptualization, Formal analysis, Investigation, Methodology, Resources, Validation, Visualization, Writing – original draft, Writing – review & editing. MÖ: Investigation, Writing – original draft, Writing – review & editing.

Conflict of interest

The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The author(s) declared that generative AI was not used in the creation of this manuscript.

Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

  1. Acharya G., Ruf S. F., Nozari E. (2022). Brain modeling for control: a review. Front. Control Eng. 3:1046764. doi: 10.3389/fcteg.2022.1046764 [DOI] [Google Scholar]
  2. Augustin M., Ladenbauer J., Baumann F., Obermayer K. (2017). Low-dimensional spike rate models derived from networks of adaptive integrate-and-fire neurons: comparison and implementation. PLoS Comput. Biol. 13:e1005545. doi: 10.1371/journal.pcbi.1005545 [DOI] [PMC free article] [PubMed] [Google Scholar]
  3. Aviel Y., Gerstner W. (2006). From spiking neurons to rate models: a cascade model as an approximation to spiking neuron models with refractoriness. Phys. Rev. E—Statist. Nonlinear Soft Matter Phys. 73:051908. doi: 10.1103/PhysRevE.73.051908 [DOI] [PubMed] [Google Scholar]
  4. Basu I., Crocker B., Farnes K., Robertson M. M., Paulk A. C., Vallejo D. I., et al. (2018). A neural mass model to predict electrical stimulation evoked responses in human and non-human primate brain. J. Neural Eng. 15:066012. doi: 10.1088/1741-2552/aae136 [DOI] [PMC free article] [PubMed] [Google Scholar]
  5. Breakspear M. (2017). Dynamic models of large-scale brain activity. Nat. Neurosci. 20, 340–352. doi: 10.1038/nn.4497 [DOI] [PubMed] [Google Scholar]
  6. Breakspear M., Heitmann S., Daffertshofer A. (2010). Generative models of cortical oscillations: neurobiological implications of the kuramoto model. Front. Hum. Neurosci. 4:190. doi: 10.3389/fnhum.2010.00190 [DOI] [PMC free article] [PubMed] [Google Scholar]
  7. Brunel N. (2000). Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. J. Comput. Neurosci. 8, 183–208. doi: 10.1023/A:1008925309027 [DOI] [PubMed] [Google Scholar]
  8. Brunel N., Hakim V. (1999). Fast global oscillations in networks of integrate-and-fire neurons with low firing rates. Neural Comput. 11, 1621–1671. doi: 10.1162/089976699300016179 [DOI] [PubMed] [Google Scholar]
  9. Brunel N., Hakim V., Richardson M. (2003). Firing-rate resonance in a generalized integrate-and-fire neuron with subthreshold resonance. Phys. Rev. E 67:051916. doi: 10.1103/PhysRevE.67.051916 [DOI] [PubMed] [Google Scholar]
  10. Brunel N., Wang X.-J. (2001). Effects of neuromodulation in a cortical network model of object working memory dominated by recurrent inhibition. J. Comput. Neurosci. 11, 63–85. doi: 10.1023/A:1011204814320 [DOI] [PubMed] [Google Scholar]
  11. Burkitt A. N. (2001). “Synchronization of the neural response to noisy periodic synaptic input in a balanced leaky integrate-and-fire neuron with reversal potentials,” in IJCNN'01. International Joint Conference on Neural Networks (IEEE), 22–27. [DOI] [PubMed] [Google Scholar]
  12. Cáceres M. J., Cañizo J. A., Ramos-Lora A. (2024). Sequence of pseudoequilibria describes the long-time behavior of the nonlinear noisy leaky integrate-and-fire model with large delay. Phys. Rev. E 110:064308. doi: 10.1103/PhysRevE.110.064308 [DOI] [PubMed] [Google Scholar]
  13. Cáceres M. J., Carrillo J. A., Perthame B. (2011). Analysis of nonlinear noisy integrate & fire neuron models: blow-up and steady states. J. Mathemat. Neurosci. 1:7. doi: 10.1186/2190-8567-1-7 [DOI] [PMC free article] [PubMed] [Google Scholar]
  14. Cáceres M. J., Perthame B. (2014). Beyond blow-up in excitatory integrate and fire neuronal networks: refractory period and spontaneous activity. J. Theor. Biol. 350, 81–89. doi: 10.1016/j.jtbi.2014.02.005 [DOI] [PubMed] [Google Scholar]
  15. Carrillo J. A., Cordier S., Mancini S. (2011). A decision-making fokker-planck model in computational neuroscience. J. Math. Biol. 63, 801–830. doi: 10.1007/s00285-010-0391-3 [DOI] [PubMed] [Google Scholar]
  16. Carrillo J. A., González M. D. M., Gualdani M. P., Schonbek M. E. (2013). Classical solutions for a nonlinear fokker-planck equation arising in computational neuroscience. Commun. Part. Different. Equat. 38, 385–409. doi: 10.1080/03605302.2012.747536 [DOI] [Google Scholar]
  17. Carrillo J. A., Roux P. (2025). Nonlinear partial differential equations in neuroscience: from modelling to mathematical theory. Math. Models and Meth. in Appl. Sci. 35, 403–584. doi: 10.1142/S0218202525400044 [DOI] [Google Scholar]
  18. Carrillo J. A., Roux P., Solem S. (2023). Noise-driven bifurcations in a nonlinear fokker-planck system describing stochastic neural fields. Physica D: Nonlinear Phenomena 449:133736. doi: 10.1016/j.physd.2023.133736 [DOI] [Google Scholar]
  19. Carrillo J. A., Roux P., Solem S. (2024). Well-posedness and stability of a stochastic neural field in the form of a partial differential equation. Journal des Mathématiques Pures et Appliquées 193:103623. doi: 10.1016/j.matpur.2024.103623 [DOI] [Google Scholar]
  20. Chang J., Cooper G. (1970). A practical difference scheme for fokker-planck equations. J. Comput. Phys. 6, 1–16. doi: 10.1016/0021-9991(70)90001-X [DOI] [Google Scholar]
  21. Chen Y., Deng Y., Yue S., Deng C. (2017). Optimal stochastic control problem for general linear dynamical systems in neuroscience. Adv. Mathem. Phys. 2017:8730859. doi: 10.1155/2017/8730859 [DOI] [Google Scholar]
  22. Chizhov A. V., Graham L. J. (2007). Population model of hippocampal pyramidal neurons, linking a refractory density approach to conductance-based neurons. Phys. Rev. E—Statist. Nonlinear Soft Matter Phys. 75:011924. doi: 10.1103/PhysRevE.75.011924 [DOI] [PubMed] [Google Scholar]
  23. Chizhov A. V., Graham L. J. (2008). Efficient evaluation of neuron populations receiving colored-noise current based on a refractory density method. Phys. Rev. E—Statist. Nonlinear Soft Matter Phys. 77:011910. doi: 10.1103/PhysRevE.77.011910 [DOI] [PubMed] [Google Scholar]
  24. Chouzouris T., Roth N., Cakan C., Obermayer K. (2021). Applications of optimal nonlinear control to a whole-brain network of fitzhugh-nagumo oscillators. Phys. Rev. E 104:024213. doi: 10.1103/PhysRevE.104.024213 [DOI] [PubMed] [Google Scholar]
  25. Chow C. C., Buice M. A. (2015). Path integral methods for stochastic differential equations. J. Mathemat. Neurosci. 5:8. doi: 10.1186/s13408-015-0018-5 [DOI] [PMC free article] [PubMed] [Google Scholar]
  26. Compte A., Brunel N., Goldman-Rakic P. S., Wang X.-J. (2000). Synaptic mechanisms and network dynamics underlying spatial working memory in a cortical network model. Cereb. Cort. 10, 910–923. doi: 10.1093/cercor/10.9.910 [DOI] [PubMed] [Google Scholar]
  27. Coombes S. (2005). Waves, bumps, and patterns in neural field theories. Biol. Cybern. 93, 91–108. doi: 10.1007/s00422-005-0574-y [DOI] [PubMed] [Google Scholar]
  28. Coombes S. (2023). Next generation neural population models. Front. Appl. Mathem. Statist. 9:1128224. doi: 10.3389/fams.2023.1128224 [DOI] [Google Scholar]
  29. Coombes S., Wedgwood K. C. (2023). Neurodynamics. Cham: Springer. [Google Scholar]
  30. Daley D., Vere-Jones D. (2003). “Renewal processes,” in An Introduction to the Theory of Point Processes (Cham: Springer), 63–108. [Google Scholar]
  31. Dayan P., Abbott L. (2001). Theoretical Neuroscience. Cambridge, MA: MIT Press. [Google Scholar]
  32. Deco G., Jirsa V. K., McIntosh A. R. (2011). Emerging concepts for the dynamical organization of resting-state activity in the brain. Nat. Rev. Neurosci. 12, 43–56. doi: 10.1038/nrn2961 [DOI] [PubMed] [Google Scholar]
  33. Deco G., Jirsa V. K., Robinson P. A., Breakspear M., Friston K. (2008). The dynamic brain: from spiking neurons to neural masses and cortical fields. PLoS Comput. Biol. 4:e1000092. doi: 10.1371/journal.pcbi.1000092 [DOI] [PMC free article] [PubMed] [Google Scholar]
  34. Deco G., Ponce-Alvarez A., Hagmann P., Romani G. L., Mantini D., Corbetta M. (2014). How local excitation–inhibition ratio impacts the whole brain dynamics. J. Neurosci. 34, 7886–7898. doi: 10.1523/JNEUROSCI.5068-13.2014 [DOI] [PMC free article] [PubMed] [Google Scholar]
  35. Deco G., Schürmann B. (1998). Stochastic resonance in the mutual information between input and output spike trains of noisy central neurons. Physica D: Nonlinear Phenomena 117, 276–282. doi: 10.1016/S0167-2789(97)00313-8 [DOI] [Google Scholar]
  36. Devalle F., Roxin A., Montbrió E. (2017). Firing rate equations require a spike synchrony mechanism to correctly describe fast oscillations in inhibitory networks. PLoS Comput. Biol. 13:e1005881. doi: 10.1371/journal.pcbi.1005881 [DOI] [PMC free article] [PubMed] [Google Scholar]
  37. Ditlevsen S., Lansky P. (2007). Parameters of stochastic diffusion processes estimated from observations of first-hitting times: application to the leaky integrate-and-fire neuronal model. Phys. Rev. E—Statist. Nonlinear Soft Matter Phys. 76:041906. doi: 10.1103/PhysRevE.76.041906 [DOI] [PubMed] [Google Scholar]
  38. Ditlevsen S., Samson A. (2016). Parameter estimation in neuronal stochastic differential equation models from intracellular recordings of membrane potentials in single neurons: a review. Journal de la Société Française de Statistique 157, 6–21. [Google Scholar]
  39. Ðorđević J., Milošević M., Šuvak N. (2023). Non-linear stochastic model for dopamine cycle. Chaos, Solitons & Fractals 177:114220. doi: 10.1016/j.chaos.2023.114220 [DOI] [Google Scholar]
  40. Dumont G., Henry J., Tarniceriu C. (2024). Oscillations in a fully connected network of leaky integrate-and-fire neurons with a poisson spiking mechanism. J. Nonlinear Sci. 34:18. doi: 10.1007/s00332-023-09995-x [DOI] [Google Scholar]
  41. ElGazzar A., van Gerven M. (2024). Generative modeling of neural dynamics via latent stochastic differential equations. arXiv [preprint] arXiv:2412.12112. doi: 10.48550/arXiv.2412.12112 [DOI] [PubMed] [Google Scholar]
  42. Ermentrout B., Terman D. H. (2010). Mathematical foundations of neuroscience, volume 35. Cham: Springer. [Google Scholar]
  43. Fasoli D. (2013). Attacking the Brain with Neuroscience: Mean-Field Theory, Finite Size Effects and Encoding Capability of Stochastic Neural Networks (PhD thesis). Université Nice Sophia Antipolis, Nice, France. [Google Scholar]
  44. FitzHugh R. (1961). Impulses and physiological states in theoretical models of nerve membrane. Biophys. J. 1, 445–466. doi: 10.1016/S0006-3495(61)86902-6 [DOI] [PMC free article] [PubMed] [Google Scholar]
  45. Gerstner W. (2000). Population dynamics of spiking neurons: fast transients, asynchronous states, and locking. Neural Comput. 12, 43–89. doi: 10.1162/089976600300015899 [DOI] [PubMed] [Google Scholar]
  46. Gerstner W. (2001). Coding properties of spiking neurons: reverse and cross-correlations. Neural Netw. 14, 599–610. doi: 10.1016/S0893-6080(01)00053-3 [DOI] [PubMed] [Google Scholar]
  47. Gerstner W., Kistler W. M. (2002). Spiking Neuron Models: Single Neurons, Populations, Plasticity. Cambridge: Cambridge University Press. [Google Scholar]
  48. Gerstner W., Kistler W. M., Naud R., Paninski L. (2014). Neuronal Dynamics: From Single Neurons to Networks and Models of Cognition. Cambridge: Cambridge University Press. doi: 10.1017/CBO9781107447615 [DOI] [Google Scholar]
  49. Ghorbanian P., Ramakrishnan S., Simon A. J., Ashrafiuon H. (2013). “Stochastic dynamic modeling of the human brain EEG signal,” in Dynamic Systems and Control Conference (New York: American Society of Mechanical Engineers). [Google Scholar]
  50. Goldobin D. S., Ageeva M. V., di Volo M., Tixidre F., Torcini A. (2025). Synaptic shot noise triggers fast and slow global oscillations in balanced neural networks. Phys. Rev. E 112:034301. doi: 10.1103/47h5-fbyy [DOI] [PubMed] [Google Scholar]
  51. Goldobin D. S., Di Volo M., Torcini A. (2021). Reduction methodology for fluctuation driven population dynamics. Phys. Rev. Lett. 127:038301. doi: 10.1103/PhysRevLett.127.038301 [DOI] [PubMed] [Google Scholar]
  52. Greven N. E., Ranft J., Schwalger T. (2026). How random connectivity shapes the fluctuating dynamics of finite-size neural populations. PRX Life 4:013007. doi: 10.1103/shvm-x4x6 [DOI] [Google Scholar]
  53. Harrison L. M., David O., Friston K. J. (2005). Stochastic models of neuronal dynamics. Philosoph. Trans. Royal Soc. B 360, 1075–1091. doi: 10.1098/rstb.2005.1648 [DOI] [PMC free article] [PubMed] [Google Scholar]
  54. Hawkes A. G. (1971). Spectra of some self-exciting and mutually exciting point processes. Biometrika 58, 83–90. doi: 10.1093/biomet/58.1.83 [DOI] [Google Scholar]
  55. Hodgkin A. L., Huxley A. F. (1990). A quantitative description of membrane current and its application to conduction and excitation in nerve. Bull. Math. Biol. 52, 25–71. doi: 10.1016/S0092-8240(05)80004-7 [DOI] [PubMed] [Google Scholar]
  56. Holmes P., Shea-Brown E., Moehlis J., Bogacz R., Gao J., Aston-Jones G., et al. (2005). Optimal decisions: From neural spikes, through stochastic differential equations, to behavior. IEICE Trans. Fund. Elect. Commun. Comp. Sci. 88, 2496–2503. doi: 10.1093/ietfec/e88-a.10.2496 [DOI] [Google Scholar]
  57. Hong D., Man S., Martin J. V. (2016). A stochastic mechanism for signal propagation in the brain: force of rapid random fluctuations in membrane potentials of individual neurons. J. Theor. Biol. 389, 225–236. doi: 10.1016/j.jtbi.2015.10.035 [DOI] [PubMed] [Google Scholar]
  58. İzgi B. (2015). Behavioral Classification of Stochastic Differential Equations in Mathematical Finance (PhD thesis). Istanbul Technical University, İstanbul, Turkey. [Google Scholar]
  59. İzgi B., Çetin C. (2017). Some moment estimates for new semi-implicit split-step methods. AIP Conf. Proc. 1833:020041. doi: 10.1063/1.4981689 [DOI] [Google Scholar]
  60. İzgi B., Çetin C. (2018). Semi-implicit split-step numerical methods for a class of nonlinear stochastic differential equations with non-lipschitz drift terms. J. Comput. Appl. Math. 343, 62–79. doi: 10.1016/j.cam.2018.03.027 [DOI] [Google Scholar]
  61. İzgi B., Çetin C. (2019). Milstein-type semi-implicit split-step numerical methods for nonlinear sde with locally lipschitz drift terms. Thermal Sci. 23, 1–12. doi: 10.2298/TSCI180912325I [DOI] [Google Scholar]
  62. İzgi B., Çetin C. (2021). Strong convergence of semi-implicit split-step methods for sde with locally lipschitz coefficients. Commun. Nonlinear Sci. Num. Simulat. 94:105574. doi: 10.1016/j.cnsns.2020.105574 [DOI] [Google Scholar]
  63. Izhikevich E. M. (2007). Dynamical Systems in Neuroscience. Cambridge, MA: MIT Press. [Google Scholar]
  64. Jansen B. H., Rit V. G. (1995). Electroencephalogram and visual evoked potential generation in a mathematical model of coupled cortical columns. Biol. Cybern. 73:357–366. doi: 10.1007/BF00199471 [DOI] [PubMed] [Google Scholar]
  65. Jirsa V. K. (2017). The virtual epileptic patient: individualized whole-brain models of epilepsy spread. Neuroimage 145, 377–388. doi: 10.1016/j.neuroimage.2016.04.049 [DOI] [PubMed] [Google Scholar]
  66. Kang M.-J., Perthame B., Salort D. (2015). Dynamics of time elapsed inhomogeneous neuron network model. Comptes Rendus Mathematique 353, 1111–1115. doi: 10.1016/j.crma.2015.09.029 [DOI] [Google Scholar]
  67. Kloeden P. E., Platen E. (1992). Numerical Solution of Stochastic Differential Equations. Berlin: Springer-Verlag. [Google Scholar]
  68. Kramer M. A., Kirsch H. E., Szeri A. J. (2005). Pathological pattern formation and cortical propagation of epileptic seizures. J. Royal Soc. Interf. 2, 113–127. doi: 10.1098/rsif.2004.0028 [DOI] [PMC free article] [PubMed] [Google Scholar]
  69. Laing C., Lord G. J. (2009). Neural Coherence and Stochastic Resonance. Oxford: Oxford Academic. [Google Scholar]
  70. Lansky P., Ditlevsen S. (2008). A review of the methods for signal estimation in stochastic diffusion leaky integrate-and-fire neuronal models. Biol. Cybern. 99, 253–262. doi: 10.1007/s00422-008-0237-x [DOI] [PubMed] [Google Scholar]
  71. Lindner B. (2022). Fluctuation-dissipation relations for spiking neurons. Phys. Rev. Lett. 129:198101. doi: 10.1103/PhysRevLett.129.198101 [DOI] [PubMed] [Google Scholar]
  72. Lopour B. A., Szeri A. J. (2010). A model of feedback control for the charge-balanced suppression of epileptic seizures. J. Comput. Neurosci. 28, 375–387. doi: 10.1007/s10827-010-0215-x [DOI] [PMC free article] [PubMed] [Google Scholar]
  73. Luke T. B., Barreto E., So P. (2013). Complete classification of the macroscopic behavior of a heterogeneous network of theta neurons. Neural Comput. 25, 3207–3234. doi: 10.1162/NECO_a_00525 [DOI] [PubMed] [Google Scholar]
  74. MacIver A., Shaheen H. (2024). Modelling alzheimer's protein dynamics: A data-driven integration of stochastic methods, machine learning and connectome insights. arXiv [preprint] arXiv:2411.02644. doi: 10.48550/arXiv.2411.02644 [DOI] [Google Scholar]
  75. Mao X. (2011). Stochastic Differential Equations and Applications. Sawston: Woodhead Publishing. [Google Scholar]
  76. Mao X. (2015). The truncated euler-maruyama method for stochastic differential equations. J. Comput. Appl. Math. 290, 370–384. doi: 10.1016/j.cam.2015.06.002 [DOI] [Google Scholar]
  77. Mattia M., Del Giudice P. (2002). Population dynamics of interacting spiking neurons. Phys. Rev. E 66:051917. doi: 10.1103/PhysRevE.66.051917 [DOI] [PubMed] [Google Scholar]
  78. Mattia M., Del Giudice P. (2004). Finite-size dynamics of inhibitory and excitatory interacting spiking neurons. Phys. Rev. E—Statist. Nonlinear Soft Matter Phys. 70:052903. doi: 10.1103/PhysRevE.70.052903 [DOI] [PubMed] [Google Scholar]
  79. Mischler S., Quiñinao C., Touboul J. (2016). On a kinetic fitzhugh-nagumo model of neuronal network. Commun. Mathem. Phys. 342, 1001–1042. doi: 10.1007/s00220-015-2556-9 [DOI] [Google Scholar]
  80. Montbrió E., Pazó D., Roxin A. (2015). Macroscopic description for networks of spiking neurons. Phys. Rev. X 5:021028. doi: 10.1103/PhysRevX.5.021028 [DOI] [Google Scholar]
  81. Morris C., Lecar H. (1981). Voltage oscillations in the barnacle giant muscle fiber. Biophys. J. 35, 193–213. doi: 10.1016/S0006-3495(81)84782-0 [DOI] [PMC free article] [PubMed] [Google Scholar]
  82. Moye M. J., Diekman C. O. (2018). Data assimilation methods for neuronal state and parameter estimation. J. Mathem. Neurosci. 8:11. doi: 10.1186/s13408-018-0066-8 [DOI] [PMC free article] [PubMed] [Google Scholar]
  83. Nagumo J., Arimoto S., Yoshizawa S. (1962). An active pulse transmission line simulating nerve axon. Proc. IRE 50, 2061–2070. doi: 10.1109/JRPROC.1962.288235 [DOI] [Google Scholar]
  84. Naze S., Bernard C., Jirsa V. (2015). Computational modeling of seizure dynamics using coupled neuronal networks: factors shaping epileptiform activity. PLoS Comput. Biol. 11:e1004209. doi: 10.1371/journal.pcbi.1004209 [DOI] [PMC free article] [PubMed] [Google Scholar]
  85. Oksendal B. (2013). Stochastic Differential Equations: An Introduction with Applications. Cham: Springer. [Google Scholar]
  86. Ostojic S., Brunel N. (2011). From spiking neuron models to linear-nonlinear models. PLoS Comput. Biol. 7:e1001056. doi: 10.1371/journal.pcbi.1001056 [DOI] [PMC free article] [PubMed] [Google Scholar]
  87. Ott E., Antonsen T. M. (2008). Low dimensional behavior of large systems of globally coupled oscillators. Chaos 18:2930766. doi: 10.1063/1.2930766 [DOI] [PubMed] [Google Scholar]
  88. Pakdaman K., Perthame B., Salort D. (2009). Dynamics of a structured neuron population. Nonlinearity 23:55. doi: 10.1088/0951-7715/23/1/003 [DOI] [Google Scholar]
  89. Pakdaman K., Perthame B., Salort D. (2013). Relaxation and self-sustained oscillations in the time elapsed neuron network model. SIAM J. Appl. Math. 73, 1260–1279. doi: 10.1137/110847962 [DOI] [Google Scholar]
  90. Pakdaman K., Perthame B., Salort D. (2014). Adaptation and fatigue model for neuron networks and large time asymptotics in a nonlinear fragmentation equation. J. Mathem. Neurosci. 4:14. doi: 10.1186/2190-8567-4-14 [DOI] [PMC free article] [PubMed] [Google Scholar]
  91. Perthame B., Rieutord C., Salort D. (2025). Strongly nonlinear age-structured equation, time-elapsed model and large delays: B. Perthame et al. J. Mathem. Biol. 91:65. doi: 10.1007/s00285-025-02294-x [DOI] [PubMed] [Google Scholar]
  92. Perthame B., Salort D. (2013). On a voltage-conductance kinetic system for integrate and fire neural networks. Kinetic Related Models 6, 841–864. doi: 10.3934/krm.2013.6.841 [DOI] [Google Scholar]
  93. Pietras B., Gallice N., Schwalger T. (2020). Low-dimensional firing-rate dynamics for populations of renewal-type spiking neurons. Phys. Rev. E 102:022407. doi: 10.1103/PhysRevE.102.022407 [DOI] [PubMed] [Google Scholar]
  94. Pisarchik A. N., Hramov A. (2023). Stochastic processes in the brain's neural network and their impact on perception and decision-making. Uspekhi Fizicheskikh Nauk 193, 1298–1324. doi: 10.3367/UFNr.2022.12.039309 [DOI] [Google Scholar]
  95. Platen E., Bruti-Liberati N. (2010). Numerical Solution of Stochastic Differential Equations with Jumps in Finance. Berlin: Springer-Verlag. [Google Scholar]
  96. Raisi-Nafchi M., Tajmirriahi M., Rabbani H., Amini Z. (2025a). Detecting isocitrate dehydrogenase mutation status in grade iv gliomas using stochastic differential equations. Biomed. Signal Process. Cont. 110:108245. doi: 10.1016/j.bspc.2025.108245 [DOI] [Google Scholar]
  97. Raisi-Nafchi M., Tajmirriahi M., Rabbani H., Amini Z. (2025b). Stochastic differential equation modeling approach for grading astrocytomas on brain MRI images. Sci. Rep. 15:22835. doi: 10.1038/s41598-025-06144-0 [DOI] [PMC free article] [PubMed] [Google Scholar]
  98. Raman K. (2011). A stochastic differential equation analysis of cerebrospinal fluid dynamics. Fluids Barriers CNS 8:9. doi: 10.1186/2045-8118-8-9 [DOI] [PMC free article] [PubMed] [Google Scholar]
  99. René A., Longtin A., Macke J. H. (2020). Inference of a mesoscopic population model from population spike trains. Neural Comput. 32, 1448–1498. doi: 10.1162/neco_a_01292 [DOI] [PubMed] [Google Scholar]
  100. Rinzel J., Huguet G. (2013). Nonlinear dynamics of neuronal excitability, oscillations, and coincidence detection. Commun. Pure Appl. Mathem. 66, 1464–1494. doi: 10.1002/cpa.21469 [DOI] [PMC free article] [PubMed] [Google Scholar]
  101. Rinzel J., Lee Y. S. (1987). Dissection of a model for neuronal parabolic bursting. J. Math. Biol. 25, 653–675. doi: 10.1007/BF00275501 [DOI] [PubMed] [Google Scholar]
  102. Ritter P., Schirner M., McIntosh A. R., Jirsa V. K. (2013). The virtual brain integrates computational modeling and multimodal neuroimaging. Brain Connect. 3, 121–145. doi: 10.1089/brain.2012.0120 [DOI] [PMC free article] [PubMed] [Google Scholar]
  103. Robert P., Vignoud G. (2021). Stochastic models of neural synaptic plasticity. SIAM J. Appl. Math. 81, 1821–1846. doi: 10.1137/20M138288X [DOI] [Google Scholar]
  104. Saarinen A., Linne M. L., Yli-Harja O. (2008). Stochastic differential equation model for cerebellar granule cell excitability. PLoS Comput. Biol. 4:e1000004. doi: 10.1371/journal.pcbi.1000004 [DOI] [PMC free article] [PubMed] [Google Scholar]
  105. Sato F., Ogura M., Sashie A., Bai Y., Shimono M., Wakamiya N. (2025). Neural sde-based spike control of noisy neurons. PLoS ONE 20:e0330607. doi: 10.1371/journal.pone.0330607 [DOI] [PMC free article] [PubMed] [Google Scholar]
  106. Schiff S. J. (2011). Neural Control Engineering: the Emerging Intersection Between Control Theory and Neuroscience. Cambridge, MA: MIT Press. [Google Scholar]
  107. Schmutz V., Locherbach E., Schwalger T. (2023). On a finite-size neuronal population equation. SIAM J. Appl. Dynam. Syst. 22, 996–1029. doi: 10.1137/21M1445041 [DOI] [Google Scholar]
  108. Schwalger T., Chizhov A. V. (2019). Mind the last spike—firing rate models for mesoscopic populations of spiking neurons. Curr. Opin. Neurobiol. 58, 155–166. doi: 10.1016/j.conb.2019.08.003 [DOI] [PubMed] [Google Scholar]
  109. Schwalger T., Deger M., Gerstner W. (2017). Towards a theory of cortical columns: From spiking neurons to interacting neural populations of finite size. PLoS Comput. Biol. 13:e1005507. doi: 10.1371/journal.pcbi.1005507 [DOI] [PMC free article] [PubMed] [Google Scholar]
  110. Schwalger T., Droste F., Lindner B. (2015). Statistical structure of neural spiking under non-poissonian or other non-white stimulation. J. Comput. Neurosci. 39, 29–51. doi: 10.1007/s10827-015-0560-x [DOI] [PubMed] [Google Scholar]
  111. Schwalger T., Lindner B. (2015). Analytical approach to an integrate-and-fire model with spike-triggered adaptation. Phys. Rev. E 92:062703. doi: 10.1103/PhysRevE.92.062703 [DOI] [PubMed] [Google Scholar]
  112. Sinha N. (2019). Computer modelling of connectivity change suggests epileptogenesis mechanisms in idiopathic generalised epilepsy. NeuroImage: Clini. 21:101655. doi: 10.1016/j.nicl.2019.101655 [DOI] [PMC free article] [PubMed] [Google Scholar]
  113. Staii C. (2025). Stochastic models of neuronal growth. AppliedMath 5:170. doi: 10.3390/appliedmath5040170 [DOI] [Google Scholar]
  114. Stubenrauch J., Lindner B. (2024). Furutsu-novikov-like cross-correlation-response relations for systems driven by shot noise. Phys. Rev. X 14:041047. doi: 10.1103/PhysRevX.14.041047 [DOI] [Google Scholar]
  115. Tajmirriahi M., Amini Z. (2021). Modeling of seizure and seizure-free eeg signals based on stochastic differential equations. Chaos, Solitons & Fractals 150:111104. doi: 10.1016/j.chaos.2021.111104 [DOI] [Google Scholar]
  116. Tierney N. (2024). Stochastic Modeling of Neuron Dynamics (PhD thesis). Worcester Polytechnic Institute, Worcester, MA, United States. [Google Scholar]
  117. Uma D., Jafari H., Raja Balachandar S., Venkatesh S. G., Vaidyanathan S. (2025). An approximate solution for stochastic fitzhugh-nagumo partial differential equations arising in neurobiology models. Math. Methods Appl. Sci. 48, 2980–2998. doi: 10.1002/mma.10471 [DOI] [Google Scholar]
  118. Wallace E., Benayoun M., van Drongelen W., Cowan J. (2011). Emergent oscillations in networks of stochastic spiking neurons. PLoS ONE 6:e14804. doi: 10.1371/journal.pone.0014804 [DOI] [PMC free article] [PubMed] [Google Scholar]
  119. Wang J., Niebur E., Hu J., Li X. (2016). Suppressing epileptic activity in a neural mass model using a closed-loop proportional-integral controller. Sci. Rep. 6:27344. doi: 10.1038/srep27344 [DOI] [PMC free article] [PubMed] [Google Scholar]
  120. Wang P., Li Y. (2010). Split-step forward methods for stochastic differential equations. J. Comput. Appl. Math. 233, 2641–2651. doi: 10.1016/j.cam.2009.11.010 [DOI] [Google Scholar]
  121. Wegman E. J., Habib M. K. (1992). Stochastic methods for neural systems. J. Stat. Plan. Inference 33, 5–25. doi: 10.1016/0378-3758(92)90092-7 [DOI] [Google Scholar]
  122. Wilson H. R., Cowan J. D. (1972). Excitatory and inhibitory interactions in localized populations of model neurons. Biophys. J. 12, 1–24. doi: 10.1016/S0006-3495(72)86068-5 [DOI] [PMC free article] [PubMed] [Google Scholar]
  123. Worsham J. M., Kalita J. K. (2025). A guide to neural ordinary differential equations: Machine learning for data-driven digital engineering. Digit. Eng. 6:100060. doi: 10.1016/j.dte.2025.100060 [DOI] [Google Scholar]
  124. Yasue K., Jibu M., Misawa T., Zambrini J. C. (1988). Stochastic neurodynamics. Ann. Inst. Stat. Math. 40, 41–59. doi: 10.1007/BF00053954 [DOI] [Google Scholar]

Articles from Frontiers in Computational Neuroscience are provided here courtesy of Frontiers Media SA
