Abstract
Objective.
Understanding the mechanisms underlying brain dynamics is a long-held goal in neuroscience. However, these dynamics are both individualized and nonstationary, making modeling challenging. Here, we present a data-driven approach to modeling nonstationary dynamics based on principles of neuromodulation, at the level of individual subjects.
Approach.
Previously, we developed the mesoscale individualized neural dynamics (MINDy) modeling approach to capture individualized brain dynamics which do not change over time. Here, we extend the MINDy approach by adding a modulatory component which is multiplied by a set of baseline, stationary connectivity weights. We validate this model on both synthetic data and publicly available EEG data in the context of anesthesia, a known modulator of neural dynamics.
Main Results.
We find that our modulated MINDy approach is accurate, individualized, and reliable. Additionally, we find that our models yield biologically interpretable inferences regarding the effects of propofol anesthesia on mesoscale cortical networks, consistent with previous literature on the neuromodulatory effects of propofol.
Significance.
Ultimately, our data-driven modeling approach is reliable and scalable, and provides insight into mechanisms underlying observed brain dynamics. Our modeling methodology can be used to infer insights about modulation dynamics in the brain in a number of different contexts.
1. Introduction
An important and persistent challenge in the analysis of recorded mesoscale neural activity (i.e., commensurate with externally recorded fields and potentials) is the inference of latent neurophysiological mechanisms that underlie overt observations. Indeed, while there are myriad tools and methods to analyze brain electrophysiological recordings (e.g., power spectral density estimation [1]), there are fewer that provide direct inference of circuit-level mechanisms (i.e., the interaction of excitatory and inhibitory neural subpopulations, from which said recordings originate). Identifying and understanding these mechanisms is a difficult task, however, as they are not directly observable via typical mesoscale recording modalities. For instance, electroencephalography (EEG) measures electrical potential non-invasively at the scalp, and hence does not provide direct access to neuronal sub-population-level activity [2]. Parametric dynamical systems modeling offers a methodological path to obviating this issue. Such models, via their mathematical formulation, embed mechanistic hypotheses or inductive biases regarding how neural activity and secondary observables such as EEG are generated. While dynamical systems modeling is potentially powerful in this context, there are several extant challenges that remain unsolved regarding the construction of such models from neural recordings.
First, neural activity patterns vary between individuals, indicating that the underlying mechanisms also vary on an individual level. For example, the posterior dominant rhythm is a classic example of neural dynamics, which manifests as a strong alpha-band (i.e., 8–13 Hz) oscillation localized to the posterior of the scalp in EEG recordings (i.e., overlaying occipital cortex). While the general characteristics of the posterior dominant rhythm (its frequency and spatial location) are consistent across individuals [3], the exact frequency and power of the posterior dominant rhythm are specific to individuals [4]. To address this challenge of individuality, work has been done to create data-driven models based on single-subject data [5, 6], providing individualized neural dynamics models which can be used to infer mechanisms underlying a person's brain activity [7, 8, 9]. This approach is schematized in Figure 1a.
Figure 1. Approaches to fitting individualized models from data.
a) In a stationary setting, a single set of (time-invariant) model parameters (here, a connectivity/weight matrix) is fit to data, yielding a corresponding stationary prediction/forward simulation. b) When data is non-stationary, fitting a single stationary model will in general lead to erroneous or averaged modeled dynamics. c) A typical way to address non-stationary data is to fit separate models (e.g., distinct weight matrices) to the distinct non-stationary regimes. d) In the proposed approach, we seek to fit non-stationary models that decompose parameters into a baseline connectivity and regime-specific modulatory matrices.
In addition to varying between individuals, neural dynamics are also highly non-stationary, i.e., they vary temporally based on many factors such as modulation of neurophysiologic states (e.g., sleep vs. wake [10], rest vs. task [11], and healthy activity vs. pathology [12]). Therefore, an individual dynamical systems model of the kind mentioned above can typically only offer insights for a specific physiological regime, and/or for a relatively narrow epoch of time during which dynamics are stationary [13]. There is an unmet need for data-driven modeling methods that can capture non-stationarity in dynamics associated with multiple neurophysiologic states. If an approach is taken that does not account for this nonstationarity, the model which is returned will be a poor fit for any of the regimes present in the data (Figure 1b).
To overcome this challenge, several approaches to time- or state-dependent dynamical systems models for neural activity have been suggested. In essence, these models embed a mechanism to modify the model parameters in a manner that captures categorical changes in neural dynamics (e.g., sleep vs. wake). For instance, the switching linear dynamical system (SLDS) framework [14, 15] embeds multiple linear dynamical systems, of which a single model is “active” at any given time [16, 17, 18], as shown in Figure 1c. Switches between models are often enacted through a latent model, such as a hidden Markov model [19] that provides the dynamics behind switches [20, 21]. Through such a mechanism, a model can embed a number of distinct dynamical regimes. A similar approach is taken in [22, 23], but here the researchers use a recurrent neural network (RNN) model, rather than a linear dynamical system. They construct a discrete number of RNNs (each with a different recurrent weight matrix), thus implementing a different set of dynamics for each RNN.
It is important to note that modeling non-stationarity can be understood as a problem with two phases: i) modeling when dynamical regimes change, and ii) modeling what in the latent dynamics has changed. Our proposed approach tackles the latter phase. Specifically, we infer the changes in latent dynamics as changes in mesoscale neuromodulation (Figure 1d), rather than comparing distinct dynamic regimes (Figure 1c). In this paper, we take a similar approach to [22, 23], beginning with a biologically interpretable RNN model describing population-level neural activity. However, in contrast to the approach of having a discrete set of recurrent weight matrices, we implement a modulation architecture (Figure 1d). To our knowledge, data-driven inference of neuromodulatory effects on mesoscale dynamics has not been previously pursued in this way.
Our model consists of a single weight matrix common to all dynamical regimes, which is then multiplied element-wise by one of a discrete number of modulatory matrices corresponding to different dynamical regimes. Such an assumption is based, schematically, on the actions of neuromodulators on neural circuits that modulate synaptic efficacy. We assume that such modulation occurs on a slower timescale than the timescale of neural activity. By using this modulation structure, we are able to separate the aspects of an individual’s dynamics which are common across time and neurophysiologic state from those which are variable. Additionally, we impose specific excitatory and inhibitory sub-structures on our parameters, in order to preserve the biological interpretability of our fitted models. All model parameters are estimated directly from data in an individualized manner.
Below, we develop and present the proposed methodology. We first introduce our modulated recurrent neural network architecture, building from our prior data-driven modeling approach of Mesoscale Individualized Neural Dynamics (MINDy) [8, 9] to enact the aforementioned modulation architecture. We then validate this on simulated data, to test the accuracy and reliability of our model in recovering known ground truth parameters. We then test our model on the ability to infer distinct neural mechanisms associated with levels of general anesthesia, a pharmacological modulation of neural circuits. These data allow us to further validate the reliability and individuality of our fitted models. We then reconcile the inferred models and mechanisms with prior observations regarding the modulatory effects of anesthesia on cortical networks.
2. Methods
2.1. Model formation
Our model is adapted from our prior whole-brain dynamical systems framework in [8, 9]. The base (i.e., unmodulated) dynamics are governed by:
$$x_{t+1} = (I - D)\,x_t + W\,\psi_{\alpha,b}(x_t) + c + \varepsilon_t \qquad (1)$$

$$y_t = M x_t + \eta_t \qquad (2)$$
where $x_t \in \mathbb{R}^n$ represents the neural activity of the $n$ neural populations at time $t$, and $\{W, D, \alpha, b, c\}$ are the tunable model parameters. Here, $W$ is the connectivity matrix, $D$ is a diagonal matrix representing the decay (leak) in neural activity for each population, $\alpha$ and $b$ parameterize the slope and offset of the sigmoidal activation function $\psi_{\alpha,b}$, and $c$ represents a baseline bias of each neural population. $M$ is the lead field model which translates the population-level neural activity into the measured data $y_t$, and $\varepsilon_t$ and $\eta_t$ represent the process noise and measurement noise, respectively.
Note that this model is mathematically comparable to a vanilla recurrent neural network, but arranged and constrained to reflect biologically interpretable relationships between excitatory and inhibitory neural populations. By assigning neural populations to spatial locations in the brain and constraining the parameters to specific valence (i.e., positive/excitatory, negative/inhibitory), we can examine changes in these parameters associated with physiological changes. Specifically, we enforce excitatory and inhibitory substructures onto the neural population activity vector $x_t$ and the connectivity matrix $W$ like so:
$$x_t = \begin{bmatrix} x_t^E \\ x_t^I \end{bmatrix}, \qquad W = \begin{bmatrix} W^{EE} & W^{IE} \\ W^{EI} & W^{II} \end{bmatrix} \qquad (3)$$
Entries in $W^{EE}$ and $W^{EI}$ are constrained to be positive or 0, as they represent the connections from excitatory neural populations. Additionally, $W^{EE}$ and $W^{EI}$ are full submatrices. Conversely, $W^{IE}$ and $W^{II}$ have entries which are negative or 0, as they represent the connections from inhibitory neural populations. Since inhibitory neurons only have local connections [24], these submatrices are constrained to be diagonal.
We adapted and generalized this model by adding neuromodulation via matrices $S_i$ multiplied element-wise by the connectivity matrix $W$:
$$\widetilde{W}_i = S_i \circ W, \qquad i \in \{1, \ldots, n_S\} \qquad (4)$$
Here, $n_S$ represents the number of modulatory states to be modeled, $\circ$ denotes the element-wise (Hadamard) product, and $\widetilde{W}_i$ is the effective connectivity in state $i$. This modulation structure is schematized in Figure 1d. To preserve the signed connectivity structure of $W$, entries in each $S_i$ matrix are constrained to be positive or 0. This enables a modulation which scales entries in $W$ by varying amounts, without affecting the base structure of the connectivity.
The specification of $M$ is made to reflect the specifics of the data modality being used for model construction. In our case, because we are using cortical data (EEG), we account for prevailing hypotheses regarding the contributions of different cell types to surface-level potentials. In this context, it is generally believed that inhibitory neurons are neither close enough to the surface of the cortex, nor do they possess the spatial organization, to generate fields detectable via EEG [25]. Therefore, we construct our lead field matrix $M$ to have zeros in the submatrix multiplied by the inhibitory component of $x_t$:
$$M = \begin{bmatrix} M^E & 0 \end{bmatrix} \qquad (5)$$
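To make the model formation concrete, the following sketch assembles the signed E/I connectivity of (3), nonnegative rank-1 modulation matrices (anticipating the constraint introduced in Section 2.2), the lead field of (5), and one step of the dynamics (1) and (4) in PyTorch. The population counts, parameter values, and the logistic form of $\psi_{\alpha,b}$ are illustrative assumptions, not values taken from our experiments.

```python
import torch

n_e, n_i = 20, 20     # E/I population counts (illustrative; equal counts keep
n = n_e + n_i         # the diagonal inhibitory blocks below square)
n_S = 3               # number of modulatory states

# Signed E/I block structure of W (eq. (3)); columns index presynaptic sources.
W_EE = torch.rand(n_e, n_e)            # E -> E: nonnegative, full
W_EI = torch.rand(n_i, n_e)            # E -> I: nonnegative, full
W_IE = -torch.diag(torch.rand(n_e))    # I -> E: nonpositive, diagonal (local)
W_II = -torch.diag(torch.rand(n_i))    # I -> I: nonpositive, diagonal (local)
W = torch.cat([torch.cat([W_EE, W_IE], dim=1),
               torch.cat([W_EI, W_II], dim=1)], dim=0)

# Nonnegative rank-1 modulation matrices S_i = u_i v_i^T (eq. (4), Sec. 2.2).
u = torch.nn.functional.softplus(torch.randn(n_S, n))
v = torch.nn.functional.softplus(torch.randn(n_S, n))
S = torch.einsum('ki,kj->kij', u, v)

# Lead field (eq. (5)): inhibitory populations do not reach the EEG channels.
M = torch.cat([torch.eye(n_e), torch.zeros(n_e, n_i)], dim=1)

D = 0.1 * torch.ones(n)                # leak per population (illustrative value)
alpha, b, c = torch.ones(n), torch.zeros(n), torch.zeros(n)

def step(x, k):
    """One noise-free step of eqs. (1) and (4) in modulatory state k."""
    W_eff = S[k] * W                   # Hadamard modulation of the base weights
    return (1 - D) * x + W_eff @ torch.sigmoid(alpha * (x - b)) + c
```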
2.2. Model fitting procedure
Because $M$ is not invertible, we now face what is sometimes termed a dual inference problem: i) estimate the latent states $x_t$, and ii) estimate the parameters $\{W, S_i, D, \alpha, b, c\}$, from the observable (i.e., EEG) data $y_t$.
To address this, we adopt the iterative estimation approach detailed in [9, 26]: first we apply a Kalman filter [27, 28] to a small window of data in order to estimate the state. We then evolve the model forward from the (estimated) state to generate a forward prediction. We then backpropagate the error through both the free simulation steps and the Kalman filtering steps, calculating the error gradient for each parameter at each step. In addition to calculating the parameter error gradients (i.e., error gradients of $\{W, S_i, D, \alpha, b, c\}$), we also calculate the error gradients of the estimated covariances of the noise terms $\varepsilon_t$ and $\eta_t$. These noise covariances are necessary for the Kalman filter to estimate the current state $x_t$, and are optimized alongside the model parameters. By backpropagating the error in this manner and fitting both the parameters and the noise covariances, we fit a model whose state estimates are accurate and which best predicts future measurements.
After backpropagation through the free simulation and Kalman filtering steps, we then update the model parameters and the estimates of the noise covariances. Then, the process is repeated with a new small window in the epoch of fitting data. This process of Kalman filtering, free simulation and backpropagation is repeated with new data windows until the Kalman and free simulation errors converge. Multiple windows of the data epoch are used so that the model captures the general dynamics and statistical properties of the entire epoch, to avoid overfitting to a few timesteps of data. These windows are selected at random to avoid biasing the fit toward any one period within the epoch (e.g., the beginning or end of the epoch).
In the unmodulated MINDy model, the parameters do not vary temporally, so all parameters are updated at all steps of the fitting procedure. In the expanded modulated MINDy model proposed here, only one of the $S_i$ matrices is active at each timepoint. To account for this, we select the appropriate $S_i$ at each time $t$ using the modulation labels and use that $S_i$ for model evolution and gradient calculation, as sketched below.
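A condensed sketch of this fitting loop is given below, reusing `step`, `M`, `W`, and `S` from the sketch in Section 2.1. It assumes a data tensor `y` (time × channels) and per-timepoint regime `labels`; the window lengths, learning rate, and Jacobian-based local linearization are simplifications of the backpropagated Kalman filter of [9, 26] rather than our exact implementation, and the structural constraints on `W` and `S` (signs, rank 1, sparsity) would in practice be re-imposed by reparameterization or projection after each update.

```python
# Learnable diagonal noise covariances, parameterized on a log scale.
log_q = torch.zeros(n, requires_grad=True)        # process noise
log_r = torch.zeros(n_e, requires_grad=True)      # measurement noise
W.requires_grad_(True); S.requires_grad_(True)    # D, alpha, b, c fit similarly
opt = torch.optim.NAdam([W, S, log_q, log_r], lr=1e-3)

L_kf, L_free = 20, 10                             # window lengths (illustrative)
T = y.shape[0]

for it in range(2000):
    t0 = int(torch.randint(0, T - L_kf - L_free, (1,)))   # random window
    x, P = torch.zeros(n), torch.eye(n)
    Q, R = torch.diag(log_q.exp()), torch.diag(log_r.exp())
    loss = torch.zeros(())
    for t in range(t0, t0 + L_kf):                # (i) Kalman filtering
        k = int(labels[t])                        # regime label selects S_k
        x_pred = step(x, k)
        A = torch.autograd.functional.jacobian(lambda z: step(z, k), x)
        P = A @ P @ A.T + Q                       # local linearization of step
        K = P @ M.T @ torch.linalg.inv(M @ P @ M.T + R)
        loss = loss + ((y[t] - M @ x_pred) ** 2).mean()   # one-step error
        x = x_pred + K @ (y[t] - M @ x_pred)      # filtered state estimate
        P = (torch.eye(n) - K @ M) @ P
    for t in range(t0 + L_kf, t0 + L_kf + L_free):        # (ii) free simulation
        x = step(x, int(labels[t]))
        loss = loss + ((y[t] - M @ x) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()  # backprop through both phases
```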
To aid in the tractability of the problem, we implement several constraints on $W$ and $S_i$. We constrain $W^{EE}$ and $W^{EI}$ to have 75% of their non-diagonal connections be zero, enforcing a prior level of sparsity in connections. We also constrain each $S_i$ to be rank 1, i.e., the outer product of two $n$-dimensional vectors. This assumption is motivated by the premise that neuromodulators act in a spatially diffuse manner [29]. Because each $S_i$ is multiplied element-wise by $W$, the excitatory submatrices of each effective connectivity $\widetilde{W}_i$ also effectively have 75% of their elements set equal to 0.
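One simple way to realize the fixed sparsity prior is a static binary mask applied to the excitatory submatrices before fitting; the scheme below is an illustrative choice rather than our exact implementation.

```python
def sparsity_mask(rows, cols, frac_zero=0.75):
    """Fixed binary mask: ~75% of off-diagonal entries zeroed, diagonal kept."""
    mask = (torch.rand(rows, cols) > frac_zero).float()
    mask.fill_diagonal_(1.0)                  # never mask self-connections
    return mask

# Applied to the excitatory blocks before assembling W; the same mask would be
# re-applied after every optimizer step so that zeroed connections stay zero.
W_EE = W_EE * sparsity_mask(n_e, n_e)
W_EI = W_EI * sparsity_mask(n_i, n_e)
```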
To fit the model parameters, we use NADAM [30], implemented with PyTorch's Autograd engine [31]. This improves fitting efficiency and scalability relative to our prior work by allowing GPU acceleration, which is especially beneficial for processing large-scale EEG recordings. Notably, this significantly reduces the number of iterations needed for convergence of the backpropagated Kalman filter approach. With Autograd, both the covariance update and the local linearization are included in the backward operation.
2.3. Simulation and actual data
2.3.1. Synthetic data
To enable validation of our fitting procedure, we created synthetic data with known parameters. To create these synthetic data, we established models with a combination of fixed and random parameters. The fixed values and distributions for the model parameters are listed in Table 1. It should be noted that our excitatory submatrices were constructed as linear combinations of sparse, low-rank, and diagonal components via:
$$W^{EE} = W_S + W_{L1} W_{L2}^\top + \operatorname{diag}(d) \qquad (6)$$
where $W_S$ is a sparse matrix, $W_{L1}$ and $W_{L2}$ are low-rank factors, and $d$ is a vector specifying the diagonal self-connection weights. Note that (6) specifies $W^{EE}$, but both $W^{EE}$ and $W^{EI}$ were constructed in this way. $W^{IE}$ and $W^{II}$ were constructed as diagonal matrices with diagonal values directly sampled from their distributions.
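A sketch of this construction for one excitatory submatrix follows; the sparsity level, rank, and sampling distributions are placeholders standing in for the specifications in Table 1.

```python
def synth_exc_block(n, density=0.25, rank=2):
    """Sparse + low-rank + diagonal construction of eq. (6)."""
    W_S = torch.rand(n, n) * (torch.rand(n, n) < density)   # sparse part
    W_L1, W_L2 = torch.rand(n, rank), torch.rand(n, rank)   # low-rank factors
    d = torch.rand(n)                                        # self-connections
    return W_S + W_L1 @ W_L2.T + torch.diag(d)

W_EE_true = synth_exc_block(n_e)   # W_EI is constructed the same way
```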
Table 1.
Parameter values or random distributions used to create synthetic data models. $I$ denotes the identity matrix, and $\mathbf{1}$ denotes the vector of all ones.

| Parameter | Description | Initialization Value |
|---|---|---|
| $W_S$ | Sparse part of connectivity matrix | |
| $W_{L1}$, $W_{L2}$ | Low-rank part of connectivity matrix | |
| $\alpha^E$ | Excitatory nonlinearity slope | 2.5 |
| $\alpha^I$ | Inhibitory nonlinearity slope | 1 |
| $b$ | Nonlinearity offset | 0 |
| $D^E$ | Excitatory decay | |
| $D^I$ | Inhibitory decay | |
| $c$ | Baseline neural activity | 0 |
| $M^E$ | Lead field for excitatory populations | |
| $R$ | Measurement noise covariance | $0.25I$ |
| $Q$ | Process noise covariance | |
| $n_S$ | Number of modulation states | 3 |
| $\mu$ | Mean of $v$ distribution | |
| $\sigma^2$ (uniform) | Variance of $v$ | |
| $\sigma^2$ (normal) | Variance of $v$ | |
| $v$ (uniform) | Vector constructing $S_i$ | |
| $v$ (normal) | Vector constructing $S_i$ | |
| $T$ | HMM transition probability | |
| $\pi_0$ | Initial HMM state probability | $(1/n_S)\mathbf{1}$ |
We initialized each $S_i$ matrix as the outer product of a vector drawn from a unique distribution. To construct each unique distribution, we first randomly generate its mean, $\mu$ (Table 1). We then generate a random binary digit indicating whether we should use a uniform distribution or a normal distribution. Then, we generate a variance $\sigma^2$ for the distribution, with ranges as listed in Table 1. Then, we generate a vector $v$ from the constructed distribution: if uniform, $v$ is drawn from the uniform distribution with mean $\mu$ and variance $\sigma^2$; if normal, $v \sim \mathcal{N}(\mu \mathbf{1}, \sigma^2 I)$. Then, $S_i = v v^\top$.
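The sketch below mirrors this sampling procedure; the ranges for $\mu$ and $\sigma^2$ are placeholders standing in for the values in Table 1.

```python
def random_modulation(n):
    """Draw S_i = v v^T with v from a randomly constructed distribution."""
    mu = 0.8 + 0.4 * torch.rand(1)           # distribution mean near 1 (placeholder)
    sigma2 = 0.01 + 0.04 * torch.rand(1)     # distribution variance (placeholder)
    if torch.rand(1) < 0.5:                  # uniform case
        half = (3 * sigma2).sqrt()           # U[mu-h, mu+h] has variance h^2/3
        v = mu - half + 2 * half * torch.rand(n)
    else:                                    # normal case
        v = mu + sigma2.sqrt() * torch.randn(n)
    v = v.clamp(min=0)                       # enforce nonnegativity of S_i
    return v.outer(v)

S_true = torch.stack([random_modulation(n) for _ in range(n_S)])
```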
We chose the distributions for $\mu$ and $\sigma^2$ such that there could be variation in the distributions of $v$, while also maintaining values close to 1. If the values of $S_i$ are very large or very small, $S_i$ will overwhelm $W$ in the effective neural connectivity $S_i \circ W$. In other words, our assumed modulation does not re-scale synaptic weights by large amounts.
To generate state changes in our synthetic data, we initialized a hidden Markov model (HMM) with transition probability matrix $T$ and initial state probability $\pi_0$. At each timestep, we calculated the probability of the next state based on $T$ and then randomly selected the state index weighted by the calculated probability.
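A minimal sketch of this regime-sequence sampling, with a placeholder sticky transition matrix standing in for the $T$ of Table 1:

```python
T_mat = torch.full((n_S, n_S), 0.001 / (n_S - 1))   # off-diagonal transitions
T_mat.fill_diagonal_(0.999)                          # sticky self-transitions (placeholder)
pi0 = torch.full((n_S,), 1.0 / n_S)                  # uniform initial state

n_steps = 20_000
labels = torch.empty(n_steps, dtype=torch.long)
labels[0] = torch.multinomial(pi0, 1)
for t in range(1, n_steps):
    labels[t] = torch.multinomial(T_mat[labels[t - 1]], 1)
```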
Once we had generated our random models, we forward simulated them for 20,000 timesteps (equivalent to 80 s of 250 Hz EEG) to create synthetic data. When fitting on these synthetic data, the true lead field matrix and noise covariances ($M$, $R$, and $Q$) were used to initialize the models, but the other model parameters were initialized randomly and compared to the true values after fitting, to test parameter recovery when ground truth is known. Since the true value of $W$ is known, however, we create a mask zeroing out the same entries in the fit $W$, to avoid zeroing out connections which are actually present in the synthetic models.
2.3.2. EEG data
For our second experiment, we used EEG data from 20 subjects dosed with propofol, published in [32] and available online [33]. In these data, subjects were recorded at four levels of sedation: i) resting baseline with no anesthesia, ii) mild sedation (defined as 0.6 μg/mL target blood plasma concentration), iii) moderate sedation (1.2 μg/mL target blood plasma concentration), and iv) recovery from sedation. Each sedation level had approximately 7 minutes of data, recorded after 10 minutes of allowing the blood plasma level to reach a steady state, and was saved as a separate EEG file. These data are well suited to our purposes because the four pharmacological regimes above can be used as labels for constructing and validating our modulated model.
We filtered the data between 0.5 and 15 Hz, subtracted the median of each channel, and divided by the mean absolute deviation of each channel. We combined all four sedation state recordings into a single array for each subject, and added a sedation state index for each timepoint of the array. We also used only a 20-channel subset of the channels recorded in [32], shown in Figure 2.
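A sketch of this preprocessing using SciPy; the filter order, the 250 Hz sampling rate, and the channels × samples array layout are assumptions not specified above.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess(eeg, fs=250.0, band=(0.5, 15.0), order=4):
    """Band-pass filter, then robust per-channel centering and scaling."""
    bb, ab = butter(order, band, btype='bandpass', fs=fs)
    x = filtfilt(bb, ab, eeg, axis=1)                  # zero-phase band-pass
    med = np.median(x, axis=1, keepdims=True)          # per-channel median
    mad = np.mean(np.abs(x - med), axis=1, keepdims=True)  # mean abs. deviation
    return (x - med) / mad
```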
Figure 2.
EEG channels used in analysis of propofol dosage EEG data.
Our lead field matrix $M$ was constructed as in (5), with
$$M^E = I - \tfrac{1}{20}\,\mathbf{1}\mathbf{1}^\top \qquad (7)$$
where $I$ denotes the 20×20 identity matrix and $\mathbf{1}$ denotes the 20-dimensional vector of all ones. We constructed $R$ and $Q$ as diagonal matrices. The other model parameters were initialized randomly, as in experiment 1. In this experiment, we do not have ground truth identifying which 75% of the non-diagonal connections in the $W^{EE}$ and $W^{EI}$ submatrices are zero, as we did in the synthetic data case. To continue enforcing this constraint, we selected a random 75% of non-diagonal connections in $W^{EE}$ and $W^{EI}$, and set these to 0 for all subjects. Finally, because we wanted to explore modulations relative to baseline (prior to propofol dosing), we enforce no modulation in the baseline regime, i.e., $S_{\text{base}} = \mathbf{1}\mathbf{1}^\top$, so that $S_{\text{base}} \circ W = W$. The other matrices ($S_{\text{mild}}$, $S_{\text{mod}}$, $S_{\text{rec}}$) are fit as in the synthetic data experiment.
3. Results
3.1. Modulated MINDy recovers ground truth connections and modulations
Our first experiment tested whether modulated MINDy could accurately recover the connectivity and modulation matrices in models with known ground-truth parameters. To benchmark this, we compared the accuracy of parameter inferences for the modulated MINDy architecture in the presence of multiple non-stationary regimes against the performance of the unmodulated MINDy architecture (previously validated in [9]) on data in which there is only a single, stationary regime. That is, for the unmodulated MINDy results, we fit unmodulated MINDy models to single-regime synthetic data generated using the simulated-data setup specified in [9].
As shown in Figure 3a, we achieve ground truth correlations of nearly the same level as in the unmodulated MINDy problem. The unmodulated MINDy models achieve full-matrix ground truth correlations with an IQR of 0.9720–0.9866, whereas the modulated MINDy models yield full-matrix correlations with an IQR of 0.8539–0.9362. The EE and EI submatrices had similar correlations: MINDy EE, IQR 0.9519–0.9957; modulated MINDy EE, IQR 0.8238–0.9408; MINDy EI, IQR 0.9506–0.9895; modulated MINDy EI, IQR 0.8615–0.9495. Since we are fitting not only one connectivity matrix but also several $S_i$ matrices, a slight decrease in fit quality is expected relative to the unmodulated problem.
Figure 3. Modulated MINDy returns models with high ground truth correlations.
(a) Ground truth correlations for the full connectivity, excitatory-excitatory (EE) connections, and excitatory-inhibitory (EI) connections for both unmodulated and modulated MINDy problems. (b) Ground truth correlations for the connectivity matrix ($W$) and the connectivity modulation matrices ($S_i$) in the modulated MINDy problem.
Additionally, we achieved similarly high ground truth correlations for the EE and EI components of the modulation matrices ($S_1$: EE IQR 0.8902–0.9434, EI IQR 0.8618–0.9385; $S_2$: EE IQR 0.8864–0.9421, EI IQR 0.7665–0.9343; $S_3$: EE IQR 0.9011–0.9411, EI IQR 0.8216–0.9435), as shown in Figure 3b. These results indicate that we can accurately recover both the true base connectivity and the true modulations of that connectivity.
Figure 4 provides examples of individual connection and modulation matrices and their recovered estimates via modulated MINDy (i.e., the proposed method). As shown, the spatial patterns of high- and low-amplitude connections in the true matrices are replicated in the estimated matrices. The estimated $W$ tends to have a slightly higher amplitude than the true $W$, and the estimated $S_i$ tends to have a slightly lower amplitude than the true $S_i$. This is to be expected, since we only regularized the structure of $W$ and $S_i$, and did not regularize their amplitudes. In summary, we can conclude that the modulated MINDy model can estimate the connectivity and modulation parameters to within a potential scaling factor of the true values.
Figure 4. Modulated MINDy returns connectivity and modulation matrices accurate to a potential scaling factor.
True (top row) and estimated (bottom row) $W$ and $S_i$ matrices from a single synthetic model.
3.2. Modulated MINDy is reliable and individualized
In our second experiment, we tested modulated MINDy on actual EEG data to understand how it functioned on a population of individuals. We split each subject’s data in half temporally within each sedation level, in order to perform a test-retest reliability analysis.
We observed that models were reliable, showing high correlations within a single subject (full $W$: IQR 0.7187–0.8118; EE: IQR 0.7276–0.8746; EI: IQR 0.6054–0.7202; Figure 5). Additionally, correlations across subjects were significantly lower (full $W$: IQR 0.5722–0.6802; EE: IQR 0.5760–0.7252; EI: IQR 0.4275–0.5821), indicating that models are individualized, a requisite condition for capturing dynamics and mechanisms which vary across individuals.
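The split-half analysis reduces to correlating vectorized connectivity matrices within and across subjects, e.g. (the subject-indexed arrays `W_half1` and `W_half2` are hypothetical names):

```python
import numpy as np

within = [np.corrcoef(W_half1[s].ravel(), W_half2[s].ravel())[0, 1]
          for s in range(n_subj)]
across = [np.corrcoef(W_half1[s].ravel(), W_half2[r].ravel())[0, 1]
          for s in range(n_subj) for r in range(n_subj) if r != s]
```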
Figure 5. Modulated MINDy is reliable and individualized.
Test-retest correlations on split-half data are significantly higher within subject than across subject in the full connectivity matrix, as well as the EE and EI submatrices.
3.3. Models identify consistent modulation associated with pharmacology
Having established validity in ground-truth settings, we proceeded to apply the method to labeled EEG data from stages of propofol anesthesia. Specifically, for each of 20 individuals, we inferred a base connectivity matrix $W$ for the pre-sedation regime, as well as modulation matrices $S_{\text{mild}}$, $S_{\text{mod}}$, and $S_{\text{rec}}$ for the regimes of mild anesthesia, moderate anesthesia, and recovery, respectively. To test our method's ability to provide insight into the observed changes in dynamics, we analyzed the modulation matrices at each sedation level compared to baseline.
We observed first that the vast majority of the values within the $S_i$ matrices fall within the range $[0, 1)$, though a small percentage ($S_{\text{mild}}$: 0.169%, $S_{\text{mod}}$: 0.150%, $S_{\text{rec}}$: 0.128%) are greater than 1 (Figure 6). Since these values multiply the base connection weights $W$, a fractional value in $S_i$ indicates a relative decrease of connection strength, or a relative increase in inhibition. In the population average, all values are fractional. This pattern of the modulations increasing inhibition is consistent with the widespread inhibitory effects of propofol [34, 35].
Figure 6. Modulated MINDy identifies inhibitory modulation during sedation.
Distributions of all modulation values in each sedation state.
Importantly, we went beyond these bulk inferences and also analyzed the spatiotemporal changes associated with each modulation regime. We found that in the population mean, the modulation values which were highest (i.e., smallest relative inhibition) were typically connections to populations underlying posterior channels, particularly in the moderate sedation state (Figure 7a). This is compatible with prior characterizations of the effect of propofol in attenuating activity over the posterior regions of the scalp [36]. We also noted that each modulation state introduced specific spatial changes to the effective connectivity. To quantify this, we define the modulation's impact as the difference between two temporally adjacent modulation matrices, scaled element-wise by $W$, e.g., $(S_{\text{mod}} - S_{\text{mild}}) \circ W$. Thus, we account for both the magnitude differences in $S_i$ as well as the base magnitude of the connection weights which are being modulated. A positive impact indicates a strengthened connection, and a negative impact indicates a weakened connection.
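Computed directly, the impact of, e.g., the mild-to-moderate transition is the element-wise product below (matrix names hypothetical); summing over presynaptic sources (columns) yields the post-synaptic impact mapped in Figure 7b.

```python
impact = (S_mod - S_mild) * W               # Hadamard scaling by the base weights
post_synaptic_impact = impact.sum(axis=1)   # aggregate incoming (post-synaptic) impact
```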
Figure 7. Modulated MINDy identifies inhibitory, spatially focused modulation during sedation.
a) Mean post-synaptic modulation in population mean of modulation matrices in sedation states. b) Post-synaptic modulation impact (difference between two temporally adjacent modulation matrices, scaled by $W$) across transitions between sedation states (population mean). c) Distributions of EE modulation values in frontal channels. d) Correlation between moderate sedation post-synaptic impact and recovery post-synaptic impact.
As the subjects move from pre-sedation to mild sedation, there is a widespread weakening of connections across the whole brain, but again particularly in the posterior areas (Figure 7c, panel 1). As the subjects progress from mild to moderate sedation and from moderate sedation to recovery, the changes are of smaller, less significant magnitude (Figure 7c). The change from mild to moderate sedation is characterized by a further weakening of the frontal connections (Figure 7b). This general observation is consistent with the observation of anteriorization of neural activity in the time before loss of consciousness [36, 37], in which the loss of posterior EEG power is accompanied by a potentiation of activity in frontal regions (note the posterior-anterior dichotomy in inferred modulatory impact, especially from baseline to mild sedation, panel 1 of Figure 7c). Anteriorization has been modeled as a differential effect of inhibition along the posterior-anterior axis [38, 39]. Interestingly, as the subjects transition from moderate sedation to recovery, there is a reversal of the effects seen in moderate sedation (Figure 7c), highlighted by the negative correlations between the impact of moderate sedation and the impact of recovery (Figure 7d). This indicates a gradual decrease in EE connection strength as the subjects are dosed with anesthesia, followed by a return to a state similar to mild sedation in the recovery phase.
Overall, these results indicate that the methodology is identifying modulatory effects that are consistent with what we would expect from prior detailed studies on the mesoscale effects of propofol anesthesia, thus supporting the validity of the proposed approach.
4. Discussion
4.1. Data-driven inference of modulation for understanding non-stationary brain dynamics
Brain dynamics are fundamentally nonstationary, changing as individuals switch between varying tasks, cycle between sleep and wake states, or as pathologies within the brain improve or worsen. Such non-stationarity is mediated, at least in part, by processes of neuromodulation that potentiate or attenuate synaptic connections within and between brain areas. Our goal in this paper was to develop a data-driven, parametric modeling framework for identifying such modulation at whole-brain scales.
Our specific approach was to formulate modulation within a physiologically interpretable RNN construct, where a baseline set of synaptic weights, common to all non-stationary regimes, is scaled by regime-specific modulation. Within this framework are several nontrivial technical challenges. Specifically, because our model is formulated at the level of latent (unobserved) neural populations, we faced the dual-estimation problem of simultaneously fitting model parameters and estimating state variables. Compounding this is the need to fit not one connection matrix, but rather a family of such matrices. Since the base connection weights $W$ are multiplied by the modulations $S_i$, this is a fundamentally ill-posed problem. However, we showed that by imposing appropriate priors on the construction of both $W$ and $S_i$, the problem can be made tractable, leading to interpretable results.
At a mathematical level, our modeling setup involves, in essence, a matrix factorization problem: there are a number of distinct effective connectivity matrices ($\widetilde{W}_i$), which are factorized into a component common across all regimes ($W$) and a modulatory component which varies discretely based on regime ($S_i$). While there are algorithms for the decomposition of a matrix into the Hadamard product of two matrices [40, 41], these algorithms solve a problem which is constructed differently from ours. The algorithms in [40, 41] decompose a given matrix into two or more low-rank matrices. In our problem, we are principally concerned with finding a decomposition with a component matrix common to all given matrices, as motivated by our specific domain application context. Importantly, we allow our common component ($W$) to be full-rank, rather than decomposing into two low-rank matrices.
As noted, data-driven decomposition of neuromodulatory effects on mesoscale dynamical models has not been widely studied in the computational neural modeling community. Several authors have, however, done work with a similar motivation for taking a neuromodulation approach to modeling. Li et al [21] developed a statistical generalized linear model (GLM) construct that embodies a modulatory nonstationary architecture for neuronal-level modeling. They specifically imposed a level of similarity between switched models by incorporating a Gaussian prior on the weight matrices of their GLMs. They further decomposed connection strength from connection direction, factoring their weight matrices into a direction component in $\{-1, 0, 1\}$ and a nonnegative strength component. Having done this decomposition, they also impose a Gumbel-Softmax prior on the direction component, minimizing the number of connections that switch direction across regimes. In contrast, working at the mesoscale, we enforce a direction mask on our effective connectivity matrices which does not change over time, and decompose the strength of each matrix into a common component $W$ and a switched component $S_i$.
There are certainly many biophysical models that have engaged neuromodulation from a bottom-up perspective, especially at neuronal and small-circuit scales, e.g., [42, 43, 44], which can generate inferences and predictions at the meso- and macro-scale [45, 46]. We view our contribution here as a methodological enabling of such approaches in the sense that our framework takes mesoscale data and performs an inference problem to arrive at a mesoscale model of neural modulation.
4.2. Modulated MINDy provides individualized inference of modulated whole-brain dynamics
Our developed approach represents a generalization of our mesoscale individualized neural dynamics (MINDy) framework [8, 9]. Specifically, rather than identifying a single dynamical regime, the generalized framework proposed here allows for spatially relevant representations of both the base connectivity common to all regimes and the modulation that may vary with time and brain state. In this sense, this modulated model architecture is also multi-timescale, as the population activity state $x_t$ changes on a much faster timescale than the regime index $i$. Importantly, our model architecture is biologically interpretable, providing insight into excitatory-inhibitory dynamics not provided by vanilla RNN architectures. Modulated MINDy accurately estimates known latent dynamics from synthetic data, and infers reliable and individualized models when tested on human data. Modulated MINDy is also scalable, requiring only ~10 minutes to fit 30 minutes of 20-channel EEG data with 3 modulation matrices.
4.3. Modulated MINDy as a tool to infer interpretable neuromodulation
As a proof of concept, we tested modulated MINDy on open-source, labeled EEG recordings of subjects receiving the general anesthetic drug propofol. We found that our models had modulation structures consistent with prior literature on propofol sedation, including promoting inhibition along the posterior-anterior spatial axis. Thus, modulated MINDy provides spatiotemporal models which are not only accurate and reliable, but can be interpreted to gain mechanistic insight into an individual's brain dynamics and how they are modulated over time or as a function of exogenous factors or inputs. We emphasize that our applicative example in anesthesia was not intended to make a specific scientific point about propofol per se, but rather to serve as a methodological validity test.
We envision a number of applicative contexts for the proposed method, for both basic scientific and clinical questions. Modulated MINDy can be used in a task context, to infer modulations unique to specific cognitive functions. It could be used, as in this proof of concept, to characterize the spatiotemporal effects of an exogenous intervention, either pharmacological or otherwise. Additionally, it could be used in clinical contexts, to associate different modulation patterns with various states of pathology, e.g., seizures, coma, or ischemia. Ultimately, modulated MINDy is a modeling tool to infer changes that underlie non-stationary brain recordings, where the non-stationary regimes are labeled in the data.
4.4. Limitations
We note a few important limitations of the proposed approach. Most notably, as outlined in our introduction, our approach tackles the problem of inferring what is modulated within network dynamics, and not the companion problem of inferring when such modulation has occurred. In other words, we require here known demarcation or labeling of nonstationary regimes. Future work will engage the challenge of inferring both modulation and regime engagement simultaneously. Our model itself is formulated at the mesoscale, with clear abstraction of cellular and sub-cellular dynamics. These assumptions could in principle be generalized, though that would lead to increased computational complexity in the ensuing inference problem.
4.5. Conclusion
In conclusion, we have presented modulated MINDy, a framework for fitting multi-timescale, modulated, mesoscale models of brain dynamics to individual data, and have validated it on both ground-truth synthetic data and actual human EEG data. In the future, we plan to extend this model by enabling modulated inference on data where the modulatory states are unlabeled, i.e., where there is a need to also infer the points at which the non-stationary regimes within the data change. We anticipate that modulated MINDy's ability to provide mechanistic inference will make it a powerful tool for analysis in many neuroscientific and clinical contexts.
Acknowledgments
Portions of this work were supported by grants R01NS130693 and 5T32NS126157-02 from the US National Institutes of Health.
References
- [1]. Babadi Behtash and Brown Emery N. A review of multitaper spectral analysis. IEEE Transactions on Biomedical Engineering, 61:1555–64, 2014. doi: 10.1109/TBME.2014.2311996.
- [2]. Blinowska Katarzyna and Durka Piotr. Electroencephalography (EEG). Wiley Encyclopedia of Biomedical Engineering, 2006.
- [3]. St. Louis Erik K, Frey Lauren C, Britton Jeffrey W, Hopp Jennifer L, Korb Pearce, Koubeissi Mohamad Z, Lievens William E, and Pestana-Knight Elia M. The normal EEG. In Electroencephalography (EEG): An Introductory Text and Atlas of Normal and Abnormal Findings in Adults, Children, and Infants. American Epilepsy Society, 2016.
- [4]. Marcuse LV, Schneider M, Mortati KA, Donnelly KM, Arnedo V, and Grant AC. Quantitative analysis of the EEG posterior-dominant rhythm in healthy adolescents. Clinical Neurophysiology, 119(8):1778–1781, 2008.
- [5]. Sanz Leon Paula, Knock Stuart A., Woodman M. Marmaduke, Domide Lia, Mersmann Jochen, McIntosh Anthony R., and Jirsa Viktor. The Virtual Brain: a simulator of primate brain network dynamics. Frontiers in Neuroinformatics, 7:10, 2013. doi: 10.3389/fninf.2013.00010.
- [6]. Schirner Michael, Rothmeier Simon, Jirsa Viktor K., McIntosh Anthony R., and Ritter Petra. An automated pipeline for constructing personalized virtual brains from multimodal neuroimaging data. NeuroImage, 117:343–357, 2015.
- [7]. Breakspear Michael. Dynamic models of large-scale brain activity. Nature Neuroscience, 20(3):340–352, 2017.
- [8]. Singh Matthew F., Braver Todd S., Cole Michael W., and Ching ShiNung. Estimation and validation of individualized dynamic brain models with resting state fMRI. NeuroImage, 221, 2020.
- [9]. Singh Matthew F., Braver Todd S., Cole Michael W., and Ching ShiNung. Precision data-driven modeling of cortical dynamics reveals person-specific mechanisms underpinning brain electrophysiology. PNAS, 122(3):1–12, 2025. doi: 10.1073/pnas.2409577121.
- [10]. Nir Yuval and de Lecea Luis. Sleep and vigilance states: Embracing spatiotemporal dynamics. Neuron, 111(13):1998–2011, 2023.
- [11]. Grigg Omer and Grady Cheryl L. Task-related effects on the temporal and spatial dynamics of resting-state functional connectivity in the default network. PLoS ONE, 5(10):e13311, 2010.
- [12]. Manuca R, Casdagli MC, and Savit RS. Nonstationarity in epileptic EEG and implications for neural dynamics. Mathematical Biosciences, 147(1):1–22, 1998.
- [13]. Tyrcha Joanna, Roudi Yasser, Marsili Matteo, and Hertz John. The effect of nonstationarity on models inferred from neural data. Journal of Statistical Mechanics: Theory and Experiment, 2013(03):P03005, 2013.
- [14]. Glaser Joshua I., Whiteway Matthew, Cunningham John P., Paninski Liam, and Linderman Scott W. Recurrent switching dynamical systems models for multiple interacting neural populations. Advances in Neural Information Processing Systems, 33, 2020.
- [15]. Hu Amber, Zoltowski David, Nair Aditya, Anderson David, Duncker Lea, and Linderman Scott. Modeling latent neural dynamics with Gaussian process switching linear dynamical systems. Advances in Neural Information Processing Systems, 37, 2024. URL http://arxiv.org/abs/2408.03330.
- [16]. Song Christian Y., Hsieh Han Lin, Pesaran Bijan, and Shanechi Maryam M. Modeling and inference methods for switching regime-dependent dynamical systems with multiscale neural observations. Journal of Neural Engineering, 19(6), 2022. doi: 10.1088/1741-2552/ac9b94.
- [17]. Song Christian Y. and Shanechi Maryam M. Unsupervised learning of stationary and switching dynamical system models from Poisson observations. Journal of Neural Engineering, 20(6), 2023. doi: 10.1088/1741-2552/ad038d.
- [18]. Weng Geyu, Clark Kelsey, Akbarian Amir, Noudoost Behrad, and Nategh Neda. Time-varying generalized linear models: characterizing and decoding neuronal dynamics in higher visual areas. Frontiers in Computational Neuroscience, 18:1–18, 2024. doi: 10.3389/fncom.2024.1273053.
- [19]. Rabiner Lawrence R. A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2):257–286, 1989. doi: 10.1109/5.18626.
- [20]. He Mingjian, Das Proloy, Hotan Gladia, and Purdon Patrick L. Switching state-space modeling of neural signal dynamics. PLoS Computational Biology, 19(8), 2023. doi: 10.1371/journal.pcbi.1011395.
- [21]. Li Chengrui, Kim Soon Ho, Rodgers Chris, Choi Hannah, and Wu Anqi. One-hot generalized linear model for switching brain state discovery. 12th International Conference on Learning Representations (ICLR), 2024.
- [22]. Karniol-Tambour Orren, Zoltowski David M., Diamanti E. Mika, Pinto Lucas, Brody Carlos D., Tank David W., and Pillow Jonathan W. Modeling state-dependent communication between brain regions with switching nonlinear dynamical systems. 12th International Conference on Learning Representations (ICLR), 2024.
- [23]. Zhang Yongxu and Saxena Shreya. Inference of neural dynamics using switching recurrent neural networks. Advances in Neural Information Processing Systems, 37, 2024.
- [24]. Kätzel Dennis, Zemelman Boris V., Buetfering Christina, Wölfel Markus, and Miesenböck Gero. The columnar and laminar organization of inhibitory connections to neocortical excitatory cells. Nature Neuroscience, 14(1):100–109, 2011. doi: 10.1038/nn.2687.
- [25]. Buzsáki György, Anastassiou Costas A., and Koch Christof. The origin of extracellular fields and currents: EEG, ECoG, LFP and spikes. Nature Reviews Neuroscience, 13(6):407–420, 2012. doi: 10.1038/nrn3241.
- [26]. Schwamb Addison, Singh Matthew, Guerriero Réjean, and Ching ShiNung. Data-driven modeling of neural dynamics from EEG to track physiological changes. Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pages 1–4, 2024. doi: 10.1109/EMBC53108.2024.10781777.
- [27]. Kalman R. E. A new approach to linear filtering and prediction problems. Journal of Basic Engineering, 82(Series D):35–45, 1960.
- [28]. Julier Simon J. and Uhlmann Jeffrey K. New extension of the Kalman filter to nonlinear systems. Signal Processing, Sensor Fusion, and Target Recognition VI, 3068:182, 1997. doi: 10.1117/12.280797.
- [29]. Marder Eve and Thirumalai Vatsala. Cellular, synaptic and network effects of neuromodulation. Neural Networks, 15(4–6):479–493, 2002.
- [30]. Dozat Timothy. Incorporating Nesterov momentum into Adam. ICLR Workshop, 2016.
- [31]. Mazza Damiano and Pagani Michele. Automatic differentiation in PCF. Proceedings of the ACM on Programming Languages, 5(POPL):1–4, 2021. doi: 10.1145/3434309.
- [32]. Chennu Srivas, O'Connor Stuart, Adapa Ram, Menon David K., and Bekinschtein Tristan A. Brain connectivity dissociates responsiveness from drug exposure during propofol-induced transitions of consciousness. PLoS Computational Biology, 12(1):1–17, 2016. doi: 10.1371/journal.pcbi.1004669.
- [33]. Chennu Srivas, O'Connor Stuart, Adapa Ram, Menon David K., and Bekinschtein Tristan A. Research data supporting "Brain connectivity during propofol sedation". 2015. doi: 10.17863/CAM.68959. URL https://www.repository.cam.ac.uk/handle/1810/252736.
- [34]. Trapani Giuseppe, Altomare Cosimo, Sanna Enrico, Biggio Giovanni, and Liso Gaetano. Propofol in anesthesia. Mechanism of action, structure-activity relationships, and drug delivery. Current Medicinal Chemistry, 7:249–271, 2000.
- [35]. Vanlersberghe C and Camu F. Propofol, pages 227–252. Springer Berlin Heidelberg, Berlin, Heidelberg, 2008. ISBN 978-3-540-74806-9. doi: 10.1007/978-3-540-74806-9_11.
- [36]. Murphy Michael, Bruno Marie-Aurélie, Riedner Brady A., Boveroux Pierre, Noirhomme Quentin, Landsness Eric C., Brichant Jean-François, Phillips Christophe, Massimini Marcello, Laureys Steven, Tononi Giulio, and Boly Mélanie. Propofol anesthesia and sleep: A high-density EEG study. Sleep, 34(3), 2011. doi: 10.1093/sleep/34.3.283.
- [37]. Purdon Patrick L., Pierce Eric T., Mukamel Eran A., Prerau Michael J., Walsh John L., Wong Kin Foon K., Salazar-Gomez Andres F., Harrell Priscilla G., Sampson Aaron L., Cimenser Aylin, Ching ShiNung, Kopell Nancy J., Tavares-Stoeckel Casie, Habeeb Kathleen, Merhar Rebecca, and Brown Emery N. Electroencephalogram signatures of loss and recovery of consciousness from propofol. Proceedings of the National Academy of Sciences, 110(12), 2013. doi: 10.1073/pnas.1221180110.
- [38]. Vijayan Sujith, Ching ShiNung, Purdon Patrick L, Brown Emery N, and Kopell Nancy J. Thalamocortical mechanisms for the anteriorization of alpha rhythms during propofol-induced unconsciousness. Journal of Neuroscience, 33(27):11070–11075, 2013.
- [39]. Ching ShiNung, Cimenser Aylin, Purdon Patrick L., Brown Emery N., and Kopell Nancy J. Thalamocortical model for a propofol-induced α-rhythm associated with loss of consciousness. Proceedings of the National Academy of Sciences, 107(52):22665–22670, 2010. doi: 10.1073/pnas.1017069108.
- [40]. Wertz Samuel, Vandaele Arnaud, and Gillis Nicolas. Efficient algorithms for the Hadamard decomposition, 2025. URL https://arxiv.org/abs/2504.13633.
- [41]. Ciaperoni Martino, Gionis Aristides, and Mannila Heikki. The Hadamard decomposition problem. Data Mining and Knowledge Discovery, 38(4):2306–2347, 2024.
- [42]. Fellous Jean-Marc and Linster Christiane. Computational models of neuromodulation. Neural Computation, 10(4):771–805, 1998.
- [43]. Goldman Mark S, Golowasch Jorge, Marder Eve, and Abbott LF. Global structure, robustness, and modulation of neuronal models. Journal of Neuroscience, 21(14):5229–5238, 2001.
- [44]. Marder Eve. Variability, compensation, and modulation in neurons and circuits. Proceedings of the National Academy of Sciences, 108(supplement_3):15542–15548, 2011.
- [45]. Jones Stephanie R, Pritchett Dominique L, Sikora Michael A, Stufflebeam Steven M, Hämäläinen Matti, and Moore Christopher I. Quantitative analysis and biophysically realistic neural modeling of the MEG mu rhythm: rhythmogenesis and modulation of sensory-evoked responses. Journal of Neurophysiology, 102(6):3554–3572, 2009.
- [46]. Shine James M, Müller Eli J, Munn Brandon, Cabral Joana, Moran Rosalyn J, and Breakspear Michael. Computational models link cellular mechanisms of neuromodulation to large-scale neural dynamics. Nature Neuroscience, 24(6):765–776, 2021.