. 2022 Feb 19;50(2):241–249. doi: 10.1007/s10827-021-00810-8

Estimating anisotropy directly via neural timeseries

Erik D Fagerholm 1,, W M C Foulkes 2, Yasir Gallero-Salas 3,4, Fritjof Helmchen 3,4, Rosalyn J Moran 1, Karl J Friston 5, Robert Leech 1
PMCID: PMC9035010  PMID: 35182268

Abstract

An isotropic dynamical system is one that looks the same in every direction, i.e., if we imagine standing somewhere within an isotropic system, we would not be able to differentiate between different lines of sight. Conversely, anisotropy is a measure of the extent to which a system deviates from perfect isotropy, with larger values indicating greater discrepancies between the structure of the system along its axes. Here, we derive the form of a generalised scalable (mechanically similar) discretized field theoretic Lagrangian that allows for levels of anisotropy to be directly estimated via timeseries of arbitrary dimensionality. We generate synthetic data for both isotropic and anisotropic systems and, by using Bayesian model inversion and reduction, show that we can discriminate between the two datasets – thereby demonstrating proof of principle. We then apply this methodology to murine calcium imaging data collected in rest and task states, showing that anisotropy can be estimated directly from different brain states and cortical regions in an empirical in vivo biological setting. We hope that this theoretical foundation, together with the methodology and publicly available MATLAB code, will provide an accessible way for researchers to obtain new insight into the structural organization of neural systems in terms of how scalable neural regions grow – both ontogenetically during the development of an individual organism, as well as phylogenetically across species.

Supplementary Information

The online version contains supplementary material available at 10.1007/s10827-021-00810-8.

Keywords: Anisotropy, Neuroimaging, DCM, Data fitting, Lagrangian, Field theory

Introduction

Two of the main concepts upon which computational neuroscience models are based are those of the ‘particle’ (Sears, 1964) and the ‘field’ (McMullin, 2002) – both terms that are inherited from theoretical physics.

Particle theoretic models

In the particle theoretic approach we treat every node within a neural system as a zero-dimensional (point-like) element – a so-called ‘particle’ that evolves in time. The way in which each of these neural particles evolves influences the rest of the connected system, such that collectively, the particles form nodes of a dynamically evolving graph (Deco et al., 2008). Particle theoretic frameworks yield experimental advantages for neuroimaging modalities such as electroencephalography (EEG), in which there are usually very few measurement locations. Furthermore, particle theoretic frameworks have computational and statistical advantages for neuroimaging analyses due to the associated dimensionality reduction – an attribute that becomes increasingly important for large-scale recordings of neural systems (Izhikevich & Edelman, 2008). However, this computational expediency comes at the cost of losing the spatial information contained in a continuum description.

Field theoretic models

The field theoretic approach treats a neural system as a continuous structure called a ‘field’ that is a function of position, with position treated as a continuous variable. A neural field can exist in 2D or 3D space: it is natural to work in a two-dimensional space when modelling a single cortical sheet or a three-dimensional space for a cross-cortical volume (Breakspear, 2017). In this paper, we use a model that is in essence a compromise between the particle and field models, by taking a continuous field and discretizing it such that we only consider certain points in space – which we henceforth refer to as a discretized field theoretic model.

Isotropic vs. anisotropic systems

A system is said to be isotropic if it looks identical in every direction. This means that if we imagine ourselves standing somewhere within an isotropic structure, then we would see precisely the same structure along all lines of sight. Conversely, anisotropy is a measure of the extent to which a system deviates from perfect isotropy. For example, a sheet of wood is anisotropic due to the preferential directionality of the grain – which we can see by the fact that it is easier to break the wood along the grain than it is to break it against the grain. We present a discretized field theoretic model that allows for the estimation of anisotropy in connected dynamical systems of arbitrary dimensionality. We provide accompanying MATLAB code in a public repository that can be readily used to measure levels of anisotropy on a node-wise basis via timeseries measurements.

Here, we focus on anisotropy as the main parameter of interest, as this quantity is usually studied in neuroscience in the context of diffusion tensor imaging (DTI). The latter provides measures of the structural integrity of axons by quantifying the extent to which water molecules diffuse along them – i.e., anisotropically. Damage to white matter caused by, e.g., traumatic brain injury (TBI) can cause axonal tissue to rupture, resulting in water molecules diffusing more isotropically than in an intact axon. The measure of anisotropy we propose here, as opposed to that of DTI, can be estimated directly from any region-specific neuroimaging timeseries. This allows us to implement the mathematical framework of Lagrangian field theory in the study of a dynamically (as opposed to structurally) derived measure of anisotropy. Furthermore, as we embed the anisotropy measure in a scalable mathematical framework, we allow for an estimation of how similar (isotropic) or dissimilar (anisotropic) the signals of neighbouring regions are during the growth of neural systems.

Inference of anisotropy

We estimate anisotropy in arbitrary neuronal timeseries, together with hyperparameters that describe the variance of states and parameters, by using Dynamic Expectation Maximisation (DEM) (Friston et al., 2008) within the Statistical Parametric Mapping (SPM) software. This inference method uses generalised coordinates of motion within a Laplace approximation routine over states and parameters (multivariate Gaussians). In contrast with other inference methods, DEM allows us to use four embedding dimensions, which accommodate smooth noise processes – unlike, e.g., traditional Kalman filters, which employ martingale assumptions (Roebroeck et al., 2009).

Specifically, DEM approximates the posterior of a parameter quantifying anisotropy using three steps:

  1. The D step uses variational Bayesian filtering as an instantaneous gradient descent in a moving frame of reference for state estimation in generalised coordinates;

  2. The E step estimates the model parameter quantifying anisotropy by gradient ascent on the negative variational free energy. The variational free energy $F$ combines both accuracy and complexity when scoring models: $$F = \underbrace{\log p(y \mid \theta, m)}_{\text{accuracy}} - \underbrace{KL\big[q(\theta),\, p(\theta \mid m)\big]}_{\text{complexity}},$$ where $\log p(y \mid \theta, m)$ is the log likelihood of the data $y$ conditioned upon model states, parameters and hyperparameters $\theta$, and model structure $m$. We seek a model that provides an accurate and maximally simple explanation for the data.

  3. The M step repeats this process for the hyperparameters, given by the precision components of random fluctuations on the states and observation noise (Friston et al., 2008).
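The accuracy–complexity scoring used in the E step above can be made concrete with a toy example. The following Python sketch (for illustration only – the authors' implementation uses DEM within the SPM software, which this does not reproduce) evaluates $F$ for a univariate Gaussian posterior and prior, where the KL term has a closed form; all numerical values are hypothetical:

```python
import math

def gaussian_kl(m_q, s_q, m_p, s_p):
    """Closed-form KL[q || p] between two univariate Gaussians."""
    return math.log(s_p / s_q) + (s_q**2 + (m_q - m_p)**2) / (2 * s_p**2) - 0.5

def log_likelihood(y, pred, s_noise):
    """Gaussian log likelihood of data y under model predictions pred."""
    return sum(-0.5 * math.log(2 * math.pi * s_noise**2)
               - (yi - pi)**2 / (2 * s_noise**2)
               for yi, pi in zip(y, pred))

# hypothetical data, predictions, posterior q and prior p over one parameter
y, pred = [1.0, 1.2, 0.9], [1.0, 1.1, 1.0]
accuracy = log_likelihood(y, pred, s_noise=0.5)
complexity = gaussian_kl(m_q=0.8, s_q=0.3, m_p=0.0, s_p=1.0)
F = accuracy - complexity  # higher F: an accurate yet simple explanation
```

A model that fits the data no better but whose posterior drifts further from the prior pays a larger complexity penalty and receives a lower $F$.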

Overview

This paper comprises three sections.

In the first, we outline the theoretical foundations of Lagrangian field theory and the form of a generalised scalable discretized equation of motion that can be used both for forward generative models and for model inversion via neural timeseries.

In the second section, we generate in silico data via forward models of an isotropic and an anisotropic system. We then use Bayesian model inversion and subsequent Bayesian model reduction to show that we can correctly discriminate between the isotropic and anisotropic systems – thereby providing construct validation.

In the third section, we use murine data collected in both rest and task states to map levels of anisotropy across different cortical regions directly via the in vivo timeseries. These wide-field calcium imaging data were collected across the left hemisphere of mouse cortex expressing GCaMP6f in layer 2/3 excitatory neurons (Fagerholm et al., 2021; Gallero-Salas et al., 2021; Gilad et al., 2018), with cortical areas aligned to the Allen Mouse Common Coordinate Framework (Mouse & Coordinate, 2016).

We suggest that the presented methodology could be valuable in future large-scale studies of neural systems, in which the quantification of region-wise anisotropy may shed light on how neural systems grow both ontogenetically within the lifespan of an individual animal, as well as phylogenetically across species (Buzsaki et al., 2013).

Methods

We will now cover the technical foundations of the approach, starting with Lagrangian field theory and the principle of stationary action. We then derive a generalised, scalable, discretized field theoretic Lagrangian and consider the estimation of anisotropy under this formulation using empirical (timeseries) data and Bayesian estimators.

Lagrangian field theory

We remind the reader of the basic concepts underlying Lagrangian field theory and the principle of stationary action in Appendix I. In brief: we represent the state of a system by a field, which is a function of the 4-D space–time position $r \equiv (t, x, y, z)$. The equations of motion that describe how this field evolves in time are obtained by requiring that the field $\varphi(r)$ renders stationary the value of a quantity known as the action $S$:

$$S[\varphi(r)] = \int_\Omega d^4r\, \mathcal{L}(r, \varphi, \partial\varphi) \tag{1}$$

The integral in the definition of the action is over the space–time domain $\Omega$, encompassing all space from the initial time $t_i$ to the final time $t_f$. The integrand, known as the Lagrangian density $\mathcal{L}(r, \varphi, \partial\varphi)$, defines the system of interest as a function of $r$, the value of the field $\varphi$ at $r$, and its spatiotemporal derivatives $\partial\varphi$ at $r$.

Scale transformations

We define a scale transformation as a mapping from arbitrary points in the 5-D space with axes labelled $(\varphi, r) = (\varphi, t, x, y, z)$ to scaled points $(\varphi_s, r_s) = (\lambda\varphi, \lambda^\alpha r)$, where $\lambda$ is an arbitrary scale factor and $\alpha$ is a constant. A field configuration $\varphi = \varphi(r)$ is a 4-D surface in this 5-D space, and the scale transformation takes points on that surface to points on a new 4-D surface, defining a new field configuration. The value of the new field $\varphi_s$ at the scaled space–time point $r_s = \lambda^\alpha r$ is related to the value of the original field $\varphi$ at the unscaled point $r$ via

$$\varphi_s(r_s) = \lambda\varphi(r) \;\Rightarrow\; \varphi_s(\lambda^\alpha r) = \lambda\varphi(r) \;\Rightarrow\; \varphi_s(r) = \lambda\varphi(\lambda^{-\alpha} r) \tag{2}$$

It is convenient to allow different scaling exponents in different space–time directions, so from now on $\lambda^\alpha r$ is to be understood as shorthand for the vector $(\lambda^{\alpha_t} t,\, \lambda^{\alpha_x} x,\, \lambda^{\alpha_y} y,\, \lambda^{\alpha_z} z)$.

Taking partial derivatives of Eq. (2), we obtain:

$$\partial_\mu \varphi_s(r) = \lambda^{1-\alpha_\mu}\, \varphi_\mu(\lambda^{-\alpha} r), \tag{3}$$

where $\lambda^{1-\alpha_\mu}$ depends only on the $\mu$th component of the vector of exponents $\alpha = (\alpha_t, \alpha_x, \alpha_y, \alpha_z)$. From now on, we denote the vector with components $\lambda^{1-\alpha_\mu}\varphi_\mu(\lambda^{-\alpha} r)$ as $\lambda^{1-\alpha}\partial\varphi(\lambda^{-\alpha} r)$.

Scaling the action

Using Eqs. (1), (2) and (3), we see that the scaled action is given by:

$$S[\varphi_s(r)] = \int_{\lambda^{\alpha_t} t_i}^{\lambda^{\alpha_t} t_f} dt \int_{\text{all } x,y,z} dx\,dy\,dz\; \mathcal{L}\big(r,\, \lambda\varphi(\lambda^{-\alpha} r),\, \lambda^{1-\alpha}\partial\varphi(\lambda^{-\alpha} r)\big) \tag{4}$$

We then change variables in Eq. (4), setting $r' = \lambda^{-\alpha} r$, such that:

$$S[\varphi_s(r)] = \lambda^{\sum_\nu \alpha_\nu} \int_{t_i}^{t_f} dt' \int_{\text{all } x',y',z'} dx'\,dy'\,dz'\; \mathcal{L}\big(\lambda^\alpha r',\, \lambda\varphi(r'),\, \lambda^{1-\alpha}\partial\varphi(r')\big) \tag{5}$$

where $\lambda^{\sum_\nu \alpha_\nu}$ is the Jacobian that accounts for the change of integration variables and $\sum_\nu \alpha_\nu = \alpha_t + \alpha_x + \alpha_y + \alpha_z$. The integrals are now over the same space–time region $\Omega$ as in the original unscaled action in Eq. (1), which means that we can rewrite Eq. (5) using the same compact notation:

$$S[\varphi_s(r)] = \lambda^{\sum_\nu \alpha_\nu} \int_\Omega d^4r\; \mathcal{L}\big(\lambda^\alpha r,\, \lambda\varphi(r),\, \lambda^{1-\alpha}\partial\varphi(r)\big) \tag{6}$$

Scalable systems

The action $S[\varphi(r)]$ is said to be scalable, or equivalently to possess ‘mechanical similarity’ (Landau & Lifshitz, 1976), if the following relationship holds for any choice of $\varphi(r)$ – not just choices that render the action stationary and therefore satisfy the Euler–Lagrange equation of motion (see Appendix I):

$$S[\varphi_s(r)] = \lambda^n S[\varphi(r)], \tag{7}$$

where n is a constant. Note that a ‘scalable’ system should not be confused with a ‘scale free’ system, which is one that lacks a characteristic length scale, such as those studied in the physics of phase transitions.

More explicitly, we can use Eqs. (6) and (7) to express the scalability condition as follows:

$$\int_\Omega d^4r\; \mathcal{L}\big(\lambda^\alpha r,\, \lambda\varphi(r),\, \lambda^{1-\alpha}\partial\varphi(r)\big) = \lambda^{\,n - \sum_\nu \alpha_\nu} \int_\Omega d^4r\; \mathcal{L}\big(r,\, \varphi(r),\, \partial\varphi(r)\big) \tag{8}$$

Generalised scalable Lagrangians

We can expand any analytic Lagrangian density $\mathcal{L}(\varphi, \partial\varphi)$ as a power series:

$$\mathcal{L} = \sum_{a,\, b_t,\, b_x,\, b_y,\, b_z} C_{a, b_t, b_x, b_y, b_z}\; \varphi^a\, \dot{\varphi}^{b_t}\, \varphi_x^{b_x}\, \varphi_y^{b_y}\, \varphi_z^{b_z}, \tag{9}$$

where $C_{a, b_t, b_x, b_y, b_z}$ is an expansion coefficient and the summations over the integers $a, b_t, b_x, b_y, b_z$ range from 0 to $\infty$. We have assumed for the sake of simplicity that the Lagrangian density has no explicit dependence on $r$. This is normally the case when the system of interest is not driven by external forces or other influences. We next use Eq. (9) to obtain $\mathcal{L}(\lambda\varphi, \lambda^{1-\alpha}\partial\varphi)$:

$$\mathcal{L}(\lambda\varphi, \lambda^{1-\alpha}\partial\varphi) = \sum_{a,\, b_t,\, b_x,\, b_y,\, b_z} C_{a, b_t, b_x, b_y, b_z}\; \lambda^{\,a + \sum_\nu (1-\alpha_\nu) b_\nu}\; \varphi^a\, \dot{\varphi}^{b_t}\, \varphi_x^{b_x}\, \varphi_y^{b_y}\, \varphi_z^{b_z} \tag{10}$$

For the action to be scalable, Eq. (8) tells us that Eq. (10) must equal:

$$\lambda^{\,n - \sum_\nu \alpha_\nu} \sum_{a,\, b_t,\, b_x,\, b_y,\, b_z} C_{a, b_t, b_x, b_y, b_z}\; \varphi^a\, \dot{\varphi}^{b_t}\, \varphi_x^{b_x}\, \varphi_y^{b_y}\, \varphi_z^{b_z} \tag{11}$$

We conclude that the action is scalable if and only if

$$a + \sum_\nu (1 - \alpha_\nu)\, b_\nu = n - \sum_\nu \alpha_\nu \tag{12}$$

for all choices of the integers $a$ and $b_\nu$ at which $C_{a, b_t, b_x, b_y, b_z}$ is non-zero. If, for example, we consider possible contributions to the Lagrangian with specific values of $b_t$, $b_x$, $b_y$, and $b_z$, Eq. (12) tells us that $C_{a, b_t, b_x, b_y, b_z}$ must be zero unless $a = n - \sum_\nu \alpha_\nu - \sum_\nu (1 - \alpha_\nu) b_\nu$. The value of $a$ is thus determined by the values of $b_t$, $b_x$, $b_y$, $b_z$, and the summation over $a$ is no longer required. The generalised scalable discretized field theoretic Lagrangian may therefore be written as follows:

$$\mathcal{L} = \sum_{b_t,\, b_x,\, b_y,\, b_z} C_{b_t, b_x, b_y, b_z}\; \varphi^{\,n + \sum_\nu (\alpha_\nu - 1) b_\nu - \sum_\nu \alpha_\nu}\; \dot{\varphi}^{b_t}\, \varphi_x^{b_x}\, \varphi_y^{b_y}\, \varphi_z^{b_z} \tag{13}$$

The anisotropic wave equation

Let us now design a special case of Eq. (13) in two spatial dimensions. Having chosen to set $\alpha_t = \alpha_x$ and $n = \alpha_y + 2$, we construct a Lagrangian density with three non-zero terms. In the first term, $C_{b_t, b_x, b_y} = 1$, $b_t = 2$, and $b_x = b_y = b_z = 0$; in the second term, $C_{b_t, b_x, b_y} = -1$, $b_x = 2$, and $b_t = b_y = b_z = 0$; and finally, in the third term, $C_{b_t, b_x, b_y} = -1$, $b_y = 2$, and $b_t = b_x = b_z = 0$.

This yields the Lagrangian density:

$$\mathcal{L} = \dot{\varphi}^2 - \varphi_x^2 - \varphi^{2\beta}\varphi_y^2, \tag{14}$$

where the exponent $\beta$, defined by $\beta = \alpha_y - \alpha_x$, quantifies the degree of anisotropy, such that the system is perfectly isotropic when $\beta = 0$ (i.e., $\alpha_y = \alpha_x$). To provide intuition for the way in which the $\beta$ parameter affects the system’s dynamics, we run a series of forward models for a range of $\beta$ values in Supplementary Fig. 1.
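As a sanity check on this construction, the snippet below verifies numerically that all three terms satisfy the scalability condition of Eq. (12) – i.e., that each carries the required power of φ. It is a Python sketch with arbitrary illustrative values for the scaling exponents (the paper's released code is in MATLAB); in two spatial dimensions the sums over ν run over t, x, y:

```python
def exponent_a(n, alpha, b):
    """Eq. (12) solved for a: a = n - sum(alpha) - sum((1 - alpha_nu) * b_nu)."""
    return n - sum(alpha.values()) - sum((1 - alpha[nu]) * b[nu] for nu in b)

# illustrative exponents with alpha_t = alpha_x and n = alpha_y + 2
at = ax = 0.5
ay = 1.25                                  # so beta = ay - ax = 0.75
alpha = {'t': at, 'x': ax, 'y': ay}
n = ay + 2

assert exponent_a(n, alpha, {'t': 2, 'x': 0, 'y': 0}) == 0              # phi-dot^2 term
assert exponent_a(n, alpha, {'t': 0, 'x': 2, 'y': 0}) == 0              # phi_x^2 term
assert exponent_a(n, alpha, {'t': 0, 'x': 0, 'y': 2}) == 2 * (ay - ax)  # phi^(2*beta) phi_y^2 term
```

The first two terms require no power of φ, while the φ_y² term must be multiplied by φ^(2β), reproducing Eq. (14).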

The corresponding equation of motion – the two-dimensional Euler–Lagrange equation (see Appendix I) – is:

$$\ddot{\varphi} = \varphi_{xx} + \varphi^{2\beta}\varphi_{yy} + \beta\varphi^{2\beta-1}\varphi_y^2 \tag{15}$$

We can verify that if $\varphi(t,x,y)$ is a solution of this equation, so is the scaled field $\varphi_s(t,x,y) = \lambda\varphi(\lambda^{-\alpha_t} t,\, \lambda^{-\alpha_x} x,\, \lambda^{-\alpha_y} y) = \lambda\varphi(\lambda^{-\alpha_x} t,\, \lambda^{-\alpha_x} x,\, \lambda^{-(\alpha_x+\beta)} y)$ (see Appendix II).

In the case of an isotropic system, when β=0, the Euler–Lagrange equation becomes:

$$\ddot{\varphi} = \varphi_{xx} + \varphi_{yy} \tag{16}$$

We see that the Lagrangian density of Eq. (14) leads to an equation of motion (15) that reduces to the wave Eq. (16) in the case of an isotropic system. This makes it an intuitive test case.
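The passage from the Lagrangian density of Eq. (14) to the equation of motion (15) can be checked symbolically. The sketch below uses Python with SymPy (an independent verification of the algebra, not part of the authors' MATLAB pipeline) to apply the Euler–Lagrange operator over (t, x, y) and confirm that the result is proportional to the residual of Eq. (15):

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

t, x, y = sp.symbols('t x y')
beta = sp.Symbol('beta', positive=True)
phi = sp.Function('phi')(t, x, y)

# Lagrangian density of Eq. (14)
L = sp.Derivative(phi, t)**2 - sp.Derivative(phi, x)**2 \
    - phi**(2 * beta) * sp.Derivative(phi, y)**2

# Euler-Lagrange equation dL/dphi - sum_mu d_mu(dL/dphi_mu) = 0
el = euler_equations(L, phi, (t, x, y))[0].lhs

# residual of Eq. (15): phi_tt - phi_xx - phi^(2b) phi_yy - b phi^(2b-1) phi_y^2
eq15 = sp.Derivative(phi, t, 2) - sp.Derivative(phi, x, 2) \
       - phi**(2 * beta) * sp.Derivative(phi, y, 2) \
       - beta * phi**(2 * beta - 1) * sp.Derivative(phi, y)**2

# the Euler-Lagrange expression equals -2 times that residual
assert sp.simplify(el + 2 * eq15) == 0
```

Setting β = 0 in `eq15` recovers the residual of the isotropic wave Eq. (16).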

Spatial discretisation

For Eq. (15) to be used in the modelling of neural timeseries we must first discretize the partial spatial derivatives. This is necessary because we do not deal with spatially continuous data in neuroimaging, but rather with data collected at a discrete set of points. We therefore make the following standard transformations:

$$\varphi_y \to \tfrac{1}{2}\big(\varphi_{x,y+1} - \varphi_{x,y-1}\big), \qquad \varphi_{xx} \to \varphi_{x+1,y} - 2\varphi_{x,y} + \varphi_{x-1,y}, \qquad \varphi_{yy} \to \varphi_{x,y+1} - 2\varphi_{x,y} + \varphi_{x,y-1}, \tag{17}$$

where $\varphi_{x,y}$ is the value of the field at the point $(x,y)$ and, e.g., $\varphi_{x+1,y}$ is the value of the field one ‘step’ in the positive $x$ direction of the graph from the point $(x,y)$.
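Written out as code, the stencils of Eq. (17) are one-liners. The Python sketch below (the released code accompanying the paper is MATLAB) applies them to a quadratic test field on a unit grid, for which central differences are exact:

```python
def d_y(f, x, y):
    """First derivative in y, Eq. (17): half the centred difference."""
    return 0.5 * (f[x][y + 1] - f[x][y - 1])

def d_xx(f, x, y):
    """Second derivative in x, Eq. (17)."""
    return f[x + 1][y] - 2 * f[x][y] + f[x - 1][y]

def d_yy(f, x, y):
    """Second derivative in y, Eq. (17)."""
    return f[x][y + 1] - 2 * f[x][y] + f[x][y - 1]

# on phi = x^2 + y^2 the stencils recover phi_xx = phi_yy = 2 and phi_y = 2y exactly
phi = [[x**2 + y**2 for y in range(5)] for x in range(5)]
assert d_xx(phi, 2, 2) == 2
assert d_yy(phi, 2, 2) == 2
assert d_y(phi, 2, 3) == 2 * 3
```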

Applying the transformations to Eq. (15), we obtain:

$$\ddot{\varphi}_{x,y} = \varphi_{x+1,y} - 2\varphi_{x,y} + \varphi_{x-1,y} + \varphi_{x,y}^{2\beta}\big(\varphi_{x,y+1} - 2\varphi_{x,y} + \varphi_{x,y-1}\big) + \beta\varphi_{x,y}^{2\beta-1}\Big[\tfrac{1}{2}\big(\varphi_{x,y+1} - \varphi_{x,y-1}\big)\Big]^2, \tag{18}$$

which we split into two first-order differential equations by defining a new variable to obtain the final form of the equations of motion used in all subsequent forward models and Bayesian model inversions presented in this paper:

$$\dot{\varphi} = \theta, \qquad \dot{\theta}_{x,y} = \varphi_{x+1,y} - 2\varphi_{x,y} + \varphi_{x-1,y} + \varphi_{x,y}^{2\beta}\big(\varphi_{x,y+1} - 2\varphi_{x,y} + \varphi_{x,y-1}\big) + \beta\varphi_{x,y}^{2\beta-1}\Big[\tfrac{1}{2}\big(\varphi_{x,y+1} - \varphi_{x,y-1}\big)\Big]^2 \tag{19}$$

We then use Eq. (19) as the equations of motion for subsequent model inversion with the Statistical Parametric Mapping (SPM) software. The associated observer equation comprises the $\varphi$ variables – i.e., we assume that the strength of the field $\varphi_{x,y}$ is what is measured in the neural timeseries at the $(x,y)$ coordinate.

Equation (19) is the basis of the MATLAB code made available for use with forward generative models, as well as with Bayesian model inversion of timeseries of arbitrary dimensionality from any neuroimaging modality.
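To illustrate how Eq. (19) can drive a forward generative model, here is a minimal Python sketch (the published implementation is MATLAB within SPM). The Euler-style integrator, step size, initial conditions, and boundary handling (interior points updated, boundary held fixed) are all our own illustrative assumptions:

```python
def rhs(phi, x, y, beta):
    """Right-hand side of Eq. (19) at interior grid point (x, y)."""
    lap_x = phi[x + 1][y] - 2 * phi[x][y] + phi[x - 1][y]
    lap_y = phi[x][y + 1] - 2 * phi[x][y] + phi[x][y - 1]
    grad_y = 0.5 * (phi[x][y + 1] - phi[x][y - 1])
    f = phi[x][y]
    return lap_x + f**(2 * beta) * lap_y + beta * f**(2 * beta - 1) * grad_y**2

def simulate(beta, n=3, steps=200, dt=0.01):
    # hypothetical initial conditions: field values between 1 and 2, zero velocity
    phi = [[1.0 + 0.1 * (x * n + y) for y in range(n)] for x in range(n)]
    theta = [[0.0] * n for _ in range(n)]          # theta = phi-dot, Eq. (19)
    for _ in range(steps):
        acc = [[rhs(phi, x, y, beta) if 0 < x < n - 1 and 0 < y < n - 1 else 0.0
                for y in range(n)] for x in range(n)]
        for x in range(n):
            for y in range(n):
                theta[x][y] += dt * acc[x][y]      # theta-dot, second line of Eq. (19)
                phi[x][y] += dt * theta[x][y]      # phi-dot = theta, first line
    return phi

iso, aniso = simulate(beta=0.0), simulate(beta=-3.0)
assert abs(iso[1][1] - 1.4) < 1e-9    # beta = 0: this linear ramp stays at rest
assert abs(aniso[1][1] - 1.4) > 1e-4  # beta != 0: the centre pixel moves
```

With β = 0 only the discrete Laplacian terms survive (the isotropic wave Eq. (16)), and a linear ramp of initial values is an equilibrium; with β ≠ 0 the centre pixel acquires the anisotropic corrections and drifts away from its initial value.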

Synthetic data

We consider a 2D grid of size 3 × 3 pixels, where each of the nine pixels is given different initial conditions and subsequently allowed to evolve according to the equation of motion in Eq. (19). We run two forward models: a) one isotropic case in which β = 0; and b) one anisotropic case in which β ≠ 0. Having created these synthetic data, we then perform Bayesian model inversion using Dynamic Expectation Maximization (DEM) (Friston et al., 2008) to infer the latent states and to estimate both the β parameter and the hyperparameters – the components of precision of the random fluctuations on the observation noise and states. Model inversion for any discrete system can be performed on a node-by-node basis, by considering the ways in which the dynamics evolve in the immediate neighbourhood of the node under consideration. When this model is equipped with fluctuations, one can use standard (Variational Laplace) Bayesian model inversion procedures to estimate the exponents for any given timeseries. We set the prior for the free parameter β to zero and use DEM to obtain a posterior estimate for β from both the synthetic isotropic and anisotropic data. Following model inversion, we use Bayesian model reduction (Friston et al., 2015; Rosa et al., 2012) to test the evidence for a perfectly isotropic system in which β = 0, by setting the prior variance of β to zero. We are therefore able to test whether we can correctly identify the ground truth isotropic data (created with β = 0) via a higher evidence for the reduced model and, conversely, whether we can correctly identify the ground truth anisotropic data (created with β ≠ 0) via a higher evidence for the full model.
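The model comparison at the end of this procedure can be illustrated generically: under flat priors over models, (approximate) log evidences are converted into posterior model probabilities by a softmax. The Python snippet below uses hypothetical free-energy values and is not the SPM Bayesian model reduction routine itself:

```python
import math

def model_probabilities(log_evidences):
    """Posterior model probabilities from log evidences, assuming flat model priors."""
    m = max(log_evidences)                       # subtract max for numerical stability
    w = [math.exp(F - m) for F in log_evidences]
    total = sum(w)
    return [wi / total for wi in w]

# hypothetical free energies for the reduced (isotropic) and full (anisotropic) models
F_reduced, F_full = -100.0, -105.0
p_reduced, p_full = model_probabilities([F_reduced, F_full])
assert p_reduced > 0.99   # a log-evidence difference of 5 is decisive
```

A log-evidence difference of about three already corresponds to a posterior probability of roughly 0.95 for the better model.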

Empirical data

All animal experiments were carried out according to the guidelines of the Veterinary Office of Switzerland, following approval by the Cantonal Veterinary Office in Zürich. All murine calcium imaging data were collected as previously reported (Fagerholm et al., 2021; Gallero-Salas et al., 2021; Gilad et al., 2018). As with the synthetic data, we perform Bayesian model inversion to obtain posterior estimates for the β parameter, quantifying the extent to which the timeseries for each pixel deviate from isotropy at β = 0. We perform this model inversion once for every second pixel (n = 6651) within each trial (n = 10), mouse (n = 3) and condition (n = 2: task and rest). Following model inversion, we average the posteriors for the β parameter across trials to obtain results per mouse and condition, and then average these posteriors once more across mice. We then filter these averaged images with a 2-D Gaussian moving-average smoothing kernel with a semi-width window of 7 pixels.
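The final smoothing step can be sketched as follows in Python; the 7-pixel semi-width follows the text, while the kernel width σ here is an illustrative assumption (the value used by the authors is fixed in their released code):

```python
import math

def gaussian_kernel_2d(semi_width, sigma):
    """Normalized 2-D Gaussian kernel over a (2*semi_width+1)^2 window."""
    size = 2 * semi_width + 1
    k = [[math.exp(-((i - semi_width)**2 + (j - semi_width)**2) / (2 * sigma**2))
          for j in range(size)] for i in range(size)]
    total = sum(sum(row) for row in k)
    return [[v / total for v in row] for row in k]

kernel = gaussian_kernel_2d(semi_width=7, sigma=2.0)  # sigma is hypothetical
assert len(kernel) == 15                              # 2 * 7 + 1 pixels per side
assert abs(sum(sum(row) for row in kernel) - 1.0) < 1e-12
```

Convolving the averaged β maps with this kernel (with suitable edge handling) yields the smoothed images shown in the figures.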

Results

We show the ways in which the synthetic timeseries evolve for the isotropic (Fig. 1A) and anisotropic (Fig. 1B) cases. Each of the regions in the 3×3 synthetic data is given a different initial value and rate of change ranging between 1 and 2, to initialize dynamics in the absence of external inputs. Exact values of model parameters, boundary conditions, and initial values are listed in the publicly available code. Following model inversion and reduction, we then demonstrate proof of principle by showing that there is higher evidence for the ground truth isotropic data having been created with the isotropic model (Fig. 1C) and conversely for the ground truth anisotropic data having been created with the anisotropic model (Fig. 1D). The results in Figs. 1C and D remain when initial conditions and observation noise are varied by 10%, as commented in the publicly available code. Finally, we show the different degrees of anisotropy in the murine calcium imaging data in rest and task states (Fig. 1E), together with averaged signals (Fig. 1F). Note that the negative posterior β values in Fig. 1E are a result of the specific choice of model parameters, observation noise, and initial conditions used.

Fig. 1.

Fig. 1

Synthetic and experimental data. A Synthetic data generated using the isotropic model in Eq. (19) with β = 0. The colours of the wavefronts correspond to pixels in the grid inset top right. The x and y axes show the amplitudes of the wavefronts multiplied by cos(time) and sin(time), respectively. B Synthetic data generated using the anisotropic model in Eq. (19) with β = −3. The colours of the wavefronts correspond to pixels in the grid inset top right. The x and y axes show the amplitudes of the wavefronts multiplied by cos(time) and sin(time), respectively. C Approximate lower bound log model evidence given by the free energy F following Bayesian model reduction for isotropic i and anisotropic a models using the isotropic ground-truth data. Corresponding probabilities p derived from the log evidence are shown in the inset on the right. D Approximate lower bound log model evidence given by the free energy F following Bayesian model reduction for isotropic i and anisotropic a models using the anisotropic ground-truth data. Corresponding probabilities p derived from the log evidence are shown in the inset on the left. E Left hemisphere of calcium imaging data collected in three mice (first three rows) in rest (left column) and task (right column) states. The final fourth row shows average values across the three mice. The colour bars indicate the value of the β exponent ranging from isotropic i to increasingly anisotropic a pixels. F Timecourses of normalized signal intensity z averaged across all pixels, with the layout corresponding to that in E, i.e., for each of the three mice (first three rows) across the two states (columns), together with signals averaged across mice (last row)

Overall, there is marked variability in the degrees of anisotropy across mice and states. The secondary motor cortices, on the other hand, show consistently high degrees of anisotropy across mice and states. The generation (Fig. 1A and B) and inversion (Fig. 1C and D) of the synthetic data can be fully reproduced with the accompanying MATLAB code, and the murine calcium imaging data in Fig. 1E and F are made available in a public repository.

Discussion

We present a theoretical framework, together with a practical numerical analysis, designed for the estimation of anisotropy in arbitrary timeseries from any connected dynamical system. The basis for this framework rests upon classical Lagrangian field theory applied to scalable (mechanically similar) dynamical systems. Scalability means that a system continues to obey the same equation of motion as it changes size. A dynamical system that grows or shrinks will, in general, produce states that differ from those of the original (unscaled) system; in a scalable system, however, these new states are still described by the same equation of motion that governs the original (unscaled) system.

It stands to reason that the dynamical evolution of the signals propagating in neural systems possesses some form of scalability, given that evolutionary processes add new neuroanatomy to existing structures, i.e., the same basic architecture extends itself whilst maintaining information processing capabilities (Douglas & Martin, 1991; Hilgetag et al., 2000; Markov et al., 2013). Similarly, a neural structure changes scale during development, whilst allowing information processing to remain intact. It is therefore of interest to develop tools that allow for the analysis of scalable systems. We thus pose the following question: given that neural systems possess some form of scalability, what are the consequences thereof, and what further questions present themselves? It is in this spirit that we present a formalism that applies to any scalable dynamical system and that is sufficiently general to accommodate the evolution of any (driven or non-driven) system in any number of spatial dimensions.

With reference to the murine calcium imaging results, we note the following three main findings: a) there is high variability in anisotropy across mice and states, which may be due to the low number of trials and/or to the fact that anisotropy values may vary from trial to trial depending on neural activity; b) there is no clear difference in anisotropy across rest/task states; and c) there are consistently high levels of anisotropy in the secondary motor cortex (MOs) across mice and states, which may reflect the non-homogeneous nature of local networks.

We construct a methodology that is set within the Bayesian model inversion scheme of Dynamic Causal Modelling (DCM). This means that, by using the timeseries measured with any (e.g., neuroimaging) modality, we obtain posterior estimates of spatial stretch factors – one for each spatial dimension. The discrepancy between these stretch factors then directly provides an estimate of the anisotropy at every voxel in the neuroimaging data. We can gain an intuitive understanding of what these measures mean by imagining ourselves standing at a certain node in a neural system. If the system is isotropic then, as we look in every direction – up-down, left-right, and back-forward – we will see no difference in the ways in which the signals evolve in time along these different directions. If, on the other hand, the system is anisotropic, then we will see differences along our lines of sight across the different axes – and the greater these differences, the greater the degree of anisotropy. We demonstrate proof of principle by generating synthetic data using a special case of the generalised Lagrangian that reduces to the wave equation in the perfectly isotropic case. Using Bayesian model inversion and reduction, we show that we can correctly identify which of the two models (anisotropic and isotropic) was used to generate each dataset, thus demonstrating the discriminatory ability of the proposed methodology.

It should be noted that, in addition to the Lagrangian framework presented here, the Martin–Siggia–Rose–De Dominicis–Janssen (MSRDJ) framework is an alternative formalism that allows stochastic differential equations (e.g., the Langevin equation) to be solved probabilistically (Martin et al., 1973). In contrast to our methodology, in which we examine a single solution of a stochastic differential equation, the MSRDJ framework encompasses a probability distribution over possible solutions and the ways in which they evolve in time (Chow & Buice, 2015). Furthermore, whereas resting-state brain dynamics are usually modelled as equilibrium fluctuations about a steady state (i.e., a stable point attractor), here we employ a limit-cycle model. The choice of model type (e.g., fixed point vs. limit cycle) will naturally depend on the nature of the system under consideration.

An empirically determined estimation of anisotropy could be informative in imaging neuroscience, as it facilitates a direct empirical measurement of how sub-structures within the brain grow (under the assumption of scalability). This provides a new way of assessing the ways in which anatomy changes across both an evolutionary timeline, as well as across the lifespan of an individual organism. It is our hope that the theory, methodology, and accompanying tools will allow for these kinds of questions to be addressed by researchers and that these will lead to a clearer understanding of the spatial dependencies, growth, and development of neural systems.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Acknowledgements

E.D.F. was supported by a King’s College London Prize Fellowship; K.J.F. was funded by a Wellcome Principal Research Fellowship (Ref: 088130/Z/09/Z); R.J.M was funded by the Wellcome/EPSRC Centre for Medical Engineering (Ref: WT 203148/Z/16/Z); R.L was funded by the MRC (Ref: MR/R005370/1). The authors would also like to acknowledge support from the Data to Early Diagnosis and Precision Medicine Industrial Strategy Challenge Fund, UK Research and Innovation (UKRI), the National Institute for Health Research (NIHR), the Biomedical Research Centre at South London, the Maudsley NHS Foundation Trust, and King’s College London.

Appendix I

The classical real scalar field of interest in this work depends on position and time, and it turns out to be convenient to treat it as a function of the four-dimensional Cartesian vector

$$r \equiv (t, x, y, z), \tag{20}$$

where $t$ is time, and $x, y, z$ are spatial length, width, and height coordinates, respectively. Individual components of the vector $r$ are written $r_\mu$, with $\mu$ any element of the set $\{t, x, y, z\}$. The field $\varphi$ is a function of $r$:

$$\varphi = \varphi(r). \tag{21}$$

The vector $\partial\varphi$ of partial derivatives of $\varphi$ at $r$ is given by

$$\partial\varphi = (\partial_t\varphi,\, \partial_x\varphi,\, \partial_y\varphi,\, \partial_z\varphi) \tag{22}$$

and its components are written $\partial_\mu\varphi$ or, more simply, $\varphi_\mu$.

The central quantity in Lagrangian field theory is the Lagrangian density $\mathcal{L}$, which is a function of $r$, $\varphi$, and $\partial\varphi$:

$$\mathcal{L} = \mathcal{L}(r, \varphi, \partial\varphi) \tag{23}$$

Note that we have not yet assumed any relationship between the values of $r$, $\varphi$ and $\partial\varphi$; the Lagrangian density can be evaluated for any choice of the 9 real numbers required to specify the scalar field $\varphi$ and the two four-component vectors $r_\mu$ and $\partial_\mu\varphi$.

Given a particular choice of field ‘trajectory’ φr, the standard definition of the action as a functional of φr is:

$$S[\varphi(r)] = \int_\Omega d^4r\, \mathcal{L}(r, \varphi, \partial\varphi) \tag{24}$$

A trajectory in this context consists of the values of the field $\varphi(r) = \varphi(t,x,y,z)$ at all spatial points $(x,y,z)$ and all times $t$ between a chosen initial time $t_i$ and a chosen final time $t_f$. The four-dimensional integration volume $\Omega$ coincides with the region in which the trajectory is defined. Note that we are now assuming that the field $\varphi$ and its derivatives $\varphi_\mu$ are functions of $r$, so $\varphi$ and $\partial\varphi$ are now related to each other. We are also assuming that $\varphi$ and $\varphi_\mu$ tend to zero as the spatial distance $d = \sqrt{x^2 + y^2 + z^2}$ from the origin tends to infinity.

The principle of stationary action tells us that the evolution of φr between the initial and final times, ti and tf, renders the action stationary with respect to all variations

$$\delta\varphi(r) = \delta\varphi(t, x, y, z) \tag{25}$$

that vanish when t=ti and t=tf.

Using Eq. (24), we evaluate the variation of the action as follows:

$$\delta S = \int_\Omega d^4r \left[\frac{\partial\mathcal{L}}{\partial\varphi}\delta\varphi + \frac{\partial\mathcal{L}}{\partial\varphi_\mu}\delta\varphi_\mu\right] = \int_\Omega d^4r \left[\frac{\partial\mathcal{L}}{\partial\varphi}\delta\varphi + \frac{\partial\mathcal{L}}{\partial\varphi_\mu}\partial_\mu\delta\varphi\right] = \int_\Omega d^4r \left[\frac{\partial\mathcal{L}}{\partial\varphi}\delta\varphi + \partial_\mu\!\left(\frac{\partial\mathcal{L}}{\partial\varphi_\mu}\delta\varphi\right) - \partial_\mu\!\left(\frac{\partial\mathcal{L}}{\partial\varphi_\mu}\right)\delta\varphi\right], \tag{26}$$

where we are using the Einstein summation convention according to which terms in which the same suffix appears twice are automatically summed over all four values of that suffix. We next convert the middle term on the second line to a surface integral by using the 4-D version of the divergence theorem to obtain:

$$\delta S = \int_\Omega d^4r \left[\frac{\partial\mathcal{L}}{\partial\varphi} - \partial_\mu\frac{\partial\mathcal{L}}{\partial\varphi_\mu}\right]\delta\varphi + \oint_{\partial\Omega} \frac{\partial\mathcal{L}}{\partial\varphi_\mu}\,\delta\varphi\; dS_\mu, \tag{27}$$

where $\partial\Omega$ is the 3-D boundary surface of the 4-D volume $\Omega$ and $dS_\mu$ is an element of that surface. If the field decays to zero rapidly enough as the spatial distance from the origin tends to infinity, and remembering that $\delta\varphi = 0$ when $t = t_i$ and $t = t_f$, the surface integral vanishes, and we obtain:

$$\delta S = \int_\Omega d^4r \left[\frac{\partial\mathcal{L}}{\partial\varphi} - \partial_\mu\frac{\partial\mathcal{L}}{\partial\varphi_\mu}\right]\delta\varphi = 0 \tag{28}$$

Since $\delta\varphi$ is arbitrary except for the constraint that it vanishes on the boundary $\partial\Omega$, the principle of stationary action $\delta S = 0$ implies that the fields evolve according to the Euler–Lagrange equation:

$$\frac{\partial\mathcal{L}}{\partial\varphi} - \partial_\mu\frac{\partial\mathcal{L}}{\partial\varphi_\mu} = 0, \tag{29}$$

or more explicitly:

$$\frac{\partial\mathcal{L}}{\partial\varphi} - \partial_t\frac{\partial\mathcal{L}}{\partial\dot{\varphi}} - \partial_x\frac{\partial\mathcal{L}}{\partial\varphi_x} - \partial_y\frac{\partial\mathcal{L}}{\partial\varphi_y} - \partial_z\frac{\partial\mathcal{L}}{\partial\varphi_z} = 0 \tag{30}$$

Appendix II

To show that the scaled field $\varphi_s = \lambda\varphi(\lambda^{-\alpha_x} t,\, \lambda^{-\alpha_x} x,\, \lambda^{-(\alpha_x+\beta)} y)$ is also a solution of Eq. (15), we differentiate $\varphi_s$ twice with respect to time:

$$\frac{\partial^{2}\varphi_{s}}{\partial t^{2}} = \lambda^{1-2\alpha_{x}}\,\frac{\partial^{2}\varphi}{\partial t^{2}} \tag{31}$$

as well as twice with respect to the x coordinate:

$$\frac{\partial^{2}\varphi_{s}}{\partial x^{2}} = \lambda^{1-2\alpha_{x}}\,\frac{\partial^{2}\varphi}{\partial x^{2}} \tag{32}$$

and twice with respect to the y coordinate:

$$\frac{\partial^{2}\varphi_{s}}{\partial y^{2}} = \lambda^{1-2(\alpha_{x}+\beta)}\,\frac{\partial^{2}\varphi}{\partial y^{2}} \tag{33}$$

We then raise φ_s to the power of 2β:

$$\varphi_{s}^{2\beta} = \lambda^{2\beta}\varphi^{2\beta} \tag{34}$$

which, together with Eq. (33), means that:

$$\varphi_{s}^{2\beta}\,\frac{\partial^{2}\varphi_{s}}{\partial y^{2}} = \lambda^{2\beta}\lambda^{1-2(\alpha_{x}+\beta)}\,\varphi^{2\beta}\,\frac{\partial^{2}\varphi}{\partial y^{2}} = \lambda^{1-2\alpha_{x}}\,\varphi^{2\beta}\,\frac{\partial^{2}\varphi}{\partial y^{2}} \tag{35}$$

We then differentiate φs once with respect to the y coordinate and take the square:

$$\left(\frac{\partial\varphi_{s}}{\partial y}\right)^{2} = \lambda^{2(1-(\alpha_{x}+\beta))}\left(\frac{\partial\varphi}{\partial y}\right)^{2} \tag{36}$$

which, together with Eq. (34), means that:

$$\beta\varphi_{s}^{2\beta-1}\left(\frac{\partial\varphi_{s}}{\partial y}\right)^{2} = \beta\lambda^{2\beta-1}\lambda^{2(1-(\alpha_{x}+\beta))}\,\varphi^{2\beta-1}\left(\frac{\partial\varphi}{\partial y}\right)^{2} = \beta\lambda^{1-2\alpha_{x}}\,\varphi^{2\beta-1}\left(\frac{\partial\varphi}{\partial y}\right)^{2} \tag{37}$$

Therefore, substituting Eqs. (31), (32), (35) and (37) into the original equation of motion in Eq. (15) gives:

$$\lambda^{2\alpha_{x}-1}\,\frac{\partial^{2}\varphi_{s}}{\partial t^{2}} = \lambda^{2\alpha_{x}-1}\left[\frac{\partial^{2}\varphi_{s}}{\partial x^{2}} + \varphi_{s}^{2\beta}\,\frac{\partial^{2}\varphi_{s}}{\partial y^{2}} + \beta\varphi_{s}^{2\beta-1}\left(\frac{\partial\varphi_{s}}{\partial y}\right)^{2}\right], \tag{38}$$

where the λ^(2α_x−1) factor cancels, leaving an equation with the form of the original equation of motion in Eq. (15):

$$\frac{\partial^{2}\varphi_{s}}{\partial t^{2}} = \frac{\partial^{2}\varphi_{s}}{\partial x^{2}} + \varphi_{s}^{2\beta}\,\frac{\partial^{2}\varphi_{s}}{\partial y^{2}} + \beta\varphi_{s}^{2\beta-1}\left(\frac{\partial\varphi_{s}}{\partial y}\right)^{2} \tag{39}$$

We have therefore shown that if φ(t, x, y) is a solution of Eq. (15), so is the scaled field

$$\varphi_{s}(t, x, y) = \lambda\varphi\!\left(\lambda^{-\alpha_{t}}t, \lambda^{-\alpha_{x}}x, \lambda^{-\alpha_{y}}y\right) = \lambda\varphi\!\left(\lambda^{-\alpha_{x}}t, \lambda^{-\alpha_{x}}x, \lambda^{-(\alpha_{x}+\beta)}y\right),$$

with α_t = α_x and α_y = α_x + β.
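As a consistency check on the derivation above, the exponent bookkeeping can be verified with a few lines of symbolic algebra (a sketch; the exponent expressions are read off Eqs. (31)–(37)):

```python
import sympy as sp

a, b = sp.symbols('alpha_x beta')  # the scaling exponents alpha_x and beta

# Power of lambda picked up by each term of Eq. (15) under the scaling
# phi_s = lambda * phi(lambda**-a * t, lambda**-a * x, lambda**-(a + b) * y):
e_tt = 1 - 2*a                       # Eq. (31): second time derivative
e_xx = 1 - 2*a                       # Eq. (32): second x derivative
e_yy = 2*b + 1 - 2*(a + b)           # Eq. (35): phi**(2b) * second y derivative
e_y2 = (2*b - 1) + 2*(1 - (a + b))   # Eq. (37): beta * phi**(2b-1) * (dphi/dy)**2

# All four exponents coincide, so lambda**(1 - 2*alpha_x) factors out of
# Eq. (38) and cancels, leaving Eq. (39) with the same form as Eq. (15).
assert sp.simplify(e_xx - e_tt) == 0
assert sp.simplify(e_yy - e_tt) == 0
assert sp.simplify(e_y2 - e_tt) == 0
```

The fact that the y-dependent terms must carry the same net power of λ as the time and x terms is precisely what fixes the relation α_y = α_x + β.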

Author contributions

All authors conceived of the analysis and wrote the paper.

Funding

Medical Research Council.

Data availability

The MATLAB code used to produce Fig. 1A-D is made available at the following public repository: github.com/allavailablepubliccode/anisotropy.

All pre-processed murine calcium imaging data used to produce Fig. 1E and F are made available in the following public repository: figshare.com/articles/Murine_calcium_imaging_data/12012852.

Declarations

Conflict of interest

The authors declare no conflict of interest.

Footnotes

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.




Articles from Journal of Computational Neuroscience are provided here courtesy of Springer
