Proceedings. Mathematical, Physical, and Engineering Sciences
. 2018 Nov 21;474(2219):20180527. doi: 10.1098/rspa.2018.0527

Optimizing information transmission rates drives brain gyrification

S Heyden 1,, M Ortiz 1,2
PMCID: PMC6283902

Abstract

We investigate the functional optimality of the geometry of the adult human cerebral cortex. Unlike most competing models, we postulate that cerebral cortex formation is driven by the objective of maximizing the total information transmission rate. Starting from a random path model, we show that this optimization problem is related to the Steklov eigenvalue problem. Combining realistic brain geometries with the finite-element method, we calculate the underlying Steklov eigenvalues and eigenfunctions. By comparison to a convex hull approximation, we show that the adult human brain geometry indeed reduces the Steklov eigenvalue spectrum and thus increases the rate at which information is exchanged between points on the cerebral cortex. With a view to possible clinical applications, the leading Steklov eigenfunctions and the resulting induced magnetic fields are computed and reported.

Keywords: cerebral cortex formation, Steklov eigenvalue problem, functional optimality

1. Introduction

The correlation between brain structure and brain function constitutes an area of long-standing and topical research with many questions still unanswered, primarily due to the complex underlying phenomena arising at different scales. Solutions have been sought at the intersection of various scientific disciplines, including investigations of the structural, functional and effective connectivity at the cell level as well as studies focusing on the overall brain architecture at the continuum level.

The diversity of investigations is grounded in the scale-spanning nature of the human brain: at the macroscopic scale (usually treated by methods of continuum mechanics), the average male human brain volume is about 1260 cm3 [1]. The complex convoluted structure of its cerebral cortex consists of sulci (inward folds) and gyri (outward folds), with an average cortical thickness of 2.7 mm [2]. Most of the cerebral surface of the human brain is composed of association cortices, which receive input from the primary and secondary motor and sensory cortices, the thalamus and the brainstem [3]. At the neuronal level, the average adult male human brain comprises an estimated 86 billion neurons [4], which are situated within the grey matter. Connections between neurons through the underlying white matter are facilitated via axons. Junctions along axons, called synapses, are the locus at which cells connect with each other. Owing to the profuse degree of axonal branching, a total of approximately 0.15 quadrillion synapses are found within the human connectome [4].

Graph theory represents the main investigation tool for structural and functional connections. Here, structural connections remain of undirected character, and diffusion-weighted imaging is applied as a common investigation tool. A tensor-based morphometry study finds the structural variability of the association cortex to be less influenced by genetic factors [5], which permits postnatal environmental factors to diversify neural connections. Evidence further suggests that a higher degree of variability in structural connections is found in neural systems subserving higher-order association and integration processes [6].

In contrast to structural connectivity, functional connectivity rests upon the assumption of abundant structural connections between neurons and examines the flux of information between remote sites, manifested as the statistical dependence of localized neural activity on a distant input signal. With the aid of resting state functional MRI scans, variability in functional connectivity is found to be rooted in evolutionary cortical expansion. Furthermore, variability in functional connectivity is shown to be uncorrelated with cortical thickness but related to variability in sulcal depth, suggesting that functional variability is associated with cortical folding [6].

Finally, effective connectivity, which is analysed at the neuronal level of biophysics, studies brain functionality within a model-dependent framework. Within this framework, signal patterns arising from specific input scenarios are scrutinized. Effective connectivities are therefore context-sensitive and of directed character [7].

At the continuum level, the specific surface morphology of the cerebral cortex and its degree of folding (gyrification) is found to scale universally with surface area and thickness [8]. Moreover, variability in gyrification patterns is shown to be independent of brain size [9–11]. Using surface-based analyses, the structural variability of sulci and gyri is revealed to be higher in association areas than in the motor cortex [12], which supports the hypothesis of a higher degree of structural variability in higher-order association processes. In addition to scaling relations, special attention has been devoted to the cerebral folding mechanism itself within the framework of biomechanics. Here, the differential growth hypothesis represents a prominent modelling approach, based on which differential growth between infragranular and supragranular layers during brain development is assumed to induce cortical folding [11,13]. Tension along axons connecting neurons within the grey matter may furthermore act as a possible secondary mechanism [14].

In a recent work published by the authors [15], graph theory is passed to the continuum limit to study the functional optimality of cerebral cortex formation. Thereby, the investigation is tailored towards the human brain based on the working hypothesis that the human brain has evolved under the evolutionary pressure of maximizing its rate of information transmission (as one possible local minimum within the spectrum of different species evolving under different evolutionary pressures). The resulting mathematical model assumes folding patterns to be governed by a maximization of the overall data transfer rate and thus combines cerebral cortex shape at the continuum scale with the underlying network topology. Here, we employ the model to compute the Steklov eigenvalues and eigenfunctions of a realistic cerebral cortex geometry as well as those of a smooth convex hull approximation. We show that the gyrification pattern of the human brain indeed reduces Steklov eigenvalues and thus increases the overall data transmission rate.

2. Mathematical model

This section summarizes the mathematical framework on both the discrete as well as continuum scale as first presented in [15]. Here, we review the theoretical fundamentals to the degree necessary for subsequent discussions. Based on a random path model, we relate the transmission rate between two points on the cerebral cortex to the discrete Laplacian. Passing to the continuum limit, the classical Steklov eigenvalue problem is recovered from a maximization of the overall data transfer rate for a prescribed activity pattern on the boundary.

(a). Graph model

We start with a discretization of the human connectome as a regular lattice contained within the domain Ω. Here, every node in the interior represents a synapse, every node on the boundary ∂Ω is a neuron and every bond of the lattice represents an axon as illustrated in figure 1. The probability for information injected into the network at x to be transferred to y along a path Γ = {x, z1, z2, …, zk−1, y} is that of a simple random walk

$$p(\Gamma)=p(x,z_1)\,p(z_1,z_2)\cdots p(z_{k-1},y), \tag{2.1}$$

with

$$p(x,y)=\begin{cases}\dfrac{1}{6}, & \text{if } x,y \text{ neighbours},\\[4pt] 0, & \text{otherwise}.\end{cases} \tag{2.2}$$

Information transmission between neurons within the cortex is hence only accounted for in the case of neighbouring neurons (since random walks terminate once a node on the boundary is reached). All remaining random walks between x and y represent pathways of information injected at a node x within the cortex that is subsequently transmitted to a node y within the cortex via axons within the white matter. This path representation may be used to illustrate the connection between random walks and harmonic functions satisfying Laplace's equation. Here, the main idea is to start from a formulation of the discrete Poisson equation in terms of a one-step shift or averaging operator u → Pu. In the case at hand, the one-step shift operator P merely calculates the average of the field variable u(x) over its six neighbours within the three-dimensional lattice. Under these conditions, consideration of all possible random walks Γ between x and y gives

$$G(x,y)=\tfrac{1}{2}\,(I-P)^{-1}=\tfrac{1}{2}\left(\delta(x-y)+\sum_{\Gamma\in\mathcal{P}(x,y)}p(\Gamma)\right), \tag{2.3}$$

where δ(x) is the discrete Dirac function and $\mathcal{P}(x,y)$ denotes the set of all paths joining x to y. This path-sum representation (2.3) shows that the free-space Green's function G(x, y) of the discrete Laplacian is the sum of all contributions p(Γ) arising from paths joining x to y in the lattice. Since p(Γ) represents the probability of a signal starting at x reaching y via the path $\Gamma\in\mathcal{P}(x,y)$, it follows that G(x, y) is the total rate at which information injected into the network at x is transferred to y.
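The path-sum identity (2.3) can be checked numerically on a toy lattice. The following sketch uses a one-dimensional chain with absorbing endpoints (so the hop probability is 1/2 rather than the 1/6 of the three-dimensional lattice in (2.2)) and verifies that the matrix inverse agrees with the truncated sum over all random paths; the chain length and truncation order are illustrative choices, not taken from the paper.

```python
import numpy as np

# Toy 1-D version of the path-sum identity (2.3): on a chain of n interior
# nodes with absorbing endpoints, a walker moves to either neighbour with
# probability 1/2 (the 1/6 in (2.2) is the 3-D lattice analogue).  The
# entries of (I - P)^{-1} sum the probabilities p(Gamma) of all interior
# paths joining x to y, i.e. the discrete Green's function up to the 1/2
# prefactor appearing in (2.3).
n = 7
P = np.zeros((n, n))
for i in range(n - 1):
    P[i, i + 1] = P[i + 1, i] = 0.5  # hops that stay in the interior

G = np.linalg.inv(np.eye(n) - P)

# The truncated Neumann series sum_k P^k converges to the same matrix,
# mirroring the sum over all random paths Gamma in P(x, y): the k-th power
# of P collects the probabilities of all k-step paths.
S = sum(np.linalg.matrix_power(P, k) for k in range(400))
print(np.max(np.abs(G - S)))  # negligibly small truncation error
```

The agreement of the two matrices is exactly the statement that the Green's function aggregates the transmission probabilities of all paths.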

Figure 1.

From left to right: lattice representation of the human connectome, random path contributions to the graph model and activity pattern h(x) as applied in the continuum limit (adapted from [16]). (Online version in colour.)

(b). Continuum limit

Since the size of an individual neuron is much smaller than that of the grey matter, we expect the continuum limit to supply a valid approximation of the preceding graph model. In this continuum counterpart, signals are exchanged between points on the cortex ∂Ω in the form of an activation pattern $h\in H^{-1/2}(\partial\Omega)$ as shown in figure 1, which is assumed to satisfy a zero-sum condition (the influx of information has to equal its corresponding outflux) as well as a normalization constraint (more specifically, $\|h\|_{L^2(\partial\Omega)}^2=\int_{\partial\Omega}h^2(x)\,\mathrm{d}S(x)=1$). The corresponding transmission rate on the continuum level then results in

$$R(h)=\sup_{u\in H^1(\Omega)}\left(\int_{\partial\Omega}h(x)\,u(x)\,\mathrm{d}S(x)-\int_{\Omega}\tfrac{1}{2}\,|\nabla u(x)|^2\,\mathrm{d}x\right), \tag{2.4}$$

for which a Lagrangian combining the maximum transmission rate objective (2.4) as well as the zero-sum and normalization constraints may be defined. As shown in [15], maximization of the Lagrangian with respect to the field variables $u\in H^1(\Omega)$ and $h\in H^{-1/2}(\partial\Omega)$ leads to a reduced functional in terms of the activation potential v(x). The corresponding Euler–Lagrange equations, as obtained from local maximization with respect to $v\in H^1(\Omega)$, result in the classical Steklov eigenvalue problem [17]
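The reduction can be sketched in two steps; the following is our reading of the derivation, not reproduced verbatim from [15]:

```latex
% Step 1: for fixed h, the supremum in (2.4) is attained at the harmonic
% function u with Neumann data h, so that
\Delta u = 0 \ \text{in } \Omega, \qquad
\frac{\partial u}{\partial n} = h \ \text{on } \partial\Omega, \qquad
R(h) = \frac{1}{2}\int_{\partial\Omega} h\, u \, \mathrm{d}S .
% Step 2: maximizing R(h) subject to \|h\|_{L^2(\partial\Omega)} = 1 with a
% Lagrange multiplier \lambda gives the stationarity condition
u = 2\lambda\, h \ \text{on } \partial\Omega ,
% so that, writing v = u and \sigma = 1/(2\lambda),
\frac{\partial v}{\partial n} = \sigma v \ \text{on } \partial\Omega ,
% which is the boundary condition of the Steklov problem (2.5).
```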

$$\Delta v=0\quad\text{in }\Omega \tag{2.5a}$$

and

$$\frac{\partial v}{\partial n}=\sigma v\quad\text{on }\partial\Omega, \tag{2.5b}$$

where

$$\sigma=\frac{1}{\|v\|_{L^2(\partial\Omega)}}, \tag{2.6}$$

and n is the outward unit normal.

It is well known that, under certain regularity conditions, the Steklov eigenvalue problem has a discrete spectrum (cf. [18]). The inverses of the Steklov eigenvalues, 1/σn, then represent the transmission rates of the network and the corresponding eigenfunctions vn give the transmission modes. Steklov eigenvalues furthermore admit a variational characterization (cf. [18–20]), which may be used to deduce the beneficial effect of cuts (or sulci) on the maximization of the overall information transmission rate. The Steklov spectrum {σn} of the domain Ω can thus be compared to that of a slit domain Ω′ = Ω∖Γ, where Γ is a collection of cuts performed on the boundary ∂Ω as illustrated in figure 2 (note that H1(Ω)⊂H1(Ω′)). Comparing the variational representations of the Steklov spectra of the smooth and slit domains shows that the eigenvalues σ′n of the slit domain are consistently no greater than the eigenvalues σn of the smooth domain. Since the inverse Steklov eigenvalues 1/σn correspond to the discrete spectrum of transmission rates that the brain is capable of, the introduction of cuts Γ hence leads to an increase in the overall information transmission rate. We furthermore note that the decrease in the Steklov eigenvalue spectrum depends monotonically on both the number as well as the depth of cuts, showing the functional benefit of sulci to brain function.
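The variational characterization invoked here can be written out explicitly; the following is the standard min–max form (cf. the survey [18]), with the caveat that for the slit domain the boundary integral runs over ∂Ω′ including the cut faces:

```latex
% Min-max characterization of the Steklov spectrum via the Rayleigh quotient:
Q_{\Omega}(v) \;=\; \frac{\displaystyle\int_{\Omega} |\nabla v|^{2}\,\mathrm{d}x}
                         {\displaystyle\int_{\partial\Omega} v^{2}\,\mathrm{d}S},
\qquad
\sigma_{n}(\Omega) \;=\; \min_{\substack{E \subset H^{1}(\Omega) \\ \dim E \,=\, n+1}}
\;\max_{0 \neq v \in E}\; Q_{\Omega}(v).
% Since H^{1}(\Omega) \subset H^{1}(\Omega') for the slit domain
% \Omega' = \Omega \setminus \Gamma, every admissible subspace for \Omega is
% also admissible for \Omega', so the minimum over the larger family can only
% decrease: \sigma_{n}(\Omega') \le \sigma_{n}(\Omega).
```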

Figure 2.

Schematic showing the introduction of a set of cuts Γ (right) within a smooth domain (left). Note that the function space Vn of the smooth domain Ω is a subspace of the corresponding space V′n of the slit domain Ω′.

3. Numerical experiments

In this section, we elucidate the benefit of sulci to brain function by recourse to numerical experiments involving an actual human brain geometry. We specifically present fully three-dimensional computations of the lowest eigenvalues and eigenmodes of both skull and grey matter geometries by recourse to the finite-element method.

(a). Modelling framework

For ease of computation, we begin by reformulating equation (2.5) as an eigenvalue problem for the Laplace operator. Therein, mass density ρ is concentrated on the boundary ∂Ω and Neumann boundary conditions are applied according to

$$-\Delta v=\sigma\rho v\quad\text{in }\Omega \tag{3.1}$$

and

$$\frac{\partial v}{\partial n}=0\quad\text{on }\partial\Omega. \tag{3.2}$$

The eigenvalues of this modified problem are known to coincide with those of the classical Steklov eigenvalue problem [19,21].

The lowest eigenvalues of both grey matter and skull geometries are calculated by recourse to inverse iteration. However, special care needs to be exercised when computing the lowest non-zero eigenvalues, since the removal of rigid body modes by fixing nodes alters the eigenvalue spectrum. A simple illustration of this property is obtained by considering a system of two unit masses joined by a spring of stiffness k = 1. Solving the eigenvalue problem arising from the balance of linear momentum then gives $\omega_{\mathrm{free}}=\sqrt{2k/m}$ (omitting the rigid body mode), whereas fixing one node of the system results in $\omega_{\mathrm{fixed}}=\sqrt{k/m}$. Here, we resort to a shifting inverse iteration procedure in order to find the 10 lowest Steklov eigenvalues. The procedure is validated by computing the fundamental Steklov eigenvalue of a unit sphere, for which σ1 = 1 is recovered within an accuracy of 0.23%. Initial guesses of the shifting inverse iteration procedure start at 0 and are increased in increments of 0.25 × 10−3 until the lowest 10 Steklov eigenvalues are found.
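The two-mass illustration above is small enough to verify directly; the following sketch assembles both stiffness matrices and confirms that fixing a node shifts the surviving frequency from √(2k/m) to √(k/m):

```python
import numpy as np

# Two unit masses joined by a spring of stiffness k = 1, as in the text.
# Free-free system: eigenvalues of K/m are 0 (rigid body mode) and 2k/m.
k, m = 1.0, 1.0
K_free = k * np.array([[1.0, -1.0], [-1.0, 1.0]])
lam = np.sort(np.linalg.eigvalsh(K_free / m))
w_free = np.sqrt(np.clip(lam, 0.0, None))   # clip guards tiny negative round-off
print(w_free)                               # [0, sqrt(2)]: rigid mode + sqrt(2k/m)

# Fixing one mass removes the rigid body mode but also shifts the remaining
# frequency: the condensed 1x1 system gives sqrt(k/m), not sqrt(2k/m).
K_fixed = np.array([[k]])
w_fixed = np.sqrt(np.linalg.eigvalsh(K_fixed / m))
print(w_fixed)                              # [1.0] = sqrt(k/m)
```

This is exactly why the text resorts to shifting inverse iteration rather than simply fixing nodes to eliminate rigid body modes.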

We resort to the Colin27 adult brain atlas (v. 2) [22] to extract surface meshes of both the skull and the grey matter. Volume meshes are subsequently constructed using the Siemens PLM Software Femap with NX Nastran, resulting in meshes comprising 104 269 and 668 155 linear tetrahedra for the skull and grey matter geometries, respectively. As a closer approximation of the convex hull of the grey matter, a third volume mesh is generated by scaling the skull geometry by factors of 0.79 in x, 0.85 in y and 0.84 in z, corresponding to the grey matter dimensions. For comparison purposes, we furthermore construct a spheroidal domain of radius 6.7 cm, which matches the average male human brain volume of 1260 cm3 [1]. Membrane elements of uniform areal mass density are employed to model the mass concentration on the boundary, and interior degrees of freedom are subsequently eliminated using static condensation.
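The static condensation step can be illustrated on a much smaller discrete analogue. The sketch below, a simplification not taken from the paper, condenses the interior nodes of a grid-graph Laplacian over a square to obtain the discrete Dirichlet-to-Neumann matrix, whose eigenvalues play the role of the Steklov eigenvalues (grid-spacing and boundary-mass scaling are omitted):

```python
import numpy as np

# Discrete Steklov sketch via static condensation: for the combinatorial
# Laplacian L of an n x n grid graph, eliminate the interior block to get
# the Schur complement S = L_bb - L_bi L_ii^{-1} L_ib (the discrete
# Dirichlet-to-Neumann map).  Its eigenvalues are the discrete analogues
# of the Steklov eigenvalues sigma_n.
n = 12
idx = {(i, j): i * n + j for i in range(n) for j in range(n)}
L = np.zeros((n * n, n * n))
for (i, j), a in idx.items():
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        b = idx.get((i + di, j + dj))
        if b is not None:
            L[a, a] += 1.0
            L[a, b] -= 1.0

boundary = [a for (i, j), a in idx.items() if i in (0, n - 1) or j in (0, n - 1)]
interior = [a for a in range(n * n) if a not in boundary]
Lbb = L[np.ix_(boundary, boundary)]
Lbi = L[np.ix_(boundary, interior)]
Lii = L[np.ix_(interior, interior)]
S = Lbb - Lbi @ np.linalg.solve(Lii, Lbi.T)  # static condensation

sig = np.sort(np.linalg.eigvalsh(S))
print(sig[0], sig[1])  # sigma_0 ~ 0 (constant mode), sigma_1 > 0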

(b). Results

As can be seen in table 1, the fundamental eigenvalue of the Steklov spectrum drops by 40.71% in going from the skull to the grey matter geometry. A further investigation of the inherent size effect (the skull represents an enlarged convex hull approximation of the grey matter) shows that an additional decrease in the lowest eigenvalue by 13.58% is obtained for the scaled skull geometry. We note that decreasing eigenvalues for increasing domain size (and vice versa) are expected analytically from solutions to the Steklov problem, e.g. the eigenvalues σi = i/r of spheroidal domains of radius r [18]. For comparison purposes, table 1 further lists the fundamental Steklov eigenvalue of a spheroidal domain of volume 1260 cm3, which matches the average male human brain size. An increase in σ1 by 97.20% can be noted in going from the grey matter geometry to a spheroidal domain. Computations hence recover the analytical result of an increase in transmission rates via the operation of cutting.

Table 1.

Fundamental Steklov eigenvalue for the grey matter and its convex hull approximation (scaled and non-scaled versions) in comparison to a spheroidal domain.

     grey matter   skull (not scaled)   skull (scaled)   spheroidal domain
σ1   0.00415       0.00700              0.00810          0.148442

Figure 3 shows the eigenfunctions corresponding to the lowest five non-zero eigenvalues for the grey matter geometry. Highest concentrations in activation potential v(x) originating from the centre can be observed for a value of i = 4, whereas a medium increase in v(x) located in the frontal lobe is obtained for i = 2. The remaining eigenfunctions are characterized by either a uniform activation potential (for i = 1 and i = 3) or a checkerboard pattern (for i = 5). We furthermore note that the gradient of the activation potential gives the direction of the local flow of information, which in turn defines the preferred modes of transmission.

Figure 3.

Eigenmodes of the grey matter corresponding to the lowest five Steklov eigenvalues. Colour coding corresponds to level contours of the activation potential v(x). (Online version in colour.)

The 10 lowest Steklov eigenvalues (including the rigid body mode) of the grey matter as well as the scaled skull geometry are plotted in figure 4. It bears mentioning that figures 3 and 4 omit multiplicities of eigenvalues for visualization purposes. As can be seen in figure 4, all eigenvalues of the grey matter decrease in comparison to corresponding eigenvalues of the scaled skull geometry. Furthermore, an eigenvalue span of 0.01507 is obtained for the grey matter. We note that the discreteness of the Steklov spectrum implies that optimal transmission modes of information are quantized.

Figure 4.

Lowest 10 Steklov eigenvalues σi (including the rigid body mode) of the grey matter and the scaled skull geometry as computed from a shifting inverse iteration procedure. (Online version in colour.)

From the computation of the Steklov eigenfunctions vi, we can furthermore use the Biot–Savart law

$$\mathbf{B}_i(\mathbf{r})=\frac{\mu_0}{4\pi}\int_{\Omega}\frac{\nabla v_i(\mathbf{r}')\times(\mathbf{r}-\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|^3}\,\mathrm{d}\Omega', \tag{3.3}$$

in order to calculate the induced magnetic field on the grey matter surface. Here, μ0 is the magnetic constant and integration is performed over the entire volume Ω of the brain. Figure 5 shows the magnetic field induced by the second eigenfunction v2 using a normalization of μ0 = 1. The magnetic field contains vortex patterns within the right parietal lobe, as highlighted by the magnified top view. We recall that right parietal lobe activity is linked to functions such as integrating sensory information from different body parts as well as the comprehension of numbers and their relation to each other [23].

Figure 5.

Magnetic field as induced by the second eigenfunction v2 in three different views: transverse (left), frontal (middle) and sagittal (right). Arrows depict the orientation of the magnetic field on the cortical surface, the magnified top view highlights vortex formation within the right parietal lobe. (Online version in colour.)

4. Conclusion and discussion

In this work, we have shown that the cerebral cortex of an adult human brain geometry enhances functional optimality. Based on an analytical model that relates transmission of information along lattice paths to the classical Steklov eigenvalue problem in the continuum limit, inverse Steklov eigenvalues are identified as information transmission rates. As suggested by analytical results comparing the Steklov spectra of slit domains, numerical experiments based on the finite-element method reveal a decrease in Steklov eigenvalues for gyrified brain domains compared to those of a smooth convex hull approximation of the grey matter. It bears mentioning that the decrease in the Steklov eigenvalues depends monotonically on both the number as well as the depth of cuts.

It is worth noting that the presented working hypothesis of the functional optimality of the sulcus pattern is proposed as a dominant (albeit not the only) evolutionary driver. A more realistic picture, to be elucidated in future studies, calls for the inclusion of other influencing factors, such as the objective of maximizing the number of modules [24]. Furthermore, an important implication of the random walk representation presented in §2a is the assumption of isotropy with regard to the underlying network along which information is transmitted. Hence, further development of the presented model to account for experimentally observed anisotropies of network topologies within the brain constitutes a worthwhile extension. To further examine gyrification as a shape optimization problem, future studies including a penalization of the cortical area are needed. Here, numerical tools such as gradient flow or boundary element methods could be employed to optimize brain shape based on the objective of maximizing the overall data transmission rate. A suitable objective function introducing a penalty term on the cortical surface area is

$$F(\partial\Omega)=\sigma_1(\Omega)+A\,|\partial\Omega|^{\alpha}, \tag{4.1}$$

where A is a material constant, α is the scaling exponent and |∂Ω| is the surface area. Closely related shape optimization problems and the corresponding necessary and sufficient conditions for the existence of minimizers may be found in [25], whereby constant-area constraints are employed. The introduction of a cost associated with cerebral surface area is suggested by observed scaling relations of cortical folding, such as the superlinear power law found in non-cetacean gyrencephalic species [8]. Based on this superlinear scaling, scaling relations of the form

$$|\partial\Omega|=\max\{|\partial K|,\,B\,|\partial K|^{\beta}\}, \tag{4.2}$$

where |∂K| denotes the total exposed cortical area, encode the observation of increasingly folded brains for increasing surface area. Powers of such scaling laws may be related to optimal scaling parameters as first developed for energy-minimizing branched microstructures [26–29]. In general, optimal scaling laws take the form

$$c\,|\partial K|^{\epsilon}\,|\partial\Omega|^{\delta}\le\frac{1}{\sigma_1(\Omega)}\le C\,|\partial K|^{\epsilon}\,|\partial\Omega|^{\delta}, \tag{4.3}$$

with scaling exponents ϵ and δ and constants C > c > 0. As shown in [15], dimensional analysis and the assumption of tight bounds gives the scaling exponent β in terms of the cost exponent α as well as optimal-scaling exponents ϵ and δ as

$$\beta=\frac{\epsilon}{\alpha+\delta}. \tag{4.4}$$

In addition to the calculation of eigenvalues, the eigenmodes corresponding to the lowest five Steklov eigenvalues have been computed with a view to elucidating characteristic concentrations in activation potential. These results suggest a correlation between characteristic concentrations in activation potential and specific brain functions as visualized using, e.g., functional magnetic resonance imaging (fMRI). Future studies in this area could thus shed light on the functional optimality of individual brain functions, whereby brain functions matching lower eigenmodes correspond to higher information transmission rates. Such optimal modes of transmission could play a central role in specific processing scenarios such as subconscious decision-making, which has been shown to precede conscious awareness [30]. Unconscious thought theory postulates that unconscious thought outperforms conscious thought in complex tasks involving many variables [31], which again suggests a connection to functional optimality.

From the computation of Steklov eigenfunctions, we have furthermore shown how the resulting induced magnetic field may be calculated using the Biot–Savart law in support of clinical applications, e.g. transcranial magnetic stimulation (TMS). TMS has been studied as a promising tool for treating neurological diseases such as major depressive disorder [32], epilepsy [33] or motor disability after stroke [34]. In this context, patient-specific meshes could be generated from neuroanatomical scans, from which optimal modes of information transmission and their corresponding induced magnetic fields may be computed. With the induced magnetic field pattern at hand, the potential clinical benefit of exciting the fundamental modes of transmission of the brain via distributed solenoids placed along the cortical surface suggests itself as a worthwhile focus of further study.

Data accessibility

This article has no additional data.

Authors' contributions

Both authors conceived of and designed this study and drafted the manuscript. Both authors gave final approval for publication.

Competing interests

We declare we have no competing interests.

Funding

S.H. gratefully acknowledges support from the Alexander von Humboldt Stiftung through a Research Fellowship for Postdoctoral Researchers.

References

  • 1.Cosgrove KP, Mazure CM, Staley JK. 2007. Evolving knowledge of sex differences in brain structure, function, and chemistry. Biol. Psychiatry 62, 847–855. ( 10.1016/j.biopsych.2007.03.001) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 2.Pakkenberg B, Gundersen HJG. 1997. Neocortical neuron number in humans: effect of sex and age. J. Comp. Neurol. 384, 312–320. ( 10.1002/(SICI)1096-9861(19970728)384:2%3C312::AID-CNE10%3E3.0.CO;2-K) [DOI] [PubMed] [Google Scholar]
  • 3.Purves D, Augustine GJ, Fitzpatrick D, Katz LC, LaMantia A-S, McNamara JO, Mark Williams S. 2001. Neuroscience, 2nd edn Sunderland, MA: Sinauer Associates. [Google Scholar]
  • 4.Hormuzdi SG, Filippov M, Mitropoulou G, Monyer H, Bruzzone R. 2004. Electrical synapses: a dynamic signaling system that shapes the activity of neural networks. Biochim. Biophys. Acta 1662, 113–137. ( 10.1016/j.bbamem.2003.10.023) [DOI] [PubMed] [Google Scholar]
  • 5.Brun C. et al. 2009. Mapping the regional influence of genetics on brain structure variability—a tensor-based morphometry study. Neuroimage 48, 37–49. ( 10.1016/j.neuroimage.2009.05.022) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 6.Mueller S, Wang D, Fox MD, Yeo BT, Sepulcre J, Sabuncu MR, Shafee R, Lu J, Liu H. 2013. Individual variability in functional connectivity architecture of the human brain. Neuron 77, 586–595. ( 10.1016/j.neuron.2012.12.028) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 7.Park HJ, Friston K. 2013. Structural and functional brain networks: from connections to cognition. Science 342, 1238411 ( 10.1126/science.1238411) [DOI] [PubMed] [Google Scholar]
  • 8.Mota B, Herculano-Houzel S. 2015. Cortical folding scales universally with surface area and thickness, not number of neurons. Science 349, 74–77. ( 10.1126/science.aaa9101) [DOI] [PubMed] [Google Scholar]
  • 9.Rogers J. et al. 2010. On the genetic architecture of cortical folding and brain volume in primates. Neuroimage 53, 1103–1108. ( 10.1016/j.neuroimage.2010.02.020) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10.Zilles K, Armstrong E, Schleicher A, Kretschmann H. 1988. The human pattern of gyrification in the cerebral cortex. Anat. Embryol. 179, 173–179. ( 10.1007/BF00304699) [DOI] [PubMed] [Google Scholar]
  • 11.Zilles K, Palomero-Gallagher N, Amunts K. 2013. Development of cortical folding during evolution and ontogeny. Trends Neurosci. 36, 275–284. ( 10.1016/j.tins.2013.01.006) [DOI] [PubMed] [Google Scholar]
  • 12.Hill J, Dierker D, Neil J, Inder T, Knutsen A, Harwell J, Coalson T, Van Essen D. 2010. A surface-based analysis of hemispheric asymmetries and folding of cerebral cortex in term-born human infants. J. Neurosci. 30, 2268–2276. ( 10.1523/JNEUROSCI.4682-09.2010) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 13.Richman DP, Steward RM, Hutchinson JW, Caviness VS. 1975. Mechanical model of brain convolutional development. Science 189, 18–21. ( 10.1126/science.1135626) [DOI] [PubMed] [Google Scholar]
  • 14.Garcia KE. et al. 2018. Dynamic patterns of cortical expansion during folding of the preterm human brain. Proc. Natl Acad. Sci. USA 115, 3156–3161. ( 10.1073/pnas.1715451115) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 15.Heyden S, Ortiz M. 2018. Functional optimality of the sulcus pattern of the human brain. Math. Med. Biol. J. IMA dqy007 ( 10.1093/imammb/dqy007) [DOI] [PubMed] [Google Scholar]
  • 16.Wikimedia Commons. 2006. Head anatomy lateral view with skull. https://anatomyofdiagram.com/the-human-skull-head-anatomy/the-human-skull-head-anatomy-filelateral-head-skull-wikimedia-commons.
  • 17.Steklov MW. 1902. Sur les problèmes fondamentaux de la physique mathématique. Ann. Sci. Ecole Norm. Sup. 19, 455–490. ( 10.24033/asens.516) [DOI] [Google Scholar]
  • 18.Girouard A, Polterovich I. 2017. Spectral geometry of the Steklov problem (survey article). J. Spectral Theory 7, 321–359. ( 10.4171/JST/164) [DOI] [Google Scholar]
  • 19.Lamberti PD, Provenzano L. 2013. Viewing the Steklov eigenvalues of the Laplace operator as critical Neumann eigenvalues. In Current trends in analysis and its applications, pp. 171–178. Basel, Switzerland: Birkhäuser.
  • 20.Bogosel B, Bucur D, Giacomini A. 2017. Optimal shapes maximizing the Steklov eigenvalues. SIAM J. Math. Anal. 49, 1645–1680. ( 10.1137/16M1075260) [DOI] [Google Scholar]
  • 21.Arrieta JM, Jiminez-Casas A, Rodriguez-Bernal A. 2008. Flux terms and Robin boundary conditions as limit of reactions and potentials concentrating in the boundary. Rev. Mat. Iberoam. 24, 183–211. ( 10.4171/RMI/533) [DOI] [Google Scholar]
  • 22.Fang Q. 2010. Mesh-based Monte Carlo method using fast ray-tracing in Plucker coordinates. Biomed. Opt. Express 1, 165–175. ( 10.1364/BOE.1.000165) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 23.Blakemore SJ, Frith U. 2005. The learning brain. Oxford, UK: Blackwell Publishing. [Google Scholar]
  • 24.Jerison H. 1973. Evolution of the brain and intelligence. New York, NY: Academic Press. [Google Scholar]
  • 25.Bucur D, Buttazzo G. 2005. Variational methods in shape optimization problems. In Progress in nonlinear differential equations and their applications. Boston, MA: Birkhauser.
  • 26.Kohn RV, Müller S. 1992. Branching of twins near an austenite-twinned-martensite interface. Phil. Mag. A 66, 697–715. ( 10.1080/01418619208201585) [DOI] [Google Scholar]
  • 27.Kohn RV, Müller S. 1994. Surface energy and microstructure in coherent phase transitions. Comm. Pure Appl. Math. 47, 405–435. ( 10.1002/cpa.3160470402) [DOI] [Google Scholar]
  • 28.Choksi R, Kohn RV, Otto F. 1999. Domain branching in uniaxial ferromagnets: a scaling law for the minimum energy. Comm. Math. Phys. 201, 61–79. ( 10.1007/s002200050549) [DOI] [Google Scholar]
  • 29.Conti S. 2000. Branched microstructures: scaling and asymptotic self-similarity. Comm. Pure Appl. Math. 53, 1448–1474. ( 10.1002/1097-0312(200011)53:11%3C1448::AID-CPA6%3E3.0.CO;2-C) [DOI] [Google Scholar]
  • 30.Bargh J, Morsella E. 2008. The unconscious mind. Perspect. Psychol. Sci. 3, 73–79. ( 10.1111/j.1745-6916.2008.00064.x) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 31.Dijksterhuis A. 2004. Think different: the merits of unconscious thought in preference development and decision making. J. Pers. Soc. Psychol. 87, 586–598. ( 10.1037/0022-3514.87.5.586) [DOI] [PubMed] [Google Scholar]
  • 32.Bersani FS. 2013. Deep transcranial magnetic stimulation as a treatment for psychiatric disorders: a comprehensive review. Eur. Psychiatry 28, 30–39. ( 10.1016/j.eurpsy.2012.02.006) [DOI] [PubMed] [Google Scholar]
  • 33.Santos Pereira L, Teixeira Müller V, da Mota Gomes M, Rotenberg A, Fregni F. 2016. Safety of repetitive transcranial magnetic stimulation in patients with epilepsy: a systematic review. Epilepsy Behav. 57, 167–176. ( 10.1016/j.yebeh.2016.01.015) [DOI] [PubMed] [Google Scholar]
  • 34.Corti M, Patten C, Triggs W. 2012. Repetitive transcranial magnetic stimulation of motor cortex after stroke: a focused review. Amer. J. Phys. Med. Rehabil. 91, 254–270. ( 10.1097/PHM.0b013e318228bf0c) [DOI] [PubMed] [Google Scholar]
