Abstract
What information single neurons receive about general neural circuit activity is a fundamental question for neuroscience. Somatic membrane potential (Vm) fluctuations are driven by the convergence of synaptic inputs from a diverse cross-section of upstream neurons. Furthermore, neural activity is often scale-free, implying that some measurements should be the same, whether taken at large or small scales. Together, convergence and scale-freeness support the hypothesis that single Vm recordings carry useful information about high-dimensional cortical activity. Conveniently, the theory of “critical branching networks” (one purported explanation for scale-freeness) provides testable predictions about scale-free measurements that are readily applied to Vm fluctuations. To investigate, we obtained whole-cell current-clamp recordings of pyramidal neurons in visual cortex of turtles of undetermined sex. We isolated fluctuations in Vm below the firing threshold and analyzed them by adapting the definition of “neuronal avalanches” (i.e., spurts of population spiking). The Vm fluctuations that we analyzed were scale-free and consistent with critical branching. These findings recapitulated results from large-scale cortical population data obtained in separate, complementary microelectrode array experiments described previously (Shew et al., 2015). The simultaneously recorded local field potential did not provide a good match, demonstrating the specific utility of Vm. Modeling shows that estimation of dynamical network properties from neuronal inputs is most accurate when networks are structured as critical branching networks. In conclusion, these findings extend evidence of critical phenomena while also establishing subthreshold pyramidal neuron Vm fluctuations as an informative gauge of high-dimensional cortical population activity.
SIGNIFICANCE STATEMENT The relationship between membrane potential (Vm) dynamics of single neurons and population dynamics is indispensable to understanding cortical circuits. Just as important to the biophysics of computation are emergent properties such as scale-freeness, where critical branching networks offer insight. This report makes progress on both fronts by comparing statistics from single-neuron whole-cell recordings with population statistics obtained with microelectrode arrays. Not only are fluctuations of somatic Vm scale-free, they match fluctuations of population activity. Thus, our results demonstrate appropriation of the brain's own subsampling method (convergence of synaptic inputs) while extending the range of fundamental evidence for critical phenomena in neural systems from the previously observed mesoscale (fMRI, LFP, population spiking) to the microscale, namely, Vm fluctuations.
Keywords: balanced networks, membrane potential, neural computation, neuronal avalanches, renormalization group, scale-free
Introduction
How do cortical population dynamics impact single neurons? What can we learn about cortical population dynamics from single neurons? These questions are central to neuroscience. Uncovering the functional significance of multiscale organization within cerebral cortex requires knowing the relationship between the dynamics of networks and individual neurons within them (Nunez et al., 2013).
For pyramidal neurons in the visual cortex, somatic spike generation is ambiguously related to presynaptic firing (Tsodyks and Markram, 1997; Brunel et al., 2014; Gatys et al., 2015; Stuart and Spruston, 2015; Moore et al., 2017). Such neurons pass spiking information to many postsynaptic neurons (Lee et al., 2016). However, a presynaptic pool with multifarious neighboring and distant neurons (Hellwig, 2000; Wertz et al., 2015) provides excitatory and inhibitory synaptic inputs throughout the soma and complex dendritic architecture (Magee, 2000; Larkum et al., 2008; Moore et al., 2017). Input propagation to the axon hillock has both active and passive features (London and Häusser, 2005), and the membrane potential (Vm) response is increasingly nonlinear near the action potential threshold. Because Vm reflects these details of network propagation, it is a more informative observable than spiking alone.
Most computational neuroscientists use spiking data because spikes are “the currency of the brain” (Wolfe et al., 2010) and extracellular recording is straightforward compared with whole-cell recording. Yet, the paucity of single-neuron spiking (Shoham et al., 2006) and limited foreknowledge about connections (Helmstaedter, 2013) make extracellular single-unit observation an impoverished means of studying neuronal circuits. In contrast, subthreshold Vm fluctuations contain rich information about the circuits containing each neuron (Sachidhanandam et al., 2013; Petersen, 2017). Integral to gaining a neuron's view of the brain is uncovering relationships between the statistics of Vm fluctuations and fluctuations of local spiking and then contrasting against other plausible one-dimensional signals.
We look for such relationships in the strict predictions and rigorous measurements of scale-freeness used to identify a fragile network connectivity pattern known as “critical branching.” This pattern exhibits emergent properties valuable for information processing, such as higher susceptibility and dynamic range (Haldeman and Beggs, 2005; Beggs, 2008; Shew and Plenz, 2013; Shriki and Yellin, 2016; Timme et al., 2016), but omits some neuronal dynamics (Poil et al., 2008, 2012) unless extended (Porta and Copelli, 2018). The pattern is as follows: on average over all neuronal avalanches (spiking above baseline; Friedman et al., 2012), one spike leads to exactly one other spike. In most arbitrary networks, one spike leads on average to fewer or more than one subsequent spike; such networks are “subcritical” and “supercritical,” respectively. Among the dazzling emergent properties of “criticality” are universality, self-similarity, and scale-free correlations (Stanley, 1999).
These are as follows: A “universality class” is a set of incongruous systems exhibiting identical statistics only at their “critical points.” “Self-similarity” includes fractal patterns and power laws in geometrical analysis of avalanches (power laws are “scale-invariant,” popularly called “scale-free”). Avalanches of any duration have identical average shapes after normalization (Shaukat and Thivierge, 2016). Avalanche areas grow with duration as another power law (Sethna et al., 2001). However, observation methods must be consistent with event propagation (Priesemann et al., 2009; Yu et al., 2014; Levina and Priesemann, 2017). Additionally, pairwise correlation versus length or time are also power laws (Chialvo, 2010), meaning any input has a nonzero chance of propagating forever or anywhere.
In summary, the theory of critical branching networks offers superb standards of comparison for three reasons: neuronal avalanche analysis applies to Vm, offers promising insights, and makes precise predictions about fluctuation geometry. We study both Vm fluctuations and criticality with one simple question: Do Vm fluctuations match the scale-free statistics of cortical populations (see Fig. 1)?
To address this question, we simultaneously recorded somatic Vm from pyramidal neurons and local field potential (LFP) in visual cortex and performed avalanche analysis on the fluctuations. We found that subthreshold Vm fluctuation statistics match published microelectrode array (MEA) data. We used surrogate testing to show why negative LFP fluctuations do not match, and modeling to demonstrate that this correspondence depends on critical branching.
Materials and Methods
Surgery and visual cortex
All procedures were approved by Washington University's Institutional Animal Care and Use Committee and conform to the National Institutes of Health's Guide for the Care and Use of Laboratory Animals. Fourteen adult red-eared sliders (Trachemys scripta elegans, 150–1000 g) were used for this study; their sexes were not recorded. Turtles were anesthetized with propofol (2 mg/kg) and then decapitated. Dissection proceeded as described previously (Saha et al., 2011; Crockett et al., 2015; Wright et al., 2017a).
To summarize, immediately after decapitation, the brain was excised from the skull with the right eye intact and bathed in cold extracellular saline containing the following (in mM): 85 NaCl, 2 KCl, 2 MgCl2·6H2O, 20 dextrose, 3 CaCl2·2H2O, 45 NaHCO3. The dura was removed from the left cortex and right optic nerve and the right eye hemisected to expose the retina. The rostral tip of the olfactory bulb was removed, exposing the ventricle that spans the olfactory bulb and cortex. A cut was made along the midline from the rostral end of the remaining olfactory bulb to the caudal end of the cortex. The preparation was then transferred to a perfusion chamber (Warner RC-27LD recording chamber mounted to PM-7D platform) and placed directly on a glass coverslip surrounded by Sylgard. A final cut was made to the cortex (orthogonal to the previous and stopping short of the border between medial and lateral cortex), allowing the cortex to be pinned flat with the ventricular surface exposed. Multiple perfusion lines delivered extracellular saline to the brain and retina in the recording chamber (adjusted to pH 7.4 at room temperature).
We used a phenomenological approach to identify the visual cortex, described previously (Shew et al., 2015). In brief, this region was centered on the anterior lateral cortex, in agreement with voltage-sensitive dye studies (Senseman and Robbins, 1999, 2002). Anatomical studies identify this as a region of cortex receiving projections from lateral geniculate nucleus (Mulligan and Ulinski, 1990). We further identified a region of neurons as belonging to the visual cortex when the average LFP response to visual stimulation crossed a given threshold and patched within that neighborhood (radius of ∼300 μm).
Intracellular recordings
For whole-cell current-clamp recordings, patch pipettes (4–8 MΩ) were pulled from borosilicate glass and filled with a standard electrode solution containing the following (in mM): 124 KMeSO4, 2.3 CaCl2·2H2O, 1.2 MgCl2, 10 HEPES, and 5 EGTA, adjusted to pH 7.4 at room temperature. Cells were targeted for patching using a differential interference contrast microscope (Olympus). Vm recordings were collected using an Axoclamp 900A amplifier, digitized by a data acquisition panel (National Instruments PCIe-6321), and recorded using a custom LabVIEW program (National Instruments), sampling at 10 kHz. As described previously (Crockett et al., 2015; Wright and Wessel, 2017; Wright et al., 2017a,b), after initial patching, current was injected to elicit spiking before recording from a cell. This current injection test was also repeated intermittently between recording trials. Recording did not proceed if a cell spiked inconsistently (e.g., failure to spike, insufficient spike amplitude) in response to injected current or exhibited extreme depolarization in response to small current injection amplitudes. If a clog or loss of seal was suggested by unusually erratic Vm at short timescales, the current injection test was performed and, upon failure, the affected recording was marked for exclusion from analysis. We excluded cells that did not display a stable resting Vm for long enough to gather a sufficient number of avalanches. Up to three whole-cell recordings were made simultaneously. In total, we obtained recordings from 51 neurons from 14 turtles.
Recorded Vm fluctuations taken in the dark (no visual stimulation) were interpreted as ongoing activity. Such ongoing cortical activity was interrupted by visual stimulation of the retina with whole-field flashes and naturalistic movies as described previously (Wright and Wessel, 2017; Wright et al., 2017a,b). An uninterrupted recording of ongoing activity lasted for 2–5 min. Periods of visual stimulation were too short and were too frequently interrupted by action potentials to yield the great number of avalanches required for rigorous power-law fitting.
A sine-wave removal algorithm was used to remove 60 Hz line noise. Action potentials in turtle cortical pyramidal neurons are relatively rare. An algorithm was used to detect spikes, and the Vm recordings between spikes were extracted and low-pass filtered at 100 Hz. Vm recordings were detrended by subtracting the fifth percentile in a sliding 2 s window. The resulting signal was then shifted to have the same mean value as before subtraction. Detrending did not affect the size of Vm fluctuations (data not shown).
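The detrending step above can be sketched in Python as follows. This is a minimal re-implementation for illustration only; the function name, windowing details, and edge handling are our own (the published analysis used custom MATLAB/LabVIEW code):

```python
import numpy as np

def detrend_vm(vm, fs=10_000, win_s=2.0, pct=5):
    """Detrend a subthreshold Vm trace: subtract the 5th percentile in a
    sliding window (2 s by default), then restore the original mean."""
    half = int(win_s * fs) // 2
    baseline = np.empty_like(vm)
    for i in range(len(vm)):
        lo, hi = max(0, i - half), min(len(vm), i + half)
        baseline[i] = np.percentile(vm[lo:hi], pct)
    out = vm - baseline
    # shift so the detrended trace has the same mean as the original
    return out + (vm.mean() - out.mean())
```

A rolling-percentile filter like this removes slow drift while leaving fast fluctuations (and hence avalanche sizes) essentially unchanged, consistent with the control noted in the text.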
Extracellular recordings
Extracellular recordings were achieved with tungsten microelectrodes (MicroProbes; heat-treated tapered tip) with ∼0.5 MΩ impedance. Electrodes were slowly advanced through tissue under visual guidance using a manipulator (Narishige) while monitoring for activity using custom acquisition software (National Instruments). The extracellular recording electrode was located within ∼300 μm of patched neurons. Extracellular activity was collected using an AM Systems Model 1800 amplifier, band-pass filtered between 1 Hz and 20,000 Hz, digitized (NI PCIe-6231), and processed using custom software (National Instruments). Extracellular recordings were downsampled to 10,000 Hz and then filtered (100 Hz low-pass), yielding the LFP. The LFP was filtered and detrended as described above (see “Intracellular recordings” section) except that the mean of the entire signal was subtracted and the signal was multiplied by −1 before it was detrended. This final inverted signal is commonly featured in the literature as negative LFP or nLFP (Kelly et al., 2010; Kajikawa and Schroeder, 2011; Okun et al., 2015; Ness et al., 2016).
Experimental design and statistical analysis
Setwise comparisons
To measure differences between sets of statistics, we relied on three nonparametric measures. We used the MATLAB Statistics and Machine Learning Toolbox implementation of Fisher's exact test (Hammond et al., 2015). This allowed us to measure the effect size (odds ratio rOR) and statistical significance (p-value) of finding that consistency-with-criticality is more frequent or less frequent in an experimental group than a control group.
To quantify the similarity between the exponents measured in different sets of data, we used the MATLAB Statistics and Machine Learning Toolbox implementations of the exact Wilcoxon rank-sum test (Hammond et al., 2015) and the exact Wilcoxon signed-rank test. In both cases, the effect size, rSDF, is measured by the simple difference formula (Kerby, 2014). The rank-sum test is used when comparing nonsimultaneous recordings, such as comparing MEA data with Vm data. The signed-rank test is used when comparing data that can be paired, such as Vm data with concurrent LFP. When comparing whether a dataset differs from a specific value, we use the sign test.
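The simple difference formula is easy to state in code. Below is a minimal Python sketch (the function name is ours) for the two-independent-sample case: it counts the proportion of favorable pairs minus the proportion of unfavorable pairs:

```python
import numpy as np

def rank_sum_effect_size(x, y):
    """Effect size r_SDF via the simple difference formula (Kerby, 2014):
    the proportion of (x, y) pairs with x > y minus the proportion with
    x < y, over all pairs drawn from two independent samples."""
    diffs = np.asarray(x, float)[:, None] - np.asarray(y, float)[None, :]
    return ((diffs > 0).sum() - (diffs < 0).sum()) / diffs.size
```

The result ranges from −1 (all pairs favor y) through 0 (ties/balance) to +1 (all pairs favor x).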
The significance level was set at p = 0.05 for all tests. Each setwise comparison test stands alone as its own conclusion. They were not combined to assess the significance of any effect across sets-of-sets. Thus, we are not making multiple comparisons and no corrections are warranted (Bender and Lange, 2001).
Random surrogate testing
It is possible that scale-free observations have an origin in independent random processes of a type demonstrated previously (Touboul and Destexhe, 2017). To control for this, we phase-shuffled the Vm fluctuations using the amplitude-adjusted Fourier transform (AAFT) algorithm (Theiler et al., 1992). This tests against the null hypothesis that a measure on a time series can be reproduced by performing a nonlinear rescaling of a linear Gaussian process with the same autocorrelation (same Fourier amplitudes) as the original process. Phase information is randomized, which removes higher-order correlations but preserves the scale-free power-spectrum.
The AAFT tests only higher-order correlations, but a simpler algorithm tests against the null hypothesis that an un-rescaled linear Gaussian process with the same autocorrelation as the original process can produce the same results (Theiler et al., 1992). This is known as the unwindowed Fourier transform (UFT). Once we see what measures depend on the higher-order correlations with the AAFT, we can use the UFT to see how measures depend on the non-Gaussianity (nonlinear rescaling), which is inherent to excitable membranes. Using the UFT alone would make it difficult to attribute whether statistically significant differences are due to the rescaling or to the higher-order correlations (Rapp et al., 1994).
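Both surrogate algorithms can be sketched compactly in Python (assuming real-valued, evenly sampled traces; the original analysis was done in MATLAB, and function names here are our own). The UFT randomizes Fourier phases while preserving the amplitude spectrum; the AAFT wraps that step in rank-remapping so the amplitude distribution of the original trace is also preserved:

```python
import numpy as np

def uft_surrogate(x, rng):
    """Unwindowed Fourier-transform surrogate: randomize phases while
    preserving the amplitude spectrum (same linear autocorrelation)."""
    n = len(x)
    X = np.fft.rfft(x)
    phases = rng.uniform(0, 2 * np.pi, len(X))
    phases[0] = 0.0                      # keep the DC bin real
    if n % 2 == 0:
        phases[-1] = 0.0                 # keep the Nyquist bin real
    return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n)

def aaft_surrogate(x, rng):
    """Amplitude-adjusted Fourier-transform surrogate (Theiler et al.,
    1992): phase-randomize a Gaussianized copy of the data, then map the
    result back onto the original amplitude distribution by rank."""
    x = np.asarray(x, float)
    ranks = np.argsort(np.argsort(x))
    gauss = np.sort(rng.standard_normal(len(x)))[ranks]  # Gaussianize
    surr = uft_surrogate(gauss, rng)
    return np.sort(x)[np.argsort(np.argsort(surr))]      # restore amplitudes
```

By construction, an AAFT surrogate contains exactly the same sample values as the original trace, only reordered, while its linear correlation structure approximates the original.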
We performed AAFT and UFT on each Vm time series once and then compared how the two datasets performed on every metric used in this study. The datasets were compared with a matched Wilcoxon signed-rank test implemented via the MATLAB Statistics and Machine Learning Toolbox. Doing the comparison at a dataset level allowed us to obtain a discrimination statistic for every metric that we used without repeating the computationally expensive analysis procedure hundreds or thousands of times on every Vm trace. With enough individual recordings in each dataset, the matched Wilcoxon signed-rank test is a reliable measure that allowed us to efficiently compare all important metrics.
Neuronal avalanche analysis
Neuronal avalanches were defined by methods analogous to those described previously (Poil et al., 2012), which apply to uninterrupted ongoing signals; in contrast, methods based on event detection (Beggs and Plenz, 2003) require periods of nonactivity. A threshold is defined, and an avalanche starts when the signal crosses the threshold from below and ends when the signal crosses the threshold from above. The choice of threshold is a free parameter, and we set it to the 25th percentile before conducting the complete analysis. In similar situations (continuous nonzero signals), researchers have chosen one-half the median (Poil et al., 2012; Larremore et al., 2014). However, one-half the median cannot work for negative signals or signals with a high mean but low variance. Before analysis, threshold choices between the 15th and 50th percentiles were tested on data from the five cells with the most recordings to see how threshold may affect the number of avalanches. The 25th percentile was consistent with the existing literature and gave many avalanches compared with the alternatives. Having a large number of avalanches is important because it gives the best statistical resolution. An analysis with a choice of threshold that yields fewer avalanches (or changing the threshold for each recording) would be suspect for selecting serendipitous results. After the analysis was conducted, eight percentiles between the 15th and 50th were tested and gave similar power-law exponents.
We quantified each neuronal avalanche by its size A and its duration D. The avalanche size is the area between the processed Vm recording and the baseline. The baseline is another free parameter that was set at the second percentile of the processed Vm recording. The second percentile was chosen because its value is more stable than the absolute minimum. The avalanche duration D is the time between threshold crossings.
The lower limit of avalanche duration is defined by the membrane time constant, which has been reported to be between 50 and 140 ms for the turtle brain at room temperature (Ulinski, 1990; Larkum et al., 2008). We took a conservative approach by setting the limit at less than half the lower bound on membrane time constant, which was significantly less than the lower cutoff from power-law fits. Only avalanches with a duration longer than 20 ms were included in the analysis. Thus, we avoided artificially retaining only the events most likely to be power-law distributed.
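The avalanche definition above (threshold at the 25th percentile, baseline at the 2nd percentile, durations >20 ms) can be sketched in Python as follows; the function name and the simple rectangular integration are our own simplifications:

```python
import numpy as np

def detect_avalanches(signal, fs, thresh_pct=25, base_pct=2, min_dur_s=0.02):
    """Extract avalanches from a continuous trace: an avalanche runs from
    an upward to a downward crossing of the threshold (25th percentile);
    size A is the area between the signal and the baseline (2nd
    percentile); duration D is the time between crossings."""
    thresh = np.percentile(signal, thresh_pct)
    base = np.percentile(signal, base_pct)
    above = signal > thresh
    starts = np.flatnonzero(~above[:-1] & above[1:]) + 1
    ends = np.flatnonzero(above[:-1] & ~above[1:]) + 1
    sizes, durations = [], []
    for s in starts:
        later = ends[ends > s]
        if len(later) == 0:
            break                          # trace ends mid-avalanche
        e = later[0]
        dur = (e - s) / fs
        if dur > min_dur_s:                # discard events shorter than 20 ms
            durations.append(dur)
            sizes.append((signal[s:e] - base).sum() / fs)
    return np.array(sizes), np.array(durations)
```

Sizes therefore carry units of signal amplitude × time (e.g., mV·s for Vm), and the 20 ms cutoff discards events below the membrane-time-constant floor discussed above.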
Following the procedure described above, each processed Vm recording of uninterrupted ongoing activity (i.e., a recording of 2–5 min duration) yielded 327 ± 148 (mean ± SD) avalanches. This is insufficient for rigorous statistical fitting on individual recordings (Clauset et al., 2009). Therefore, we grouped avalanches from multiple recordings of ongoing activity of the same cells. Each cell produced between 3 and 19 recordings of ongoing activity (2–5 min duration each), with trials recorded intermittently over a period of 10–60 min. We grouped recordings based on whether they occurred in the first or second 20 min period since the beginning of recording from that neuron. Then, all the avalanches from the first or second 20 min period were grouped together, with one data object (the group) storing the size and duration of each avalanche. It was rare for neurons to have recordings in a third 20 min period, so these data were not included. Because there was a slow drift in the mean Vm over a period of several minutes, we scaled the avalanche sizes from each recording to have the same median as other recordings from the same group. Z-scoring was not useful for accounting for trial-to-trial variability because it does not affect whether a specific time point is above or below a certain percentile threshold; therefore, it cannot remove variability in avalanche duration. Windowed z-scoring introduces artifacts near action potentials. On average, four recordings were possible in each 20 min period. There were 51 neurons with multiple recordings of ongoing activity in the first 20 min of experimentation (thus 51 recording groups); of these, 18 neurons had an additional 20 min period with more than one recording. This produced a total of 69 groups with 1346 ± 1018 (mean ± SD) avalanches per group. Of these 69 groups, 57% had >1000 avalanches. The largest number of avalanches was 7495 and the smallest was 313. Only five groups had <500 avalanches.
We report on the 51 groups from the first 20 min period separately from the 18 groups with recordings from the second 20 min period of experimentation.
For each group, we evaluated the avalanche size and duration distributions with respect to power laws. To test whether a distribution followed a power law, we applied the rigorous statistical fitting routine described previously (Clauset et al., 2009). We tested three power-law forms: P(x) ∝ x−α (with and without truncation) (Deluca and Corral, 2013), as well as a power law with exponential cutoff, P(x) ∝ x−αe−x/r. We compared these against log normal and exponential alternative (non-power-law) hypotheses. Distribution parameters were estimated using maximum likelihood estimation (MLE) and the best model out of those fitted to the data was chosen using the Akaike information criterion (Bozdogan, 1987). It should be acknowledged that a small power-law region in the truncated form would be suspect for false positives, likewise for a strong exponential cutoff (Deluca and Corral, 2013). Finally, to decide whether a fitted model was plausible, pseudorandom datasets were drawn from a distribution with the estimated parameters and then the fraction that had a lower fit quality (Kolmogorov–Smirnov distance) than the experimental data was calculated. If this fraction, called the comparison quotient q, was >0.10, then the best-fit model (according to the Akaike information criterion) was accepted as the best candidate. Otherwise, the next best model was considered.
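Two key pieces of the fitting logic can be conveyed in a skeletal Python version: maximum likelihood estimation for a continuous power law and the comparison quotient q. This is a simplified stand-in for the full Clauset et al. (2009) routine, which additionally selects the lower cutoff and compares alternative models via the Akaike information criterion:

```python
import numpy as np

def fit_power_law_mle(x, xmin):
    """Continuous power-law MLE (Clauset et al., 2009):
    alpha = 1 + n / sum(ln(x / xmin)), using only x >= xmin."""
    x = np.asarray(x, float)
    x = x[x >= xmin]
    return 1.0 + len(x) / np.log(x / xmin).sum()

def comparison_quotient(x, xmin, alpha, n_surr=200, rng=None):
    """Comparison quotient q: fraction of synthetic power-law samples
    whose Kolmogorov-Smirnov distance to the fitted model is at least
    that of the data; q > 0.10 keeps the candidate model."""
    rng = rng if rng is not None else np.random.default_rng(0)
    x = np.sort(np.asarray(x, float))
    x = x[x >= xmin]
    emp = np.arange(1, len(x) + 1) / len(x)      # empirical CDF steps
    model_cdf = lambda v: 1.0 - (v / xmin) ** (1.0 - alpha)
    ks_data = np.max(np.abs(model_cdf(x) - emp))
    worse = 0
    for _ in range(n_surr):
        u = rng.uniform(size=len(x))
        synth = np.sort(xmin * (1.0 - u) ** (-1.0 / (alpha - 1.0)))
        if np.max(np.abs(model_cdf(synth) - emp)) >= ks_data:
            worse += 1
    return worse / n_surr
```

The synthetic samples are drawn by inverse-transform sampling from the fitted power law, so q estimates how often a true power law would fit its own model worse than the data do.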
We applied several additional steps and strict criteria to control for false positives. One such step was assessing whether the scaling relation was obeyed over the whole avalanche distribution for each group (not just the portion above the apparent onset of power-law behavior). The scaling relation is another power law, 〈A〉(D) ∝ Dγ, predicting how the measured size of avalanches increases geometrically with increasing duration (on average). For any dataset with three power laws, 〈A〉(D) ∝ Dγ (scaling relation), P(A) ∝ A−τ (size distribution), and P(D) ∝ D−β (duration distribution), the scaling relation exponent is predicted by the other two exponents as γ ≈ γp = (β − 1)/(τ − 1) (Scarpetta et al., 2018). Note that γp = 1 is a trivial value because it implies 〈A〉(D) ∝ D, which would suggest that individual avalanches were just noise symmetric about a constant value. This would mean that the average avalanche shape is just a flat line at some constant of proportionality, F((t − t0)/D) = a, where F is a function describing the shape of an avalanche of duration D, t0 is the beginning of the avalanche, and a is a constant.
Standards for consistency with critical point behavior.
We applied four standardized criteria to provide a transparent and systematic way to produce a binary classification; either “no inconsistencies with activity near a critical point were detected” or “some inconsistencies with activity near a critical point were detected.”
First, a collection of avalanches must be power-law distributed in both its size and duration distributions.
Second, the collection of avalanches must have a power-law scaling relation as determined by R2 > 0.95 (coefficient of determination) for linear least-squares regression on a log-log plot of average size versus duration: log〈A〉(D) ≈ γ log(D) + b. This R2 represents the best that any linear fit can achieve and must include all the avalanches, not a subset. We denote the scaling exponent (slope from linear regression) from this fit as γf.
Third, the scaling relation exponent predicted by theory (denoted as γp) must correspond to a trendline on a log-log scatter plot of 〈A〉(D) with an R2 that is within 90% of the best-case fitted trendline from the second criterion. Again, the R2 for the predicted scaling relation is calculated across all avalanches, not just the subset above the inferred lower cutoff of power-law behavior (which was found for the first criterion). This cross-validates agreement with theory.
Fourth, the fitted scaling relation exponent must be significantly greater than 1: (γf − 1) > σγf, where σγf is the SD of the fitted exponent. This last requirement eliminates scaling that might be trivial in origin. It is measured after obtaining the fitted scaling relation exponent for all of the data so that a dataset-level SD can be determined. It is also necessary to check that the set of scaling relation exponents from the power-law fits to all avalanche sets is significantly different from 1 at a dataset level. A scaling relation exponent equal to 1 suggests a linear relationship between mean size and duration that is not consistent with criticality in neural systems (Haldeman and Beggs, 2005).
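Criteria 2 and 4 reduce to a linear regression on a log-log plot. The sketch below (binning scheme and function name are our own) returns the fitted exponent γf, its SD, and R2:

```python
import numpy as np

def scaling_relation_fit(sizes, durations):
    """Regress log10(mean avalanche size) on log10(duration) and return
    the fitted exponent gamma_f, its SD, and R^2 (criteria 2 and 4)."""
    sizes = np.asarray(sizes, float)
    logd = np.round(np.log10(np.asarray(durations, float)), 2)
    bins = np.unique(logd)                      # duration bins (log10 units)
    idx = np.searchsorted(bins, logd)
    mean_loga = np.array([np.log10(sizes[idx == i].mean())
                          for i in range(len(bins))])
    # least-squares line; cov=True gives the parameter covariance matrix
    (gamma_f, b), cov = np.polyfit(bins, mean_loga, 1, cov=True)
    pred = gamma_f * bins + b
    r2 = 1.0 - np.sum((mean_loga - pred) ** 2) / np.sum(
        (mean_loga - mean_loga.mean()) ** 2)
    return gamma_f, float(np.sqrt(cov[0, 0])), r2
```

Criterion 2 then asks for r2 > 0.95, and criterion 4 asks for (gamma_f − 1) greater than the returned SD.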
Our four-criterion test cannot measure distance from a critical point nor eliminate all risk of false positives. To complete our analysis, we also looked at three additional factors: (1) whether exponent values match exponent values from other experiments as expected from the universality prediction of theory, (2) whether all the exponents within our dataset have similar scaling relation predictions, and (3) whether the avalanches within our dataset exhibit shape collapse across all the recordings.
Obtaining shape collapse and analyzing it quantitatively and qualitatively.
Shape collapse is a very literal manifestation of scale-invariance (also called “self-similarity”) (Sethna et al., 2001; Beggs and Plenz, 2003; Friedman et al., 2012; Pruessner, 2012; Timme et al., 2016). Avalanches of different durations should rise and fall in the same way on average. This average avalanche profile is called a scaling function. The average avalanche profile for avalanches of duration D is predicted to be A(t, D) = D(γ−1)F(t/D), where D(γ−1) is the power-law scaling coefficient that modulates the height of the profile and F is the universal scaling function itself (normalized in time). Shape collapse analysis provides an independent estimate of the scaling relation exponent γSC, which is only expected to be accurate at criticality (Sethna et al., 2001; Scarpetta and de Candia, 2013; Shaukat and Thivierge, 2016), and a visual test of conformation to an empirical scaling function.
Exponent estimation is very sensitive to the unrelated, intermediate rescaling steps involved in combining the avalanches from multiple recordings into one group. To obtain an estimate of the scaling relation exponent for each group, γSC, we averaged the scaling exponents, γi, found individually for each recording in that group (where i denotes the ith recording and SC denotes shape collapse).
Naturally, individual avalanche profiles are vectors of variable length D. We must first “rescale in time” to make them vectors of equal length without losing track of each vector's original duration. We do that by linear interpolation at 20 evenly spaced points. The jth avalanche profile of the ith recording is thus denoted by the 20-element vector Γij.
Next, the set of all profiles from recording i with the exact same duration D, denoted as ΓDi (bold indicates a set), were averaged and divided by a test scaling factor D(γ′i−1). We define this rescaled mean profile as Γ′Di(γ′) = 〈Γ〉D D−(γ′i−1), where the prime indicates a test rescaling and the average 〈Γ〉D is taken over all vectors in the set ΓDi. The choice of γi was optimized using MATLAB's fminsearch function to minimize the mean relative error between the average over all durations, 〈Γ′(γ′)〉, and the set members, Γ′Di(γ′), so that for recording i: γi = argminγ′ 〈|Γ′Di(γ′) − 〈Γ′(γ′)〉|〉.
This error minimization and applying the rescaling is the “collapse” in “shape collapse.”
Once we have the γi for the avalanches in each individual recording of ongoing activity, we compare the average, γSC = 〈γi〉, with the predicted and fitted scaling relation exponents for the group of recordings, γp and γf (statistical comparison tests are described in a previous section). Thus, quantitative analysis of shape collapse was done by comparing γSC, γp, and γf for each of the 69 groups individually.
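The collapse procedure can be summarized in Python as follows. This sketch interpolates each profile to 20 points, averages profiles of equal duration, and grid-searches the test exponent (a simple stand-in for MATLAB's fminsearch); the error function is our simplified version of the mean relative error:

```python
import numpy as np

def collapse_error(mean_profiles, gamma):
    """Mean relative error between each duration's rescaled profile and
    the grand-average profile, for a trial exponent gamma."""
    rescaled = {D: p / D ** (gamma - 1.0) for D, p in mean_profiles.items()}
    grand = np.mean(list(rescaled.values()), axis=0)
    span = grand.max() - grand.min()
    return np.mean([np.abs(p - grand).mean() / span
                    for p in rescaled.values()])

def shape_collapse_exponent(avalanches, n_pts=20):
    """Interpolate each avalanche profile to n_pts points, average
    profiles sharing a duration, and grid-search the exponent that best
    collapses them onto one scaling function."""
    by_dur = {}
    for profile in avalanches:            # each profile: 1-D array, length D
        D = len(profile)
        interp = np.interp(np.linspace(0, 1, n_pts),
                           np.linspace(0, 1, D), profile)
        by_dur.setdefault(D, []).append(interp)
    mean_profiles = {D: np.mean(v, axis=0) for D, v in by_dur.items()}
    grid = np.linspace(1.01, 3.0, 200)    # candidate exponents gamma'
    errs = [collapse_error(mean_profiles, g) for g in grid]
    return grid[int(np.argmin(errs))]
```

When the profiles truly share one scaling function, the error landscape has a single minimum at the underlying exponent, which is the shape-collapse estimate γSC.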
Visual assessment of how well avalanche profiles can be described by one universal scaling function supports the quantitative exponent estimation. This was performed by averaging all the profiles within specific duration bins (regardless of trial or group) and plotting them on top of one another. Shape collapse always requires a very large number of avalanches, so we had to combine avalanches from all 69 groups. However, the resting Vm differs from recording to recording and cell to cell. Therefore, avalanche profiles from different recordings are vertically misaligned. To combine avalanche profiles from different recordings, we divided all the profiles by a scalar value unique to each recording: the time average over all the collapsed profiles. This produced rescaled and mean-shifted profiles, denoted by a double prime, Γ″ijk = Γ′ijk/〈Γ′ijk〉, where k ∈ [1,20] denotes the interpolated time point and the average is taken over all collapsed profiles and time points of the recording. The set of avalanches from each recording was thus aligned, but individual variability was preserved, so profiles from different recordings could be averaged without introducing artifacts. This set, Γ″ij, contained a total of 106,220 shifted and rescaled profiles for the Vm data.
The set of shifted and rescaled profiles falling into a duration bin is denoted Γ″D. Each duration bin then provides its own estimate of the scaling function, 〈Γ″〉D ∼ F. For each bin, D was defined as the average duration of all constituent profiles. If <700 avalanches had a particular duration, we included the next longest duration iteratively until we met or exceeded 700 avalanches. This only applied to long durations. The choice of 700 was made because it allowed smooth averaging without excessively wide duration bins.
We also assessed the mean curvature of avalanche profiles from the rescaled profile 〈Γ″〉D for a particular duration. This allowed us to plot how curvature depends on duration. Mean curvature 〈κ〉 is defined as follows, where k still denotes time points: 〈κ〉 = 〈|Δ2Γ″k|/(1 + (ΔΓ″k)2)3/2〉k, with ΔΓ″k and Δ2Γ″k the discrete first and second differences of the profile with respect to k.
Model simulations
We simulated a model network consisting of N = 104 binary probabilistic model neurons. The model neurons form a directed random network (Erdős–Rényi random graph) where the probability that neuron j connects to neuron i is c. In a network of N neurons, this results in a mean in-degree and out-degree of cN. We tested nine unevenly spaced values of connection probability, c ∈ [0.5, 1, 3, 5, 7.5, 10, 15, 20, 25] × 10−2. As discussed previously (Kinouchi and Copelli, 2006; Larremore et al., 2011a, 2014), the impact of connectivity on network dynamics is nonlinear, so we sampled small connection probabilities more finely while maintaining thorough coverage of intermediate connection probabilities.
The strength of the connection from neuron j to neuron i is quantified in terms of the network adjacency or weight matrix, W, which has a simple and intuitive meaning: for each existing connection from neuron j to neuron i, Wij is the direct change in the probability that neuron i will fire at the next time step if neuron j spikes in the current time step.
The dynamics of this network are well characterized by the largest eigenvalue λ of the network weight matrix W, with criticality occurring at λ = 1 (Kinouchi and Copelli, 2006; Larremore et al., 2011a,b, 2012, 2014). The physical interpretation of λ is a “branching parameter” (Haldeman and Beggs, 2005) that governs the expected number of spikes immediately caused by the firing of one neuron. If λ = 1, then one spike causes one other spike on average; if λ > 1, then one spike causes more than one on average, and vice versa.
We tested five values of the largest eigenvalue at, near, and far from criticality: λ ∈ [0.9, 0.95, 1, 1.015, 1.03]. A fraction, χ, of the neurons is designated as inhibitory by multiplying all outgoing connections of an inhibitory neuron by −1. We tested nine values of the fraction of inhibitory neurons in the range from 0 to 0.25, including the value 0.2, which corresponds to the fraction of inhibitory neurons in the mammalian cortex (Meinecke and Peters, 1987). The magnitudes of nonzero weights are drawn independently from a uniform distribution on [0,2η] with mean η = λ/(cN(1 − 2χ)). The maximum eigenvalue is then fine-tuned to exactly the desired value by rescaling, W = λW′/λ′, where W′ and λ′ are the weight matrix and its maximum eigenvalue before correction.
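The network construction can be sketched as follows. This is an illustrative dense-matrix implementation, not the authors' code; the function name is an assumption, the eigenvalue is rescaled by its largest magnitude, and a small N is used in practice to keep the eigendecomposition cheap:

```python
import numpy as np

def build_network(N=10_000, c=0.05, chi=0.2, lam=1.0, rng=None):
    """Random weight matrix W as described in the text.

    Directed Erdos-Renyi connectivity with probability c; weight magnitudes
    uniform on [0, 2*eta] with eta = lam / (c * N * (1 - 2*chi)); a fraction
    chi of neurons made inhibitory by negating their outgoing weights; and a
    final rescaling W = lam * W' / lam' pinning the largest eigenvalue at lam.
    """
    rng = np.random.default_rng(rng)
    eta = lam / (c * N * (1 - 2 * chi))
    mask = rng.random((N, N)) < c                       # who connects to whom
    W = np.where(mask, rng.uniform(0, 2 * eta, (N, N)), 0.0)
    inhib = rng.choice(N, size=int(chi * N), replace=False)
    # Wij is the weight from j to i, so column j holds j's outgoing weights
    W[:, inhib] *= -1.0
    lam_prime = np.abs(np.linalg.eigvals(W)).max()
    return lam * W / lam_prime, inhib
```

For the full N = 10^4 network a sparse representation would be preferable; the dense version above keeps the construction transparent.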
The binary state Si(t) of neuron i at time t denotes whether the model neuron spikes (Si(t) = 1) or does not spike (Si(t) = 0) at time t. At each time step, the states of all neurons are updated synchronously according to the following update rule: Si(t) = Θ(∑jN WijSj(t − 1) − ξi(t)),
where ξi(t) is a random number on [0,1] drawn from a uniform distribution and Θ is the Heaviside step function. In addition to this update rule, a refractory period of one time step (∼2 ms) was imposed for certain parameter conditions. A simulation began by initiating the activity of one randomly chosen excitatory neuron and continued until overall network activity ceased. The process was then repeated.
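The simulation loop can be sketched as follows, assuming a weight matrix `W` and an index array of excitatory neurons; names and the return format are illustrative, not the authors' code:

```python
import numpy as np

def simulate(W, excitatory, refractory=True, max_steps=10_000, rng=None):
    """One run of the update rule S_i(t) = Theta(sum_j W_ij S_j(t-1) - xi_i(t)).

    Seeds a single randomly chosen excitatory neuron and runs until overall
    activity dies out (or max_steps). With `refractory`, a neuron that spiked
    at t-1 cannot spike at t. Returns the stacked state vectors over time.
    """
    rng = np.random.default_rng(rng)
    N = W.shape[0]
    S = np.zeros(N)
    S[rng.choice(excitatory)] = 1.0           # seed one excitatory neuron
    history = [S.copy()]
    for _ in range(max_steps):
        xi = rng.random(N)                    # uniform on [0, 1)
        S_new = (W @ S > xi).astype(float)    # Theta(P_i(t) - xi_i(t))
        if refractory:
            S_new[S > 0] = 0.0                # one-time-step refractory period
        S = S_new
        history.append(S.copy())
        if S.sum() == 0:                      # activity has ceased
            break
    return np.array(history)
```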
From the simulated binary states of 10^4 model neurons, we extracted three measures of simulated activity. First, the network activity F(t) = ∑i=1NSi(t) / N is the fraction of neurons spiking at time t. Second, the input to model neuron i at time t is Pi(t) = ∑jNWijSj(t − 1), which is almost always positive for our parameters. Note that Pi′(t) = Pi(t) × Θ(Pi(t)) directly represents the probability for neuron i to spike at time t. Third, we constructed a proxy for the Vm signal, Φi(t) = (αh ∗ Pi)(t), by convolving the input Pi(t) with an alpha function, αh(t) ∝ t exp(−t/hm), with hm = 2 time steps (assumed to be ∼4 ms).
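These three measures can be sketched as below. The exact alpha-kernel formula in the text is partly garbled, so this sketch assumes the common form αh(t) ∝ t·exp(−t/hm), normalized to unit mass; the function name and truncation length are also assumptions:

```python
import numpy as np

def extract_signals(history, W, i, h_m=2):
    """Network activity F(t), input P_i(t), and the Vm proxy Phi_i(t).

    `history` is the (T, N) array of binary states; `i` is the neuron whose
    input is read out. Phi_i is P_i causally convolved with an alpha kernel
    peaking h_m time steps after an input arrives.
    """
    N = W.shape[0]
    F = history.sum(axis=1) / N                   # fraction of neurons spiking
    P = history[:-1] @ W[i]                       # P_i(t) = sum_j W_ij S_j(t-1)
    t = np.arange(0, 10 * h_m)                    # truncated kernel support
    alpha = t * np.exp(-t / h_m)
    alpha /= alpha.sum()                          # normalize kernel mass
    Phi = np.convolve(P, alpha)[: len(P)]         # causal convolution
    return F, P, Phi
```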
A total of 405 different parameter combinations (connection density, inhibition, maximum eigenvalue) were simulated. Each combination was simulated 10 times. Based on the connection probability c and the fraction of inhibition χ, we distinguish four regions in parameter space classified according to the behavior of the critical model; that is, λ = 1.
The first region is the “positive weights” region, defined by χ = 0; without inhibition, activity increases or dies out in accordance with the branching parameter. With moderate inhibition and dense connectivity, there is a region of parameter space we call “quiet,” in which activity lasts only slightly longer than in a system with no inhibition. This region is defined by the ex post facto boundaries c ≥ e^(11χ)/25 and χ > 0. Further increasing inhibition relative to connection density produces behavior like “up and down” states (or “telegraph noise”) (Sachdev et al., 2004; Millman et al., 2010). We call this the “switching” regime because network activity switches between a low mean and a high mean; it is defined by c < e^(11χ)/25, c ≥ (10e^(12χ) − 13)/100, and χ > 0. When inhibition is high relative to connection density, the system enters the “ceaseless” region, where stimulating one neuron causes activity that effectively never dies out. An especially attractive feature of this model is that the “ceaseless” and “switching” regimes exhibit sustained self-generated activity, which provides a way to model spontaneous neural activity without externally imposed firing patterns.
Refractoriness was previously studied in the network without inhibition: dynamic range was inversely proportional to refractory period (Larremore et al., 2011a), but the empirical branching parameter (criticality) displayed no dependence on refractory period (Kinouchi and Copelli, 2006). In the studies that featured inhibition and introduced ceaselessness, no refractory period was used (Larremore et al., 2014). However, we found that, for some networks in the switching regime, the maximum eigenvalue was a better predictor of the empirical branching ratio if the refractory period was one time step. Because this relationship is central to our understanding of criticality in this model, we ran an initial testing cycle before each simulation began to decide whether to set the refractory period to one time step or to zero. Doing so ensures that the maximum eigenvalue of the connectivity matrix predicts critical-like phenomena in all regimes, while keeping the model consistent with prior studies.
We performed avalanche analysis on each of the simulated signals using the methods described above for Vm recordings. If the network was in the switching regime, then we only performed analysis on the periods when the network was in the mode (high or low mean) in which it spent the majority of its time. As before, the 25th percentile defined the avalanche threshold. If the signal had negative values, as in the case of single-neuron Vm proxies in networks with inhibition, then the signal was shifted by subtracting the second percentile. To obtain good statistics, we continued stimulating and extracting avalanches until a simulation either reached 10^4 avalanches, or reached 5 × 10^3 avalanches along with a very large file size or a very long computational time. This ensured that there were between 2000 and 10,000 avalanches per trial.
Data and software accessibility
All raw data are available at https://github.com/jojker/continuous_signal_avalanche_analysis and the code developed for this analysis is available upon request to the corresponding author.
Results
Single-neuron Vm fluctuations are thought to be dominated by synaptic inputs from multitudes of presynaptic neurons (Stepanyants et al., 2002; Brunel et al., 2014; Petersen, 2017). Because the way neurons integrate their diverse inputs is central to information processing in the brain, it is important that neuroscience gain a thorough understanding of the relationship between subthreshold Vm fluctuations and population activity. A basic step is to compare statistical analyses, especially analyses in which a meaningful relationship is expected. We investigated whether an avalanche analysis on Vm fluctuations would reveal the same signatures of scale-freeness and critical network dynamics found in measures of population activity (Fig. 1) (Friedman et al., 2012; Shew et al., 2015; Marshall et al., 2016). To address this comparison across organizational levels, we recorded Vm fluctuations from 51 pyramidal neurons in visual cortex of 14 turtles and assessed evidence for critical network dynamics from these recordings.
In a model investigation, we corroborated these results and evaluated the conditions needed to infer dynamical network properties from the inputs to single neurons. Finally, we extended the analysis to other commonly recorded time series of neural activity to compare their information content about dynamical network properties with that of Vm fluctuations.
Vm fluctuations reveal signatures of critical point dynamics
We obtained whole-cell recordings from pyramidal neurons in the visual cortex of the turtle ex vivo eye-attached whole-brain preparation (Fig. 2A). Recorded Vm fluctuations taken in the dark (no visual stimulation) were interpreted as ongoing activity. We analyzed the recorded ongoing Vm fluctuations employing the concept of “neuronal avalanches” (Beggs and Plenz, 2003; Poil et al., 2012; Shew et al., 2015), which are positive fluctuations of network activity. For continuous time series such as the Vm recording, one selects a threshold and a baseline. We defined a neuronal avalanche based on the positive threshold crossing followed by a negative threshold crossing of the Vm time series (Poil et al., 2012; Hartley et al., 2014; Larremore et al., 2014; Karimipanah et al., 2017a). We quantified each neuronal avalanche by its size, A, defined as the area between the curve and the baseline, and its duration D, defined as the time between threshold crossings (Fig. 2B).
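The avalanche definition above can be sketched in code. This is an illustrative implementation (names and the percentile-based threshold follow the simulation methods described earlier), not the authors' analysis pipeline:

```python
import numpy as np

def detect_avalanches(signal, dt=1.0, percentile=25):
    """Avalanches as supra-threshold excursions of a continuous signal.

    The threshold is a percentile of the signal (the text's simulations use
    the 25th). Size A is the area between the signal and the threshold during
    the excursion; duration D is the time between the upward and downward
    threshold crossings.
    """
    thr = np.percentile(signal, percentile)
    above = (signal > thr).astype(int)
    d = np.diff(above)
    starts = np.flatnonzero(d == 1) + 1      # first sample above threshold
    ends = np.flatnonzero(d == -1) + 1       # first sample back at/below it
    if len(ends) and (not len(starts) or ends[0] <= starts[0]):
        ends = ends[1:]                      # drop an excursion already underway
    n = min(len(starts), len(ends))
    sizes = np.array([(signal[s:e] - thr).sum() * dt
                      for s, e in zip(starts[:n], ends[:n])])
    durations = (ends[:n] - starts[:n]) * dt
    return sizes, durations
```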
To quantify the statistics of avalanche properties, we applied concepts and notations from the field of “critical phenomena” in statistical physics (Nishimori and Ortiz, 2011; Pruessner, 2012). Because the critical point is such a small target for any naturally occurring self-organization (Pruessner, 2012; Hesse and Gross, 2014; Cocchi et al., 2017) and there is considerable risk of false positives (Taylor et al., 2013; Hartley et al., 2014; Touboul and Destexhe, 2017; Priesemann and Shriki, 2018), asserting criticality in a new system or with a new tool requires extraordinary evidence. Because this is a new tool, we created four criteria and set quantifiable standards for concluding a system is consistent with criticality based on avalanche power laws and completed this exhaustive battery of tests with shape collapse, a geometrical analysis of self-similarity in the avalanche profiles (see “Experimental design and statistical analysis” section).
In brief, we found that both the size and duration distributions of the fluctuations treated as avalanches were consistent with power laws (Fig. 2C), P(A) ∝ A−τ and P(D) ∝ D−β matching widely reported exponents (Beggs and Plenz, 2003; Priesemann et al., 2009, 2014; Hahn et al., 2010; Klaus et al., 2011; Friedman et al., 2012; Shriki et al., 2013; Arviv et al., 2015; Shew et al., 2015; Karimipanah et al., 2017a,b), obeyed the scaling relation (Fig. 2D), and exhibited shape collapse over an expansive set of durations (Fig. 2E).
Specifically, of the 51 recording groups featuring data from the first 20 min period of recording from one cell, 98% had power laws in both size and duration distributions. The exponent values for the size distribution were τ = 1.91 ± 0.38 (median ±SD). Exponent values for the duration distribution were β = 2.06 ± 0.48. Of the 51 neurons with a recording group from the first 20 min, 18 had an additional 20 min period spanning multiple recordings. All these 18 groups had power laws in both size and duration; the exponent values for the size distribution were τ = 1.87 ± 0.29 and the exponent values for the duration distribution were β = 2.21 ± 0.39.
It is also important to confirm that power-law behavior extends across several orders of magnitude of avalanche durations. We demonstrate a power-law distribution over 2.45 ± 0.39 orders of magnitude of duration. For the scaling relation, we found a larger span with 2.62 ± 0.23 orders of magnitude across our whole avalanche duration range.
Another statistic crucial to signatures of criticality measures the relationship between the power laws describing size and duration of avalanches (Sethna et al., 2001; Beggs and Timme, 2012; Friedman et al., 2012). If the average avalanche size also scales with duration according to 〈A〉(D) ∝ Dγ, then the exponent γ is not independent, but rather depends on the exponents τ and β according to γ = (β − 1)/(τ − 1), regardless of criticality (Scarpetta et al., 2018). For critical systems this condition is enforced because avalanche profiles follow the same shape for all durations; consequently, this prediction is believed to be more precise for critical systems than for noncritical ones, and the exact values are important (Sethna et al., 2001; Nishimori and Ortiz, 2011). We found that average avalanche size scaled with duration 〈A〉(D) ∝ Dγ according to a power law and that the observed values of τ and β provided a good prediction γ = (β − 1)/(τ − 1) of the fitted γ (Fig. 2D).
Specifically, of the 51 recording groups from the first 20 min period, the fitted scaling relation exponents were γf = 1.19 ± 0.05 and the predicted scaling relation exponents were γp = 1.17 ± 0.35. For the additional second 20 min period (18 groups/neurons), the fitted scaling relation exponents were γf = 1.21 ± 0.05 and the predicted scaling relation exponents were γp = 1.28 ± 0.21.
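The comparison of fitted and predicted scaling-relation exponents can be sketched as follows; a log-log least-squares fit is assumed here for illustration, not the authors' fitting procedure:

```python
import numpy as np

def scaling_relation(sizes, durations, tau, beta):
    """Fit <A>(D) ~ D**gamma and compare with the prediction (beta-1)/(tau-1).

    Mean avalanche size per unique duration is fit by least squares in
    log-log coordinates. Returns (gamma_fit, gamma_pred).
    """
    D_unique = np.unique(durations)
    A_mean = np.array([sizes[durations == d].mean() for d in D_unique])
    slope, _ = np.polyfit(np.log(D_unique), np.log(A_mean), 1)
    gamma_pred = (beta - 1.0) / (tau - 1.0)
    return slope, gamma_pred
```

Agreement between the two values is the scaling-relation test applied throughout the Results.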
To effect a more convincing analysis, we defined four stringent criteria that must be independently satisfied before any set of avalanches can be deemed consistent with network dynamics near a critical point (see “Experimental design and statistical analysis” section). Overall, of the 69 groups of recordings (which include 18 of 51 cells twice), 98.6% had power laws in both the size and duration distributions of avalanches and 92.8% had scaling relations that were well fit by power laws (R2 > 0.95). All were deemed nontrivial by the test (γf − 1) > σγf, where σγf is the dataset SD, σγf = 0.051; the smallest value was γf = 1.094. The fourth constraint, that the R2 of the predicted scaling relation be within 10% of the best-fit scaling relation, was satisfied 85.6% of the time. Together, this set of criteria cannot measure distance from a critical point nor eliminate false positives. However, the takeaway is that 81% of all recording groups examined were judged to be consistent with network activity near a critical point.
Separating out results: 76% of the 51 recording groups from the first 20 min period and 94% of the recording groups from the second 20 min period were judged consistent with criticality. The general pattern is that both periods are consistent with criticality, but the second group meets our criteria much more frequently. This could be an effect related to the length of time we are able to maintain a patch, or it could be that a better patch results in both a longer stable recording and better inference of dynamical network properties.
To further discount the possibility of false positives, we investigated whether the avalanches within our dataset exhibited “shape collapse” (Fig. 2E). The scaling relation is a consequence of self-similarity (Sethna et al., 2001; Papanikolaou et al., 2011; Friedman et al., 2012; Marshall et al., 2016; Shaukat and Thivierge, 2016; Cocchi et al., 2017). In other words, avalanches all have the same “hump shape” no matter how long they last; this shape is called the scaling function or avalanche profile. The shape collapse also provides an independent estimate of the scaling relation exponent γ. If the estimated exponent, γSC, matches the fitted exponent, γf, then it is considered strong evidence of critical point behavior. For critical systems, the average avalanche profile of an avalanche of duration D is given by A(t,D) = D^(γ−1) f(t/D), where D^(γ−1) is a coefficient governing the scaling of height with duration and f is the scaling function that describes the universal shape of an avalanche at any duration. The similarity of avalanche profiles of different durations is qualitatively judged (Sethna et al., 2001; Beggs and Plenz, 2003; Friedman et al., 2012; Pruessner, 2012; Timme et al., 2016) by plotting empirically estimated scaling functions for several durations on top of one another after they have been rescaled as part of the process of estimating γSC.
We obtained shape collapse across more than one order of magnitude (between ∼50 and 700 ms) of avalanche durations. Below 50 ms, distinct peaks arose. Above 700 ms, the profile height grew faster than the power-law scaling that worked for shorter duration avalanches. This is observed as an apparent outlier in Figure 2E. This likely marks the point where avalanches become so long and so large that they begin to weakly activate the nonlinear action potential mechanism of the neuron. When comparing to plausible alternatives to Vm in later sections, we included analysis of mean curvature and avalanche profile peak height along with visual inspection of shape collapse quality (Fig. 2E). The shape collapse plots begin with short avalanches (20 ms) that are below the median lower cutoff for power-law behavior (which was 256 ms) but are well predicted by the scaling relation.
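The rescaling at the heart of shape collapse can be sketched as below; the function name, the 20-point interpolation grid, and the use of profile length as duration are illustrative assumptions following the text:

```python
import numpy as np

def shape_collapse(avalanches, gamma, n_points=20):
    """Rescale avalanche profiles to test for collapse onto one function.

    Each profile A(t, D) is interpolated onto n_points of rescaled time t/D
    and divided by D**(gamma - 1); if the data are self-similar, all rescaled
    profiles should overlay. Returns the stack of rescaled profiles.
    """
    u = np.linspace(0, 1, n_points)           # rescaled time t/D
    rescaled = []
    for profile in avalanches:
        D = len(profile)                      # duration in samples
        t = np.linspace(0, 1, D)
        interp = np.interp(u, t, profile)
        rescaled.append(interp / D ** (gamma - 1.0))
    return np.vstack(rescaled)
```

For perfectly self-similar synthetic avalanches A(t,D) = D^(γ−1) f(t/D), the rescaled profiles coincide up to interpolation error.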
The exponents estimated from the shape collapse were a good match for both the predicted and fitted scaling relation exponents. The groups of recordings from the first 20 min yielded γSC = 1.1868 ± 0.042. The average matched absolute percentage error was 1.3% with respect to γf. A matched signed-rank difference-of-medians test revealed that γf was not significantly different from γSC (simple difference effect size rSDF = 0.089, p = 0.063, where p < 0.05 would indicate a difference).
This stage of the analysis showed that, when fluctuations of Vm are treated like neuronal avalanches, they are consistent with criticality by the standards of power laws governing size and duration. We also showed that Vm avalanches exhibit geometrical self-similarity across more than one order of magnitude. These factors showed that the cortical circuits driving fluctuations of Vm are consistent with activity near a critical point according to standards of self-similarity. In our next investigation, we compared with population data from microelectrode arrays and other results from the literature to determine whether Vm fluctuations are consistent with the universality requirement of behavior near critical points and if they can be used to measure dynamical network properties.
Vm fluctuations are consistent with avalanches from previously obtained microelectrode array LFP recordings
We sought to interpret our results from the analysis of single-neuron Vm fluctuations in the context of the more commonly used analysis of multiunit spiking activity (Friedman et al., 2012; Shew et al., 2015; Marshall et al., 2016; Karimipanah et al., 2017a) or multisite LFP event detection from microelectrode array (MEA) data (Beggs and Plenz, 2003; Shew et al., 2015).
In a previous study, avalanche analysis was performed on LFP multisite MEA recordings from the visual cortex of a different set of 13 ex vivo eye-attached whole-brain preparations in turtles (Shew et al., 2015). Avalanches were inferred from the steady-state (after on response transients but before off response transients) of responses to visual presentation of naturalistic movies as opposed to the resting-state activity between presentations (which is where the Vm data come from). Avalanche size and duration distributions followed power laws.
The median exponents were τ = 1.94 ± 0.27 for the avalanche size distributions and β = 2.14 ± 0.32 for the avalanche duration distributions (Fig. 3A). A scaling relation existed, with exponent γf = 1.20 ± 0.06 fitted to the data and γp = 1.19 ± 0.07 from the average of the predicted scaling based on theory. The scaling power law extended over one to two orders of magnitude. Critical branching was more firmly established in Shew et al. (2015) by analyzing the branching ratio: the average ratio of events (i.e., spikes) from one moment in time to the next, computed only during identified avalanches. A critical branching network has a branching ratio of one, but empirically estimating it requires discrete events and an assiduous choice of time binning for analysis. Shew et al. (2015) found a branching ratio near one that was robust to reasonable choices of time bin and that varied with bin size as expected for critical branching. We are not aware of methods for estimating a branching ratio in continuous signals like Vm.
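The branching-ratio estimate described for the MEA data can be sketched as follows; the signature and names are illustrative assumptions, not the analysis code of Shew et al. (2015):

```python
import numpy as np

def branching_ratio(counts, avalanche_bounds):
    """Empirical branching ratio from binned event counts.

    Averages the ratio of events in bin t+1 to events in bin t, taken only
    within identified avalanches. `counts` is a 1-D array of events per time
    bin; `avalanche_bounds` is a list of (start, end) bin indices.
    """
    ratios = []
    for s, e in avalanche_bounds:
        seg = np.asarray(counts[s:e], dtype=float)
        prev = seg[:-1]
        ok = prev > 0                       # avoid division by empty bins
        ratios.extend(seg[1:][ok] / prev[ok])
    return float(np.mean(ratios))
```

A value near one is the critical-branching signature; as the text notes, this estimator needs discrete events, which is why it is not applied to the continuous Vm signal.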
The set of avalanche size, duration, and scaling relation exponents obtained from Vm fluctuations (Fig. 3B) was not distinguishable from the MEA-obtained set. The fitted scaling relation exponent γf had the least variability of all three kinds of exponents, so it is the most likely to show a difference. Thus, if a difference is not significant, then this suggests universality more strongly than for the avalanche size τ or duration β distribution exponents.
When we limited our analysis to the first 20 min period that contained multiple recordings (51 cells), neither the fitted scaling relation exponent nor the predicted scaling relation exponent was significantly different from the MEA results. The Wilcoxon rank-sum difference of medians test against the MEA data yielded (rSDF = 0.164, p = 0.37) and (rSDF = 0.08, p = 0.67), respectively. The median exponent values for the size and duration distributions were not significantly different from the median of the MEA data (rSDF = 0.164, p = 0.37) and (rSDF = 0.204, p = 0.265), respectively.
These results establish Vm fluctuations as an informative gauge of high-dimensional information while also demonstrating that the power-law characteristics are universal properties of the brain by showing a close match between data at different scales and under different conditions. Further underscoring universality, our results are also similar to the critical exponents measured from other animals such as the τ = 1.8 result from in vivo anesthetized cats (Hahn et al., 2010). Although an exhaustive literature search was not conducted here, others have conducted incomplete surveys (Ribeiro et al., 2010; Priesemann et al., 2014).
Single-neuron estimate of network dynamics is optimized at the network critical point
To gain a deeper insight into the relation between single-neuron input and network activity, we investigated a model network of probabilistic integrate and fire model neurons (Kinouchi and Copelli, 2006; Larremore et al., 2011a,b, 2012, 2014; Karimipanah et al., 2017a,b). This model network contains fundamental features of cortical populations, such as low connectivity, inhibition, and spiking while being sufficiently tractable for mathematical analysis (see “Model simulations” section).
In brief, the model network consists of N = 10^4 binary probabilistic model neurons (Fig. 4A). The connection probability c results in a mean in-degree and out-degree of cN. The connection strength from neuron j to neuron i is quantified in terms of the network adjacency matrix W. Each connection strength Wij is drawn from a distribution of (initially) positive numbers with mean η, where the distribution is uniform on [0,2η]. A fraction χ of the neurons are designated as inhibitory; that is, their outgoing connections are made negative. The binary state Si(t) of neuron i is updated according to Si(t) = Θ(∑jNWijSj(t − 1) − ξi(t)), where ξi(t) is a random number between 0 and 1 drawn from a uniform distribution and Θ is the Heaviside step function.
The largest eigenvalue, λ = ηcN(1 − 2χ), of the network adjacency matrix W characterizes the network dynamics, with critical network dynamics occurring at λ = 1. This tuning parameter λ controls the degree to which spike propagation “branches”: λ = 1 means that one spike creates one other spike on average, λ > 1 implies that one spike creates more than one other spike, whereas λ < 1 means that one spike creates less than one other spike (Haldeman and Beggs, 2005; Kinouchi and Copelli, 2006; Levina et al., 2007; Larremore et al., 2011b, 2012; Kello, 2013).
The input to model neuron i is Pi(t) = ∑jNWijSj(t − 1) and provides the link between network activity and single-neuron activity. From this we can derive a simple mathematical result characterizing how estimation of network properties is optimized at criticality.
If we let Ki(t − 1) denote the number of active neurons in the presynaptic population of neuron i, then we can rewrite the input to a model neuron as a sum of independent and identically distributed random variables drawn from the nonzero entries of W: Pi(t) = ∑k=1Ki(t−1) Wijk, where jk indexes the active presynaptic neurons. After implementing inhibition by inverting some elements of W, the distribution of weights is not uniform but piecewise uniform: weights are drawn uniformly from the interval [−2η,0] with probability χ and from the interval [0,2η] with probability 1 − χ. The nonzero entries of W are denoted with a prime; their mean is 〈W′ij〉 = η(1 − 2χ) and their SD is σW′ = η√(4/3 − (1 − 2χ)²). Now we can find the mean behavior of the input integration function as it relates to the presynaptic population: 〈Pi(t)〉 = Ki(t − 1)η(1 − 2χ), with SD √(Ki(t − 1)) σW′.
We learn three things by examining the mean behavior of the input integration function. First, the mean grows as O(Ki) but the SD grows only as O(√Ki), so the function becomes a more precise estimator of network activity with increasing activity in the presynaptic population (increasing Ki). Second, the input integration function Pi(t) is rarely negative: at the parameter combination c = 0.005 and χ = 0.25 (which has the largest variance relative to the mean), the mean becomes >1 SD larger than 0 when Ki > 5. Third, and most importantly, the input integration function is an averaging operator and the tuning parameter λ biases that averaging operation. To show this, we need only two observations: the instantaneous firing rate averaged over the presynaptic population is the number of active neurons divided by the expected total number of presynaptic neurons, ωi(t) = Ki(t)/cN; and rearranging the definition of λ gives λ/cN = η(1 − 2χ). Substituting these two observations into the mean behavior of our input integration function, we get the key mathematical result: 〈Pi(t)〉 = λωi(t − 1).
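This result can be checked numerically by Monte Carlo draws from the piecewise-uniform weight distribution; the parameter values below are illustrative choices within the paper's ranges, not specific values from the study:

```python
import numpy as np

# Draw K weights per repetition from the piecewise-uniform distribution:
# uniform on [-2*eta, 0] with probability chi, on [0, 2*eta] otherwise.
rng = np.random.default_rng(0)
N, c, chi, lam = 10_000, 0.05, 0.2, 1.0
eta = lam / (c * N * (1 - 2 * chi))           # mean weight magnitude
K = 100                                       # active presynaptic neurons
n_rep = 50_000                                # Monte Carlo repetitions
signs = np.where(rng.random((n_rep, K)) < chi, -1.0, 1.0)
P = (signs * rng.uniform(0, 2 * eta, (n_rep, K))).sum(axis=1)  # draws of P_i
omega = K / (c * N)                           # presynaptic mean firing rate
# The sample mean of P should approach lam * omega = 0.2 here.
```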
Note that Pi′(t) = Pi(t) × Θ(Pi(t)) directly represents the probability for the neuron to spike at time t.
These results demonstrate that the inputs to a neuron Pi and the instantaneous firing rate of that neuron are the result of an averaging operator acting on the presynaptic population, which is a subsample of the network. Furthermore, the tuning parameter λ not only modulates the relationship of single neuron firing to downstream events (also known as branching), but also governs how the input to a neuron relates to the presynaptic population. It biases the averaging operator to either amplify firing rate (λ > 1) or dampen it (λ < 1). Therefore, our model implements both critical branching and the inverse of the critical branching condition, a critical coarse-graining condition. The model is a network of subsampling operators that only capture whole-system statistics when λ = 1 and the operators reflect an unbiased stochastic estimate of mean firing rate among the subsample (the presynaptic population). This averaging operation may exist in many kinds of networks, including those with structure and those that are not critical branching networks, so this result helps to establish plausible generalizability.
To further evaluate the relation between single-neuron input and network activity under different conditions, we simulated the described network of 10^4 model neurons for a total of 405 different parameter combinations of connection probability, inhibition, and maximum eigenvalue (Fig. 4A); each parameter combination was repeated 10 times. We then compared the avalanche analysis results for simulated network activity F(t) = ∑i=1NSi(t)/N with those for the input to a single neuron (the input integration function). Note that Pi(t) = ∑jNWijSj(t − 1) is the probability that neuron i will fire at time t, also known as the instantaneous firing rate of neuron i.
Vm is not a direct representation of firing rate, but rather the firing rate is related to synaptic input through the F–I curve which is nonlinearly related to Vm. This nonlinearity could destroy the correspondence between the simulated single neuron signal and network activity. To better facilitate comparison of the simulated input integration function with the experimentally recorded Vm, we constructed a proxy for the subthreshold Vm, Φi(t), of a model neuron by convolving the simulated input Pi(t) with an alpha-function (see “Model Simulations” section).
The parameter space has four distinct patterns of critical network behavior (Fig. 4B), which were reflected qualitatively in the network activity. The presence of these paradoxical behaviors may indicate a second phase transition tuned by the balance of excitation and inhibition (Shew et al., 2011; Poil et al., 2012; Kello, 2013; Hesse and Gross, 2014; Larremore et al., 2014; Scarpetta et al., 2018). Several key results differ strongly between these regions of parameter space and thus are reported separately for each.
These regions are defined in terms of the connection density and inhibition and are shown in Figure 4B. First is the “positive weights” region, where there is no inhibition (χ = 0) and the network is a standard critical branching network. The second region, “quiet,” has a small increase in the fraction of inhibitory neurons. Activity lasts slightly longer than for the classically critical network. The third region is called the “switching” regime because network activity switches between a low mean and a high mean (like “up and down states”; Destexhe et al., 2003; Millman et al., 2010; Larremore et al., 2014; Scarpetta et al., 2018). This occurred in the middle portion of the values of connectivity and inhibition. Last, we have the “ceaseless” region, with a large fraction of inhibition relative to connection density, where activity never dies out. This region is defined by c < (10e12χ − 13)/100 and χ > 0. Three of these regimes are displayed in Figure 5A; the “quiet” region is mostly redundant to the “positive weights” region. The “ceaseless” and “switching” regimes exhibit sustained self-generated activity and are included with the intention to model ongoing spontaneous activity dynamics without contamination by externally imposed firing patterns (Mao et al., 2001).
We examined the magnitude of relative error between estimated exponents for the avalanche size distribution (Fig. 4C) to determine how well our proxy neural inputs, Φi(t), reflected network activity, F(t), in different parameter regions and for different values of the tuning parameter, λ. Importantly, the least error occurred at λ = 1, both with and without inhibitory nodes. This insensitivity to parameter differences supports the claim (Larremore et al., 2014) that the system becomes critical at λ = 1 even in the presence of inhibition.
However, the four regions of parameter space perform differently according to our four standardized criteria for consistency with criticality. In the “positive weights” region, 90% of 90 trials (nine points in parameter space with 10 trials per point) have network activity that meets all four criteria when the tuning parameter is set at criticality (λ = 1) (Fig. 4C). A total of 39% meet the criteria in the “ceaseless” region, 19% in the “quiet” region, and 67% in the “switching” region, which may indicate the location of a second phase transition and shows that evidence for precise criticality in this model is limited once inhibition is included.
As we vary the tuning parameter, we can clearly distinguish critical from noncritical systems. Overall, 47% of trials met all four criteria when λ = 1, whereas 3% did when λ = 0.95, 18% when λ = 1.015, 1% when λ = 0.9, and 1% when λ = 1.03 (Fig. 4D).
The estimated power-law exponents show that the avalanche size distributions for F(t), Pi(t), and Φi(t) are most alike at criticality. Note that the estimated exponent serves as a “scaling index,” a measure of the heavy tail even when a power law is not the statistical model that fits best (Jeżewski, 2004). The fact that matching between network activity and the input integration function was best at criticality is important because it underscores the scale-free nature of critical phenomena and contrasts with the results obtained when testing subsampling methods with a different relationship to network structure (Priesemann et al., 2009; Yu et al., 2014; Levina and Priesemann, 2017).
While the system was both critical (λ = 1) and in the positive weights region, our Vm proxy Φi(t) met all four criteria for consistency with criticality in 74% of 90 trials (Fig. 4D), whereas Pi(t) met all four in only 1%. The network activity had avalanche size and duration exponent values τF = 1.43 ± 0.04 and βF = 1.87 ± 0.09 (Fig. 4D), a fitted scaling relation exponent γFf = 1.83 ± 0.02, and a predicted exponent γFp = 1.99 ± 0.23. The Vm proxy, Φi(t), had slightly lower avalanche size and duration exponent values that fluctuated around the paired network values, τΦ = 1.40 ± 0.06 and βΦ = 1.73 ± 0.17 (Fig. 4D), and exclusively lower scaling relation exponents, γΦf = 1.68 ± 0.02. Although the unsmoothed Pi(t) varied considerably more, it had size and duration exponents that were almost exclusively higher than the paired network values, τP = 1.87 ± 0.50 and βP = 1.87 ± 0.50, with a fitted scaling relation exponent that was exclusively lower, γPf = 1.68 ± 0.02.
In Figure 5, we compared different population dynamics estimation techniques by looking at avalanches inferred from Pi(t) (the inputs to neuron i), and the Vm proxy Φi(t). Both Pi(t) and Φi(t) fluctuate about F(t), but Pi(t) is much noisier (Fig. 5A); in the ceaseless regime, Pi(t) and Φi(t) are systematically offset. Avalanches inferred from Φi(t) had average sizes that scaled with duration (Fig. 5B). Avalanches from Φi(t) consistently had duration and size distribution exponents that were closer to network avalanches than avalanches from Pi(t). However, Pi(t) performed satisfactorily in the sense that its error was systematically offset and best at criticality (Fig. 5C).
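For readers unfamiliar with avalanche analysis on continuous signals such as Φi(t), the sketch below shows one common convention (an illustrative assumption, not necessarily the paper's exact procedure): an avalanche is a contiguous excursion above a threshold, its size the area above threshold, and its duration the excursion length.

```python
import numpy as np

def avalanches(signal, thresh=None):
    """Extract avalanches from a continuous signal as contiguous
    excursions above a threshold. Size = area above threshold;
    duration = number of samples in the excursion. Thresholding at
    the median is an illustrative choice."""
    x = np.asarray(signal, dtype=float)
    if thresh is None:
        thresh = np.median(x)
    above = x > thresh
    # Find rising (+1) and falling (-1) edges of the above-threshold mask
    edges = np.diff(above.astype(int))
    starts = np.where(edges == 1)[0] + 1
    ends = np.where(edges == -1)[0] + 1
    if above[0]:
        starts = np.r_[0, starts]
    if above[-1]:
        ends = np.r_[ends, x.size]
    sizes = [float(np.sum(x[s:e] - thresh)) for s, e in zip(starts, ends)]
    durations = [int(e - s) for s, e in zip(starts, ends)]
    return sizes, durations

sizes, durations = avalanches([0, 1, 2, 1, 0, 0, 3, 0])
print(sizes, durations)   # two excursions above the median (0.5)
```

On the toy trace, the two excursions yield sizes [2.5, 2.5] and durations [3, 1].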
Including inhibition introduced several important differences. For the ceaseless region with λ = 1, far fewer trials met our criteria; however, Pi(t) followed F(t) much more closely. The network activity had avalanche size and duration exponent values τF = 1.48 ± 0.09 and βF = 1.53 ± 0.09 and a fitted scaling relation exponent γFf = 1.23 ± 0.11. The Vm proxy, Φi(t), had slightly higher avalanche size and duration exponent values that fluctuated around the paired network values, τΦ = 1.51 ± 0.19 and βΦ = 1.57 ± 0.17, but nearly identical scaling relation exponents, γΦf = 1.23 ± 0.11. Although the unsmoothed Pi(t) varied considerably more, it had size and duration exponents that were almost exclusively higher than the paired network values, τP = 1.88 ± 0.20 and βP = 2.18 ± 0.34, with a fitted scaling relation exponent that was slightly lower, γPf = 1.19 ± 0.07.
When λ ≠ 1, both Φi(t) and Pi(t) failed to meet all four criteria for criticality at the same high rate as F(t) (matching to within 1%). This lack of false positives confirms that these signals are useful for characterizing critical branching. In Figure 4B, we calculated the absolute magnitude of relative error between the size exponents from avalanche analysis performed on F(t) and on Φi(t). As expected, the avalanche distributions were usually not power laws according to our standards; in this case, the exponent is known as the “scaling index” and describes the decay of the distribution's heavy tail (Jeżewski, 2004).
When we set λ = 0.95, we see a moderate deterioration in the ability of either Φi(t) or Pi(t) to recapitulate network exponent values. The error is no longer systematic, so neither signal can be used to predict network values. The variability of the exponents increases greatly for Φi(t), whereas it decreases for Pi(t). The exponent error increases slightly over the λ = 1 case, and the base of the distribution is much broader.
When λ is reduced further, to λ = 0.90, the input integration function, Pi(t) ∼ λωi(t − 1), rapidly dampens impulses (ωi is the instantaneous firing rate over the presynaptic population for neuron i). Variability continues to increase, and a systematic offset does not return. The exponent error distribution is now much broader. With branching this low, events often cannot propagate to the randomly selected neuron; an exception is the “ceaseless” regime, where activity is still long lived.
When we set λ = 1.015, we see a dramatic deterioration in the ability of either Φi(t) or Pi(t) to recapitulate network values. Variability in exponent estimation increases for both Φi(t) and Pi(t). Exponent error increases rapidly, underscoring the inability to estimate network activity from neuron inputs.
Increasing λ further to λ = 1.03 produces an input integration function, Pi(t) ∼ λωi(t − 1), which rapidly amplifies all impulses so that the network saturates. The effect is that variability in the estimated exponents decreases and a systematic offset returns, with both Φi(t) and Pi(t) producing exponents that are exclusively and considerably higher than network values. Exponent error reveals that estimating network properties from the inputs to a neuron is probably not possible for supercritical networks in this model.
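The parameter sweep described above can be caricatured with a minimal probabilistic branching network in the spirit of Larremore et al. (2014). The network size, connection density, and spontaneous drive below are illustrative choices; the essential ingredient is rescaling the weight matrix so its largest eigenvalue equals the tuning parameter λ (criticality at λ = 1).

```python
import numpy as np

rng = np.random.default_rng(1)
N, lam = 200, 1.0                 # network size; lam is the tuning parameter
# Sparse random positive weights (5% connection density, illustrative)
A = rng.random((N, N)) * (rng.random((N, N)) < 0.05)
# Rescale so the largest eigenvalue magnitude equals lam
A *= lam / np.max(np.abs(np.linalg.eigvals(A)))

s = (rng.random(N) < 0.05).astype(float)   # initial spike vector
F = []
for t in range(500):
    p = A @ s + 0.001                      # summed input + weak spontaneous drive
    s = (rng.random(N) < np.clip(p, 0, 1)).astype(float)  # probabilistic firing
    F.append(s.sum())                      # network activity F(t)
print(len(F))
```

Sweeping `lam` below and above 1 reproduces, qualitatively, the damping and saturation regimes discussed in the text.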
The results here show that the Vm proxy represents an effective way of subsampling network flow. This is a hallmark of the near-critical region in the PIF model and a manifestation of scale-freeness. Criticality in our model corresponds to the point at which the inputs to a neuron represent an average of the activity of the presynaptic population. Importantly, we explored why it works, in addition to showing that it does work in experimental data. This analysis, presented in forthcoming sections, uncovered that proper temporal and spatial aggregation is important, as is the role of inhibition in Vm dynamics. This supports both the criticality hypothesis and tight balance (Barrett et al., 2013; Boerlin et al., 2013; Denève and Machens, 2016). Additionally, it has specific implications for the information content of Vm.
Predicted scaling relation exponent is more stable than avalanche size or duration exponents
A key part of the study of criticality in neural systems is the assumption that biological systems must self-organize to a critical point. The precise critical point is a very small target for a self-organizing mechanism in any natural system. Therefore, a key question is whether the self-organizing mechanism of the brain prioritizes efficiently achieving the information processing advantages of scale-free covariance at the expense of being slightly sub- or supercritical (which is a larger target) (Priesemann et al., 2014; Tomen et al., 2014; Williams-García et al., 2014; Gautam et al., 2015; Clawson et al., 2017).
Our data offered unexpected insight. It is known that, so long as three requirements are met, the scaling relation will be marginally obeyed: avalanche sizes and durations must be power-law distributed (with exponents τ and β, respectively), and average size must scale with duration according to a power law with exponent γ. Given those three requirements, one can derive a prediction for the scaling exponent, γp = (β − 1)/(τ − 1), without needing to assume criticality (Scarpetta et al., 2018). However, without any other assumptions, one expects β and τ to be independent, so plotting one against the other should produce a point cloud that is symmetrical, not stretched along a trendline (Fig. 3).
We analyzed the independence of τ, β, and γ measured from experimental data (where self-organization is hypothesized) and compared it with model data (where self-organization is impossible, but criticality is guaranteed). We found that β and τ are more independent and the predicted scaling relation is more variable for the model than for experimental data in which β and τ covary, apparently to maintain a fixed scaling relation prediction.
The previous multisite LFP recordings displayed a range of values for the avalanche size (τ) and duration (β) distribution exponents across the tested brain preparations. Interestingly, the exponent values were not independent; rather, the duration exponent varied linearly with the size exponent (Shew et al., 2015) (Fig. 3A). The single-neuron Vm fluctuations reported here produced a similar linear relationship between size and duration exponents (Fig. 3B). Algebraic manipulation of the predicted scaling exponent γp = (β − 1)/(τ − 1) provides a clue: if the scaling relation (β − 1) = γ(τ − 1) is obeyed and γp is a fixed universal property, then the linear relationship βj = 〈γp〉(τj − 1) + 1 holds across different cells and animals.
To demonstrate this important result, that variability in the predicted scaling relation is much less than expected, we propagated errors under the assumption of independent β and τ. We would then expect the SD of γp to be σ*γp ∼ 0.72, which is approximately twice the real value in the Vm data, σγp ∼ 0.35.
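The propagated SD referenced here follows from the first-order (delta-method) formula for γp = (β − 1)/(τ − 1) with independent τ and β. A sketch, using illustrative exponent values and SDs rather than the measured ones:

```python
import numpy as np

def sigma_gamma_pred(tau, beta, s_tau, s_beta):
    """First-order error propagation for gamma_p = (beta - 1)/(tau - 1),
    assuming tau and beta are independent:
    sigma = gamma_p * sqrt((s_beta/(beta-1))^2 + (s_tau/(tau-1))^2)."""
    g = (beta - 1.0) / (tau - 1.0)
    return g * np.sqrt((s_beta / (beta - 1.0)) ** 2
                       + (s_tau / (tau - 1.0)) ** 2)

# Illustrative values only (the measured SDs are not restated in this section)
print(round(sigma_gamma_pred(1.4, 1.8, 0.1, 0.1), 3))
```

An observed SD well below this propagated value is the signature of covariation between τ and β.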
The Pearson correlation, ρ, confirms strong dependence between τ and β: ρτβ = 0.61, p = 2.57 × 10−6 for the Vm data, whereas for the MEA data ρτβ = 0.96, p = 1.01 × 10−7. From this, we confirm what Figure 3 shows: τ and β are not independent, which implies the existence of an organizing principle connecting τ to β. Whatever the principle may turn out to be, one of its effects is the maintenance of low variability in γp at the expense of greater variability in τ and β.
A principal reason to suspect self-organization is that this trend is not seen in the model results. Importantly, in the model, τ and β are nearly independent, being only weakly correlated. In this model, there is no adaptive organizing principle driving the network to criticality; instead, the structure is fixed and set to be at the critical point. This shows how systems behave in the absence of self-organization: no parameter is being maintained at low variability at the expense of other parameters.
Limiting ourselves to simulated network activity for the λ = 1 case without inhibition (Fig. 4C), propagation of errors leads us to expect the SD of the scaling-relation prediction to be σ*γFp ∼ 0.27, which is very close to the real value of σγFp ∼ 0.23. The correlation is statistically significant at the 5% level, but much smaller: ρτβ = 0.23, p = 0.027. The linear trend between size and duration exponents was noted in the original study (Shew et al., 2015), where the authors were able to reproduce it by simulating a network with synaptic depression that adaptively restores critical behavior after an increase in network drive. They showed that the trendline is produced by corrupting their simulated data via randomly deleting 70–90% of spiking events and then changing the way events are grouped in time (adaptive time binning). Our Vm fluctuations have no counterpart to the adaptive time binning other than the intrinsic membrane time constant, which is not manipulated experimentally.
In conclusion, the linear trend between avalanche size and duration exponents is not a universal property of critical systems, because it was not found in the model. This suggests that the linear trend is enforced by an organizing principle at work in the brain but absent in the model. This principle prioritizes maintaining stability in either the scaling of avalanche size with duration or the power-law scaling of autocorrelation, which is closely related to the scaling relation and to scale-free covariance (Bak et al., 1987; Sethna et al., 2001).
Nonlinearity and temporal characteristics such as high-order correlation, proper combination of synaptic events, and signal timescale are required to reproduce network measures from single-electrode recordings
To demonstrate that subthreshold Vm fluctuations can be used as an informative gauge of cortical population activity, it is necessary to compare against alternative signals that have either been used by experimentalists as a measure of population activity or that share some key features of Vm but are missing others. By making these comparisons, we can illuminate which features of the Vm signal are responsible for its ability to preserve properties of cortical network activity. Additionally, it is necessary to determine whether the statistical properties of avalanches can be explained by random processes unrelated to criticality. To address these points of the investigation, we analyzed five surrogate signals: single-site LFP recorded concurrently with the Vm recordings, two phase-shuffled versions of Vm recordings, computationally inferred excitatory current, and the same inferred excitatory current further transformed to match Vm autocorrelation (which tests the role of IPSPs by making a Vm-like signal that lacks them).
Negative fluctuations of LFP disagree with Vm and MEA results and are inconsistent with avalanches in critical systems
The first alternative hypothesis to test is whether the LFP could yield the same results. We used low-pass filtered and inverted single-site LFP, which is commonly believed to measure local population activity. However, in our analysis, it did not recapitulate the results from either the MEA or the Vm avalanche analysis. We obtained viable single-site LFP recordings (see “Extracellular recordings” section), simultaneous with and adjacent to whole-cell recordings, for 38 of the 51 neurons reported above. We performed avalanche analyses on the LFP recordings using a procedure like the one described for the Vm recordings (see “Intracellular recordings” section) (Fig. 6). LFP recordings were grouped in the same way that Vm recordings were, to match them for comparison. However, the numbers of recordings are not the same, because two or three cells were patched alongside (within 300 μm of) one extracellular electrode and there was not always a simultaneous LFP recording. LFP also produced more avalanches per 2–5 min recording, NAV = 1128 ± 348. There were 23 periods of 20 min spanning multiple LFP recordings; these recordings were gathered into groups and matched against 49 Vm recording groups (38 from the first 20 min period, 11 from the second). Additionally, there were 16 periods of 20 min spanning only one LFP recording but containing >500 avalanches; the concurrent Vm recordings did not have enough avalanches. This gives us 39 LFP avalanche datasets.
The LFP recording groups performed poorly according to our four criteria for consistency with criticality. Of the 39 LFP recording groups, only 41% had acceptable scaling relation predictions and only 36% met all four standard criteria for criticality (Fig. 7A). The additional criterion of shape collapse was not observed (Fig. 6C); there was no linear trend among the exponents governed by the scaling relation and the exponents did not match MEA data (Fig. 3A). However, 85% produced power-law fits for size and duration, 92% had scaling relations well fit by power laws, and all were nontrivial. We expect from previous data (Touboul and Destexhe, 2017) that some fraction of noncritical data will pass the four standard criteria by chance as long as the data have a 1/f power spectrum.
To emphasize that these results arise by chance, we can limit ourselves to just those recordings with the best chance of meeting the scaling relation criterion by picking those that have power laws in the size and duration distributions. This is enough to expect the scaling relation to be obeyed if mean size scales geometrically with duration (Scarpetta et al., 2018). Even so, only 42% of recording groups meet the three remaining standard criteria for consistency with criticality. Therefore, having power laws is statistically independent of meeting the other criteria for consistency with criticality.
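The scaling-relation criterion invoked here, that mean avalanche size scales with duration as ⟨S⟩(D) ∝ D^γ, can be checked with an ordinary least-squares fit on log-log axes. The helper below is an illustrative sketch, not the authors' exact pipeline.

```python
import numpy as np

def fit_scaling_relation(sizes, durations):
    """Fit <S>(D) ~ D^gamma: average size at each duration, then a
    linear least-squares fit in log space. Returns the fitted exponent
    gamma_f and the coefficient of determination R^2."""
    sizes, durations = np.asarray(sizes, float), np.asarray(durations)
    ds = np.unique(durations)
    mean_s = np.array([sizes[durations == d].mean() for d in ds])
    x, y = np.log10(ds), np.log10(mean_s)
    gamma, intercept = np.polyfit(x, y, 1)
    resid = y - (gamma * x + intercept)
    r2 = 1.0 - resid.var() / y.var()
    return gamma, r2

# Synthetic check: exact scaling <S> = D^1.5 gives gamma_f = 1.5, R^2 = 1
D = np.arange(1, 50)
S = D ** 1.5
g, r2 = fit_scaling_relation(S, D)
print(round(g, 2), round(r2, 3))
```

A low R² on this fit is exactly the failure mode reported later for the smoothed inferred conductance.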
Not only does the single-site LFP data differ from MEA and Vm data because it fails to demonstrate consistency with criticality, it is also the case that the scale-free properties that do exist are not representative of the MEA data or the simultaneous Vm recordings. The failure was not because LFP recordings cooccurred with decreased consistency with criticality more generally. Eighty-one percent of the matched Vm recordings met all the criteria, whereas 58% of the LFP recordings did, a statistically significant dissimilarity (rOR = 7.65, p = 1.1 × 10−5).
The estimated exponents from all 39 LFP recording groups were highly variable. The duration distribution and scaling relation were most dissimilar to Vm and MEA data. Of the 33 LFP groups that were power-law distributed, the avalanche size exponent had a median value of τ = 1.90 ± 0.63, whereas the duration exponent was β = 1.41 ± 0.9 (very low) (Fig. 7A) and the fitted exponent was γf = 1.11 ± 0.02. The predicted scaling-relation exponents were inaccurate, with γp = 0.89 ± 0.76 for the subset of recording groups that had power laws.
The extreme variability makes it difficult to determine whether the size and duration exponents match other data, but the fitted scaling relation exponent was much less variable and more clearly separated from MEA or Vm results. The matched difference of median test (Wilcoxon signed-rank) between 49 recording groups found that the best fit τ, (τ = 1.90 ± 0.63), was not significantly distinguishable from the Vm data (rSDF = 0.15, p = 0.33), but β, (β = 1.41 ± 0.9), was dissimilar with a comparable effect size (rSDF = 0.17, p = 0.028); γf, (γf = 1.11 ± 0.02), was also dissimilar (rSDF = 0.25, p = 7.1 × 10−15).
When comparing against the 13 samples of MEA data, γf was significantly different from the MEA data (rSDF = 0.88, p = 9.21 × 10−8). This contrasts with our comparison between Vm and MEA data: in that case, the scaling relation was not distinguishable even with 51 points of comparison and very low variability, which should make a difference easier to detect. However, because of their extreme variability, the size and duration exponents failed to reach a 5% significance threshold for distinguishing them from the MEA data by a Wilcoxon rank-sum test (rSDF = 0.06, p = 0.766 for τ and rSDF = 0.29, p = 0.123 for β). This failure of inverted LFP to show the same statistical properties as multiunit activity may add a caveat to the assumptions behind the use of inverted LFP as a proxy for population activity (Kelly et al., 2010; Einevoll et al., 2013; Okun et al., 2015). Specifically, the amplitude of single-electrode negative LFP excursions is ambiguously related to the number of spiking neurons, whereas the use of electrode arrays as described previously (Beggs and Plenz, 2003; Shew et al., 2015) is more appropriate.
To summarize, single-site LFP fluctuations result from the superposition of local spiking and extracellular synaptic current from juxtaposed network elements (Kajikawa and Schroeder, 2011; Einevoll et al., 2013; Pettersen et al., 2014; Ness et al., 2016). These fluctuations were found to be less informative about the network dynamics than single-neuron Vm fluctuations. Vm fluctuations result from the superposition of EPSPs and IPSPs, indicating neuronal responses propagating in a manner consistent with the true neural network architecture. In other words, synaptic and spiking events driving fluctuations at single extracellular electrodes may be too badly out of sequence and distorted to faithfully represent neuronal avalanches, whereas the sequence of synaptic and spiking events driving somatic Vm fluctuations is functionally relevant by definition.
Stochastic surrogates are distinguishable from Vm or MEA results, revealing the importance of nonlinear filtering
After eliminating inverted LFP as an alternative single-electrode signal, it was important to establish whether our results could have been created from a linear combination of independent random processes (Touboul and Destexhe, 2017; Priesemann and Shriki, 2018) similar to those used when contesting evidence for critical brain dynamics (Bédard et al., 2006; Touboul and Destexhe, 2010, 2017). We also wanted to learn what effects nonlinearity (non-Gaussianity) has in signals such as Vm.
To address these questions, we used both the AAFT and UFT phase shuffling algorithms (see “Experimental design and statistical analysis” section). AAFT (Fig. 6) preserves both the exact power spectrum (autocorrelation) of the signal and nonlinear skew of signal values, but randomizes the phase (higher-order temporal correlations). UFT is the same but forces the distribution of signal values to be Gaussian. Using both allows us to attribute some characteristics to nonlinear rescaling and others to precise temporal correlation structure.
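A minimal sketch of the two surrogate constructions may help. Note that this is a simplified variant of the published AAFT algorithm (the full version first rank-remaps the signal onto a Gaussian before phase randomizing), so treat it as illustrative only.

```python
import numpy as np

def uft_surrogate(x, rng):
    """Phase-randomized surrogate: keep the amplitude spectrum
    (hence power spectrum / autocorrelation), randomize Fourier
    phases. The result is approximately Gaussian-distributed."""
    X = np.fft.rfft(x)
    phases = rng.uniform(0, 2 * np.pi, X.size)
    phases[0] = 0.0    # keep the DC component real
    phases[-1] = 0.0   # keep the Nyquist bin real (even-length signals)
    return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=len(x))

def aaft_surrogate(x, rng):
    """Amplitude-adjusted surrogate: phase randomize, then rank-remap
    back onto the original signal values, preserving the value
    distribution (nonlinear skew) and, approximately, the spectrum."""
    x = np.asarray(x, float)
    y = uft_surrogate(x, rng)
    out = np.empty_like(x)
    out[np.argsort(y)] = np.sort(x)   # impose the original value distribution
    return out

rng = np.random.default_rng(2)
x = rng.standard_normal(1024).cumsum()   # a 1/f-like random walk
s = aaft_surrogate(x, rng)
print(sorted(s) == sorted(x))            # values preserved exactly
```

The UFT surrogate destroys higher-order temporal correlations and the value distribution; the AAFT surrogate destroys only the former, which is what allows the two to be compared.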
Phase shuffling tends to preserve power laws because it explicitly preserves the 1/f trend of the power spectrum. However, the matched signed-rank test reveals that the values of the exponents change under both methods. Under the UFT transformation, the scaling relation and shape collapse became more trivial and more like the LFP. This suggests that both the nonlinear rescaling of input currents by membrane properties and the way that input populations interact throughout the intricate dendritic arborization are important.
For the 51 recording groups from the first 20 min, the AAFT reshuffled data yield a median size exponent of τ = 1.74 ± 0.29, whereas the duration exponent was β = 2.0 ± 0.34 (Fig. 7D). The fitted scaling relation exponent was γf = 1.19 ± 0.06 and the predicted scaling relation exponent was γp = 1.21 ± 0.49.
Pairing the surrogates to the original Vm data and performing the Wilcoxon signed-rank test for difference of medians gives (rSDF = 0.053, p = 2 × 10−4), (rSDF = 0.091, p = 0.08), and (rSDF = 0.207, p = 3 × 10−5) for τ, β, and γf, respectively. Therefore, τ and γf are both significantly different; this is supported by the fact that only 55% of the groups meet all four standard criteria for criticality, whereas 76% meet them for the original Vm time series. This difference between success rates is significant by Fisher's exact test (rOR = 2.67, p = 0.0363).
The failure mode for AAFT shuffled data was almost entirely in reduced goodness of fit (R2) for a power-law fit to its scaling relation, 17% fewer recording groups met the criterion R2 > 0.95 than for Vm (rOR = 4.18, p = 0.0093). When the shape collapse is examined, we see another clear, if qualitative, difference in the symmetry of any presumed scaling function (Fig. 6C). The AAFT shuffled dataset is not consistent with critical point behavior. Thus, we show that the exponent values and evidence for criticality, especially scaling and shape collapse that we inferred from Vm are not likely to come from random processes and are dependent on nonlinear temporal correlation structure.
The key feature of the UFT result is that the fitted scaling relation exponent is much lower, γf = 1.05 ± 0.049, which is significantly less than for AAFT (rSDF = 0.25, p = 1 × 10−13) and less than the LFP (rSDF = 0.228, p = 3 × 10−6). It is very close to trivial scaling but is still distinguishable from γf = 1 at a population level via the sign test (rSDF = 0.843, p = 2 × 10−10). Because the fitted scaling relation exponent and shape collapse were similar in both the UFT and LFP data, this suggests that lack of nonlinear rescaling (nonlinear filtering) may be a key feature of LFP that explains its failure to accurately reflect critical point behavior.
The UFT surrogates performed uniformly worse: 39% passed the criticality test, but given that the scaling relation exponent is so low, this reflects random chance and is significantly worse than the Vm results (rOR = 5.04, p = 3 × 10−4). The UFT phase shuffling results yield a median size exponent of τ = 1.69 ± 0.45, whereas the duration exponent was β = 1.81 ± 0.49. The predicted scaling relation exponent was γp = 1.01 ± 0.72. All are significantly different from the Vm results, (rSDF = 0.183, p = 0.005), (rSDF = 0.199, p = 2 × 10−4), and (rSDF = 0.249, p = 2 × 10−13) for τ, β, and γf, respectively. These results are redundant with the AAFT, confirming that our results do not have a trivial explanation.
When the shape collapse was examined for the UFT data, we again saw a clear, if qualitative, difference in the symmetry of any presumed scaling function (Fig. 6C). Taken together, our four standardized criteria followed by shape collapse analysis let us distinguish phase-shuffled Vm fluctuations from the original Vm fluctuations, even when limiting ourselves to data that meet the four criteria. Therefore, the phase-shuffled data showed that the evidence for criticality in the original Vm fluctuations depends on nonlinear temporal correlations.
Excitatory and inhibitory synaptic activity are both required for Vm fluctuations to match MEA avalanches
Having learned that single-site LFP recordings cannot be used to accurately infer the statistics of population activity, and knowing that low-pass filtered and inverted LFP is believed to reflect excitatory synaptic activity (Kajikawa and Schroeder, 2011; Buzsáki et al., 2012; Einevoll et al., 2013; Ness et al., 2016), this raises the question: to what extent do excitatory synaptic events contain evidence for network criticality?
Somatic Vm fluctuations are the complex result of spatially and temporally distributed excitatory and inhibitory synaptic inputs, further transformed by active and passive membrane properties in dendrites and soma. There is reason to believe that these features conspire to enforce the condition that Vm faithfully represents inputs from the presynaptic network (Barrett et al., 2013; Boerlin et al., 2013; Denève and Machens, 2016), similar to how input signals relate to presynaptic populations in our model. To address the stated question, we estimated the excitatory synaptic conductance changes gexc* from the Vm recordings using a previously developed inverse modeling algorithm (Yaşar et al., 2016) and applied the avalanche analysis to the inferred gexc* time series (Fig. 6).
The inferred excitatory conductance is plausibly related to the presynaptic population, however it failed to be a reliable measure of network dynamics (Fig. 7B). We cannot know whether the failure is because excitatory current does not contain enough information or because the signal's time constant is too short. Power laws in the avalanche size and duration distributions were observed in only 12% of the 51 groups from the first 20 min of recording. Comparing with Vm, this was very different (rOR = 375, p = 6 × 10−14). Shape collapse was absent from the inferred excitatory conductance (Fig. 6C) and none passed all four criteria for criticality. From this, we conclude that inferred excitatory conductances are not a good network measure.
One of many potential reasons for this failure could be the much shorter time constant of the inferred gexc* signal compared with the Vm signal. We saw exactly this situation when examining model results: Pi(t) failed to reproduce network values as well as its smoothed version, Φi(t), did. Therefore, we smoothed the gexc* signal with an alpha-function, chosen because it should impose a non-Gaussian distribution similar to that of the Vm signal. The time constant of the alpha-function was tuned to minimize the error between the autocorrelation of the smoothed gexc* signal and that of the original Vm signal. By doing so, we create a signal with a 1/f power spectrum that should exhibit power laws and reproduce many statistical features of Vm (Fig. 6).
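The smoothing step can be sketched as convolution with a normalized alpha-function whose time constant is chosen by grid search against a target autocorrelation. The function names and the squared-error metric below are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def alpha_kernel(tau, dt=1.0, length=None):
    """Alpha function (t/tau) * exp(1 - t/tau), normalized to unit area.
    tau is the time constant to be tuned."""
    if length is None:
        length = int(10 * tau / dt)
    t = np.arange(length) * dt
    k = (t / tau) * np.exp(1.0 - t / tau)
    return k / k.sum()

def autocorr(x, nlags):
    x = np.asarray(x, float) - np.mean(x)
    c = np.correlate(x, x, mode="full")[x.size - 1:]
    return c[:nlags] / c[0]

def tune_tau(signal, target, taus, nlags=200):
    """Pick the kernel time constant minimizing the squared error between
    the smoothed signal's autocorrelation and the target's."""
    ref = autocorr(target, nlags)
    errs = [np.sum((autocorr(np.convolve(signal, alpha_kernel(t),
            mode="same"), nlags) - ref) ** 2) for t in taus]
    return taus[int(np.argmin(errs))]

# Sanity check: if the target IS the tau=20 smoothing, the search recovers 20
rng = np.random.default_rng(3)
sig = rng.standard_normal(2000)
target = np.convolve(sig, alpha_kernel(20.0), mode="same")
print(tune_tau(sig, target, [5.0, 20.0, 80.0]))
```

In practice the target autocorrelation would come from the recorded Vm trace rather than a synthetic signal.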
Reinstating the autocorrelation did not restore scale-freeness. The smoothed signal did demonstrate power laws (94%), and one group met the standardized criteria for consistency with critical point behavior (Fig. 6D); however, this is chance. The average coefficient of determination for a fitted scaling relation on a log-log plot was R2 = 0.84 ± 0.14, so, overall, average avalanche sizes did not scale with duration as a power law. Nonetheless, this is a substantial improvement on the unsmoothed version, R2 = 0.68 ± 0.17, a statistically significant difference (rSDF = 0.054, p = 3 × 10−4).
The smoothed inferred gexc* signal (Fig. 6A) is visually more like the original Vm (Fig. 2B) than the AAFT shuffled Vm surrogate (Fig. 6A); however, it was a worse match. This shows that signals dependent only on excitation, even ones with very similar non-Gaussian distribution and power-spectrum do not reflect the statistics of population activity. Interactions between EPSPs and IPSPs may be needed.
In conclusion, the single-site LFP, the phase-shuffled recorded Vm, and the inferred excitatory conductance gexc*, including its smoothed version, all failed to reveal the critical network dynamics. However, there are either similarities between the signals or some remaining scale-free signatures which reveal the importance of signal aspects. To faithfully represent population activity statistics, a candidate signal must: have the right non-Gaussian distribution, the right 1/f power-spectrum characteristics, and be sensitively dependent on higher-order temporal correlations such as may result from the complex interplay of excitation and inhibition within the dendritic arborization of a pyramidal neuron in the visual cortex.
Discussion
Leveraging Vm and LFP recordings with modeling and MEA data yielded two principal findings: subthreshold Vms are a useful indicator of network activity, and this correspondence is inherent to critical coarse-graining. Scrutiny revealed that avalanche size and duration distribution exponents covary to maintain similar geometrical scaling across different experiments, a noteworthy observation. The following discussion emphasizes possible significance and research intersections, such as explaining disagreement with theory via subsampling effects or quasicriticality, or relating neural computation to a mathematical apparatus within critical systems theory.
Although “appropriating the brain's own subsampling method” is a novel description of whole-cell recordings, it was inspired by examples. Whole-cell recordings contain information about the network (Gasparini and Magee, 2006; Mokeichev et al., 2007; Poulet and Petersen, 2008; El Boustani et al., 2009; Okun et al., 2015; Cohen-Kashi Malina et al., 2016; Hulse et al., 2017; Lee and Brecht, 2018) and stimulus (Anderson et al., 2000; Sachidhanandam et al., 2013). Usually, the focus is using neural inputs to predict outputs, not to measure population dynamics (Destexhe and Paré, 1999; Carandini and Ferster, 2000; Isaacson and Scanziani, 2011; Okun et al., 2015). Additionally, long-time or large-population statistics, like our avalanche analysis, are useful for understanding neural code (Sachdev et al., 2004; Churchland et al., 2010; Crochet et al., 2011; Graupner and Reyes, 2013; McGinley et al., 2015; Gao et al., 2016) and are robust to noise. Our finding that single Vm recordings reflect scale-free network activity is significant as recording stability in behaving animals improves (Poulet and Petersen, 2008; Kodandaramaiah et al., 2012; Lee and Brecht, 2018). We open the door to using Vm fluctuations as windows into network dynamics.
Rigorous analysis supports our experimental conclusion: subthreshold Vm fluctuations mimic neuronal avalanches and evince critical phenomena, but negative LFP deflections do not, despite being purported network indicators (Bédard et al., 2006; Liu and Newsome, 2006; Kelly et al., 2010; Einevoll et al., 2013; Okun et al., 2015). We invoke network, not single-neuron, criticality (Gal and Marom, 2013; Taillefumier and Magnasco, 2013) because the trend between size and duration exponents agrees with MEA data. Our findings originate from spontaneous activity of ex vivo turtle visual cortex, which shares many connectivity and functional features with mammalian cortex (Ulinski, 1990; Larkum et al., 2008). Last, the results are not serendipitous noise, because the Vm dataset significantly differed from a dataset of phase-shuffled and rescaled surrogates (Theiler et al., 1992).
Readers keen on critical phenomena may notice our exponents differ from the exact theoretical predictions (τ = 1.5, β = 2) (Haldeman and Beggs, 2005). Others observing this mismatch have suggested the brain operates slightly off-critical (Hahn et al., 2010; Priesemann et al., 2014; Tomen et al., 2014).
An extension of this suggestion, quasicriticality (Williams-García et al., 2014), also explains the highly stable scaling relation: biological systems blocked from precise criticality may optimize properties that are maximized only for critical systems, becoming “quasicritical.” Correlation time and length are maximized only at criticality and are closely related to avalanche geometrical scaling (Tang and Bak, 1988; Sethna et al., 2001). If brains optimize correlation length, then a highly stable scaling relation may result. Furthermore, including inhibition (Larremore et al., 2014) makes our otherwise critical model less consistent with criticality, except that population statistics can still be inferred from input fluctuations. The stable scaling relation was absent from the model, which lacks any plasticity mechanisms; stable scaling may thus be a rare observation of self-organization principles such as quasicriticality. A contributing explanation is subsampling effects (Priesemann et al., 2009; Levina and Priesemann, 2017), but subsampling does not explain the stable scaling relation unless quasicriticality is also invoked.
The similarity between neuronal avalanches and neural input fluctuations is captured by a critical recurrent coarse-graining network
Our main modeling finding, that inputs to a neuron best reflect network activity in critical branching networks, is supported by a parameter sweep and detailed analysis. Our network had no structure, but structure exists at all scales of brain networks (Song et al., 2005; Perin et al., 2011; Shimono and Beggs, 2015) and can have profound impacts on network dynamics (Litwin-Kumar and Doiron, 2012; Mastrogiuseppe and Ostojic, 2018). We derived a relationship showing that the findings may be transferable to networks in which neural inputs fluctuate in proportion to some subsample's activity. We tuned this proportionality to one, but it can also emerge from plasticity (Shew et al., 2015; Del Papa et al., 2017). Tight balance suggests a biological mechanism by which subthreshold Vm tracks excitation in a presynaptic population: IPSPs can have their timing and strength "balanced" to truncate EPSPs that would otherwise last longer than spurts of presynaptic excitation (Barrett et al., 2013; Boerlin et al., 2013; Gatys et al., 2015; Denève and Machens, 2016). We use the Vm proxy Φi(t), an alpha function convolved with a point process, Pi(t). Φi(t) is more like Vm than Pi(t) and reproduces our experimental findings. Last, we investigate quasicriticality by including inhibition while tuning the maximum eigenvalue to what would be the critical point without inhibition.
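The Vm proxy described above can be sketched as follows: a point process Pi(t) (here a binary input-event train) convolved with an alpha function. The time constant, kernel length, and peak normalization are illustrative choices, not the paper's exact parameters.

```python
import numpy as np

def vm_proxy(P, tau=5.0, dt=1.0):
    """Vm proxy Phi(t): a binary point process P(t) convolved with an
    alpha function (t/tau) * exp(1 - t/tau), normalized to peak at 1.
    tau and dt are illustrative (arbitrary time units)."""
    t = np.arange(0.0, 10.0 * tau, dt)
    kernel = (t / tau) * np.exp(1.0 - t / tau)   # alpha function, peak = 1 at t = tau
    # Convolve and truncate to the length of the input train.
    return np.convolve(np.asarray(P, dtype=float), kernel)[: len(P)]
```

Each input event thus produces a smooth rise-and-decay deflection, so Φi(t) is a continuous signal like Vm rather than a sequence of impulses like Pi(t).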
Our model provides insights on network subsampling and the renormalization group. Usually, subsampling means selecting neurons at random or modeling an MEA with an arbitrary grid (Priesemann et al., 2009). Our "subsample" is the presynaptic population, represented by summing weighted inputs from active neurons. To our knowledge, this is the first subsampling analysis based on network convergence (i.e., the summation of synaptic inputs at the postsynaptic soma).
Subsampling distorts avalanche size and duration, likely creating differences between experimental results and theoretical predictions (Priesemann et al., 2009; Ribeiro et al., 2014; Levina and Priesemann, 2017; Wilting and Priesemann, 2018). Subsampling may explain disagreement among avalanche analyses of simulated network activity F(t), the Vm proxy Φi(t), and the single-neuron firing rate Pi(t). However, the Vm and MEA results deviate from theoretical predictions yet match each other. Either their subsampling errors are alike enough to produce similar distortions, or subsampling co-occurs with quasicriticality (Priesemann et al., 2014; Williams-García et al., 2014).
Intriguingly, the restricted Boltzmann machine (RBM; Aggarwal, 2018), a related model, has been exactly mapped to a "renormalization group" (RG) operator (Mehta and Schwab, 2014; Koch-Janusz and Ringel, 2018). RG is a mathematical apparatus relating bulk properties to minute interactions (Maris and Kadanoff, 1978; Nishimori and Ortiz, 2011; Sfondrini, 2012). It characterizes critical points of phase transitions (Stanley, 1999; Sethna et al., 2001) and helps to derive neuronal avalanche analysis predictions (Sethna et al., 2001; Le Doussal and Wiese, 2009; Papanikolaou et al., 2011; Cowan et al., 2013). RG operators coarse-grain and then rescale, like resizing a digital image. Crucially, iterating an appropriate operator on a critical system produces statistically identical "copies," whereas on noncritical systems the iterations diverge. Our model averages (coarse-grains) presynaptic pools to obtain an instantaneous firing probability for each neuron; then a logical operation (rescaling) sets the spiking states for the next iteration, demonstrating an RG-like operation that reproduces our experimental findings. Denève and Machens (2016) proposed a similar relationship between real Vm and presynaptic pools. The finding that a similar neural operation emerges in RBMs underscores the relevance of RG and the extension of our findings to structured or nonbranching networks. The importance is that a recurrent coarse-graining network may be like a scale-free ouroboros, displaying widespread scale-freeness if any component is critical or briefly driven by critical or scale-free inputs (Mehta and Schwab, 2014; Schwab et al., 2014; Aoki and Kobayashi, 2017; Koch-Janusz and Ringel, 2018).
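The coarse-grain-then-rescale update described above can be sketched as one step of a stochastic branching network. This is an illustrative construction, not the paper's exact network: weights, sparsity, and the weight-normalization scheme are assumptions.

```python
import numpy as np

def branching_step(state, W, rng):
    """One update of a stochastic branching network. Coarse-graining:
    each neuron's firing probability is a weighted sum over its
    presynaptic pool. Rescaling: a Bernoulli draw maps probabilities
    back to binary spiking states for the next iteration."""
    p = np.clip(W @ state, 0.0, 1.0)              # input-driven firing probabilities
    return (rng.random(state.size) < p).astype(float)

def make_critical_weights(n, k, rng):
    """Random sparse nonnegative weights rescaled so the largest
    eigenvalue is 1, the critical point of a branching network
    (illustrative; k is the mean out-degree)."""
    W = (rng.random((n, n)) < k / n) * rng.random((n, n))
    return W / np.abs(np.linalg.eigvals(W)).max()
```

Iterating `branching_step` with the largest eigenvalue tuned above, at, or below 1 yields runaway, scale-free, or quickly dying activity, respectively, which is the sense in which the iteration "diverges" away from criticality.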
Significantly, associating neuronal processing with critical branching may point to an organizing principle, the "information bottleneck principle." This principle balances dimensionality reduction (compression) against information loss (Tishby and Zaslavsky, 2015) and is reminiscent of efficient coding (Friston, 2010; Denève and Machens, 2016) and origins of tuning curves (Wilson et al., 2016; Heeger, 2017). Koch-Janusz and Ringel (2018) trained their network by maximizing mutual information between many inputs and few outputs. This produced nodes with receptive fields matching popular RG operators, and iterating the network yielded the correct power laws. Applications of RG to neural computation are known: image processing (Gidas, 1989; Mehta and Schwab, 2014; Saremi and Sejnowski, 2016), brain and behavior (Freeman and Cao, 2008), emergent consciousness (Werner, 2012; Fingelkurts et al., 2013; Laughlin, 2014), and hierarchical modular networks (Lee et al., 1986; Willcox, 1991) important for criticality (Moretti and Muñoz, 2013). Furthermore, our model's RG-like features are crucial to reproducing our experimental results. It follows that elegant RG operators, as in the RBM, might also capture biological neuronal processing, fulfilling the demand for beautiful neuroscience models (Roberts, 2018) while offering insights into organizing principles and scale-freeness.
Conclusion
We have established that subthreshold fluctuations of Vm in single neurons agree with neuronal avalanche statistics and with critical branching, whereas fluctuations in other single-electrode signals do not. Computational modeling showed that accurate inference requires critical-branching-like connectivity. Fluctuation size scales with duration more self-consistently in experimental results than in model results, hinting at self-organization. These findings are consistent with a nascent reduction of neural computation to coarse-graining operations that may explain the prevalence of critical-like behavior during spontaneous neural activity. Fully articulating the implications requires more investigation, but we have substantially extended the evidence for critical phenomena in neural systems while rigorously demonstrating that subthreshold Vm fluctuations of single neurons contain useful information about dynamical network properties.
Footnotes
This work was supported by the Whitehall Foundation (Grant 20121221 to R.W.) and the National Science Foundation (Collaborative Research in Computational Neuroscience Grant 1308159 to R.W.). We thank Woodrow Shew at the University of Arkansas for helping with the comparison with MEA data.
The authors declare no competing financial interests.
References
- Aggarwal CC (2018) Neural networks and deep learning: a textbook. Cham, Switzerland: Springer International Publishing.
- Anderson J, Lampl I, Reichova I, Carandini M, Ferster D (2000) Stimulus dependence of two-state fluctuations of membrane potential in cat visual cortex. Nat Neurosci 3:617–621. 10.1038/75797
- Aoki K-I, Kobayashi T (2017) Restricted Boltzmann machines for the long range Ising models. Mod Phys Lett B 30:1650401.
- Arviv O, Goldstein A, Shriki O (2015) Near-critical dynamics in stimulus-evoked activity of the human brain and its relation to spontaneous resting-state activity. J Neurosci 35:13927–13942. 10.1523/JNEUROSCI.0477-15.2015
- Bak P, Tang C, Wiesenfeld K (1987) Self-organized criticality: an explanation of the 1/f noise. Phys Rev Lett 59:381–384. 10.1103/PhysRevLett.59.381
- Barrett D, Denève S, Machens C (2013) Firing rate predictions in optimal balanced networks. In: Advances in neural information processing (NIPS) (Pereira F, Burges CJC, Bottou L, Weinberger KQ, eds), pp 1–9. Red Hook, NY: Curran Associates.
- Bédard C, Kröger H, Destexhe A (2006) Does the 1/f frequency scaling of brain signals reflect self-organized critical states? Phys Rev Lett 97:118102. 10.1103/PhysRevLett.97.118102
- Beggs JM (2008) The criticality hypothesis: how local cortical networks might optimize information processing. Philos Trans R Soc A Math Phys Eng Sci 366:329–343. 10.1098/rsta.2007.2092
- Beggs JM, Plenz D (2003) Neuronal avalanches in neocortical circuits. J Neurosci 23:11167–11177. 10.1523/JNEUROSCI.23-35-11167.2003
- Beggs JM, Timme N (2012) Being critical of criticality in the brain. Front Physiol 3:163. 10.3389/fphys.2012.00163
- Bender R, Lange S (2001) Adjusting for multiple testing: when and how? J Clin Epidemiol 54:343–349. 10.1016/S0895-4356(00)00314-0
- Boerlin M, Machens CK, Denève S (2013) Predictive coding of dynamical variables in balanced spiking networks. PLoS Comput Biol 9:e1003258. 10.1371/journal.pcbi.1003258
- Bozdogan H (1987) Model selection and Akaike's information criterion (AIC): the general theory and its analytical extensions. Psychometrika 52:345–370. 10.1007/BF02294361
- Brunel N, Hakim V, Richardson MJ (2014) Single neuron dynamics and computation. Curr Opin Neurobiol 25:149–155. 10.1016/j.conb.2014.01.005
- Buzsáki G, Anastassiou CA, Koch C (2012) The origin of extracellular fields and currents: EEG, ECoG, LFP and spikes. Nat Rev Neurosci 13:407–420. 10.1038/nrn3241
- Carandini M, Ferster D (2000) Membrane potential and firing rate in cat primary visual cortex. J Neurosci 20:470–484. 10.1523/JNEUROSCI.20-01-00470.2000
- Chialvo DR (2010) Emergent complex neural dynamics. Nat Phys 6:744–750. 10.1038/nphys1803
- Churchland MM, Yu BM, Cunningham JP, Sugrue LP, Cohen MR, Corrado GS, Newsome WT, Clark AM, Hosseini P, Scott BB, Bradley DC, Smith MA, Kohn A, Movshon JA, Armstrong KM, Moore T, Chang SW, Snyder LH, Lisberger SG, Priebe NJ, et al. (2010) Stimulus onset quenches neural variability: a widespread cortical phenomenon. Nat Neurosci 13:369–378. 10.1038/nn.2501
- Clauset A, Shalizi CR, Newman MEJ (2009) Power-law distributions in empirical data. SIAM Rev 51:661–703. 10.1137/070710111
- Clawson WP, Wright NC, Wessel R, Shew WL (2017) Adaptation towards scale-free dynamics improves cortical stimulus discrimination at the cost of reduced detection. PLoS Comput Biol 13:e1005574. 10.1371/journal.pcbi.1005574
- Cocchi L, Gollo LL, Zalesky A, Breakspear M (2017) Criticality in the brain: a synthesis of neurobiology, models and cognition. Prog Neurobiol 158:132–152. 10.1016/j.pneurobio.2017.07.002
- Cohen-Kashi Malina K, Mohar B, Rappaport AN, Lampl I (2016) Local and thalamic origins of correlated ongoing and sensory-evoked cortical activities. Nat Commun 7:12740. 10.1038/ncomms12740
- Cowan JD, Neuman J, Kiewiet B, Van Drongelen W (2013) Self-organized criticality in a network of interacting neurons. J Stat Mech Theory Exp 2013:P04030. 10.1088/1742-5468/2013/04/P04030
- Crochet S, Poulet JF, Kremer Y, Petersen CC (2011) Synaptic mechanisms underlying sparse coding of active touch. Neuron 69:1160–1175. 10.1016/j.neuron.2011.02.022
- Crockett T, Wright N, Thornquist S, Ariel M, Wessel R (2015) Turtle dorsal cortex pyramidal neurons comprise two distinct cell types with indistinguishable visual responses. PLoS One 10:e0144012. 10.1371/journal.pone.0144012
- Del Papa B, Priesemann V, Triesch J (2017) Criticality meets learning: criticality signatures in a self-organizing recurrent neural network. PLoS One 12:e0178683. 10.1371/journal.pone.0178683
- Deluca A, Corral Á (2013) Fitting and goodness-of-fit test of non-truncated and truncated power-law distributions. Acta Geophys 61:1351–1394. 10.2478/s11600-013-0154-9
- Denève S, Machens CK (2016) Efficient codes and balanced networks. Nat Neurosci 19:375–382. 10.1038/nn.4243
- Destexhe A, Paré D (1999) Impact of network activity on the integrative properties of neocortical pyramidal neurons in vivo. J Neurophysiol 81:1531–1547. 10.1152/jn.1999.81.4.1531
- Destexhe A, Rudolph M, Paré D (2003) The high-conductance state of neocortical neurons in vivo. Nat Rev Neurosci 4:739–751. 10.1038/nrn1198
- Einevoll GT, Kayser C, Logothetis NK, Panzeri S (2013) Modelling and analysis of local field potentials for studying the function of cortical circuits. Nat Rev Neurosci 14:770–785. 10.1038/nrn3599
- El Boustani S, Marre O, Béhuret S, Baudot P, Yger P, Bal T, Destexhe A, Frégnac Y (2009) Network-state modulation of power-law frequency-scaling in visual cortical neurons. PLoS Comput Biol 5:e1000519. 10.1371/journal.pcbi.1000519
- Fingelkurts AA, Fingelkurts AA, Neves CFH (2013) Consciousness as a phenomenon in the operational architectonics of brain organization: criticality and self-organization considerations. Chaos Solitons Fractals 55:13–31. 10.1016/j.chaos.2013.02.007
- Freeman WJ, Cao TY (2008) Proposed renormalization group analysis of nonlinear brain dynamics at criticality. In: Advances in cognitive neurodynamics: proceedings of the International Conference on Cognitive Neurodynamics, 2007 (Wang R, Shen E, Gu F, eds), pp 145–156. Netherlands: Springer.
- Friedman N, Ito S, Brinkman BA, Shimono M, DeVille RE, Dahmen KA, Beggs JM, Butler TC (2012) Universal critical dynamics in high resolution neuronal avalanche data. Phys Rev Lett 108:208102. 10.1103/PhysRevLett.108.208102
- Friston K (2010) The free-energy principle: a unified brain theory? Nat Rev Neurosci 11:127–138. 10.1038/nrn2787
- Gal A, Marom S (2013) Self-organized criticality in single-neuron excitability. Phys Rev E Stat Nonlin Soft Matter Phys 88:062717. 10.1103/PhysRevE.88.062717
- Gao L, Kostlan K, Wang Y, Wang X (2016) Distinct subthreshold mechanisms underlying rate-coding principles in primate auditory cortex. Neuron 91:905–919. 10.1016/j.neuron.2016.07.004
- Gasparini S, Magee JC (2006) State-dependent dendritic computation in hippocampal CA1 pyramidal neurons. J Neurosci 26:2088–2100. 10.1523/JNEUROSCI.4428-05.2006
- Gatys LA, Ecker AS, Tchumatchenko T, Bethge M (2015) Synaptic unreliability facilitates information transmission in balanced cortical populations. Phys Rev E Stat Nonlin Soft Matter Phys 91:062707. 10.1103/PhysRevE.91.062707
- Gautam SH, Hoang TT, McClanahan K, Grady SK, Shew WL (2015) Maximizing sensory dynamic range by tuning the cortical state to criticality. PLoS Comput Biol 11:e1004576. 10.1371/journal.pcbi.1004576
- Gidas B (1989) A renormalization group approach to image processing problems. IEEE Trans Pattern Anal Mach Intell 11:164–180. 10.1109/34.16712
- Graupner M, Reyes AD (2013) Synaptic input correlations leading to membrane potential decorrelation of spontaneous activity in cortex. J Neurosci 33:15075–15085. 10.1523/JNEUROSCI.0347-13.2013
- Hahn G, Petermann T, Havenith MN, Yu S, Singer W, Plenz D, Nikolic D (2010) Neuronal avalanches in spontaneous activity in vivo. J Neurophysiol 104:3312–3322. 10.1152/jn.00953.2009
- Haldeman C, Beggs JM (2005) Critical branching captures activity in living neural networks and maximizes the number of metastable states. Phys Rev Lett 94:058101. 10.1103/PhysRevLett.94.058101
- Hammond F, Malec JF, Nick T, Buschbacher RM (2015) Part II: statistics, introduction. In: Handbook for clinical research: design, statistics, and implementation, pp 77–78. New York: Demos Medical.
- Hartley C, Taylor TJ, Kiss IZ, Farmer SF, Berthouze L (2014) Identification of criticality in neuronal avalanches. II. A theoretical and empirical investigation of the driven case. J Math Neurosci 4:9. 10.1186/2190-8567-4-9
- Heeger DJ (2017) Theory of cortical function. Proc Natl Acad Sci U S A 114:1773–1782. 10.1073/pnas.1619788114
- Hellwig B (2000) A quantitative analysis of the local connectivity between pyramidal neurons in layers 2/3 of the rat visual cortex. Biol Cybern 82:111–121. 10.1007/PL00007964
- Helmstaedter M (2013) Cellular-resolution connectomics: challenges of dense neural circuit reconstruction. Nat Methods 10:501–507. 10.1038/nmeth.2476
- Hesse J, Gross T (2014) Self-organized criticality as a fundamental property of neural systems. Front Syst Neurosci 8:166. 10.3389/fnsys.2014.00166
- Hulse BK, Lubenov EV, Siapas AG (2017) Brain state dependence of hippocampal subthreshold activity in awake mice. Cell Rep 18:136–147. 10.1016/j.celrep.2016.11.084
- Isaacson JS, Scanziani M (2011) How inhibition shapes cortical activity. Neuron 72:231–243. 10.1016/j.neuron.2011.09.027
- Jeżewski W (2004) Scaling in weighted networks and complex systems. Physica A 337:336–356. 10.1016/j.physa.2004.01.028
- Kajikawa Y, Schroeder CE (2011) How local is the local field potential? Neuron 72:847–858. 10.1016/j.neuron.2011.09.029
- Karimipanah Y, Ma Z, Miller JK, Yuste R, Wessel R (2017a) Neocortical activity is stimulus- and scale-invariant. PLoS One 12:e0177396. 10.1371/journal.pone.0177396
- Karimipanah Y, Ma Z, Wessel R (2017b) Criticality predicts maximum irregularity in recurrent networks of excitatory nodes. PLoS One 12:e0182501. 10.1371/journal.pone.0182501
- Kello CT (2013) Critical branching neural networks. Psychol Rev 120:230–254. 10.1037/a0030970
- Kelly RC, Smith MA, Kass RE, Lee TS (2010) Local field potentials indicate network state and account for neuronal response variability. J Comput Neurosci 29:567–579. 10.1007/s10827-009-0208-9
- Kerby DS (2014) The simple difference formula: an approach to teaching nonparametric correlation. Compr Psychol 3:11.IT.3.1.
- Kinouchi O, Copelli M (2006) Optimal dynamical range of excitable networks at criticality. Nat Phys 2:348–351. 10.1038/nphys289
- Klaus A, Yu S, Plenz D (2011) Statistical analyses support power law distributions found in neuronal avalanches. PLoS One 6:e19779. 10.1371/journal.pone.0019779
- Koch-Janusz M, Ringel Z (2018) Mutual information, neural networks and the renormalization group. Nat Phys 14:578–582. 10.1038/s41567-018-0081-4
- Kodandaramaiah SB, Franzesi GT, Chow BY, Boyden ES, Forest CR (2012) Automated whole-cell patch-clamp electrophysiology of neurons in vivo. Nat Methods 9:585–587. 10.1038/nmeth.1993
- Larkum ME, Watanabe S, Lasser-Ross N, Rhodes P, Ross WN (2008) Dendritic properties of turtle pyramidal neurons. J Neurophysiol 99:683–694. 10.1152/jn.01076.2007
- Larremore DB, Shew WL, Ott E, Restrepo JG (2011a) Effects of network topology, transmission delays, and refractoriness on the response of coupled excitable systems to a stochastic stimulus. Chaos 21:025117. 10.1063/1.3600760
- Larremore DB, Shew WL, Restrepo JG (2011b) Predicting criticality and dynamic range in complex networks: effects of topology. Phys Rev Lett 106:058101. 10.1103/PhysRevLett.106.058101
- Larremore DB, Carpenter MY, Ott E, Restrepo JG (2012) Statistical properties of avalanches in networks. Phys Rev E Stat Nonlin Soft Matter Phys 85:066131. 10.1103/PhysRevE.85.066131
- Larremore DB, Shew WL, Ott E, Sorrentino F, Restrepo JG (2014) Inhibition causes ceaseless dynamics in networks of excitable nodes. Phys Rev Lett 112:138103. 10.1103/PhysRevLett.112.138103
- Laughlin RB (2014) Physics, emergence, and the connectome. Neuron 83:1253–1255. 10.1016/j.neuron.2014.08.006
- Le Doussal P, Wiese KJ (2009) Size distributions of shocks and static avalanches from the functional renormalization group. Phys Rev E Stat Nonlin Soft Matter Phys 79:051106. 10.1103/PhysRevE.79.051106
- Lee AK, Brecht M (2018) Elucidating neuronal mechanisms using intracellular recordings during behavior. Trends Neurosci 41:385–403. 10.1016/j.tins.2018.03.014
- Lee WC, Bonin V, Reed M, Graham BJ, Hood G, Glattfelder K, Reid RC (2016) Anatomy and function of an excitatory network in the visual cortex. Nature 532:370–374. 10.1038/nature17192
- Lee YC, Doolen G, Chen HH, Sun GZ, Maxwell T, Lee HY, Giles CL (1986) Machine learning using a higher order correlation network. Phys D 22:276–306. 10.1016/0167-2789(86)90300-6
- Levina A, Priesemann V (2017) Subsampling scaling. Nat Commun 8:15140. 10.1038/ncomms15140
- Levina A, Herrmann JM, Denker M (2007) Critical branching processes in neural networks. PAMM 7:1030701–1030702. 10.1002/pamm.200700029
- Litwin-Kumar A, Doiron B (2012) Slow dynamics and high variability in balanced cortical networks with clustered connections. Nat Neurosci 15:1498–1505. 10.1038/nn.3220
- Liu J, Newsome WT (2006) Local field potential in cortical area MT: stimulus tuning and behavioral correlations. J Neurosci 26:7779–7790. 10.1523/JNEUROSCI.5052-05.2006
- London M, Häusser M (2005) Dendritic computation. Annu Rev Neurosci 28:503–532. 10.1146/annurev.neuro.28.061604.135703
- Magee JC (2000) Dendritic integration of excitatory synaptic input. Nat Rev Neurosci 1:181–190. 10.1038/35044552
- Mao BQ, Hamzei-Sichani F, Aronov D, Froemke RC, Yuste R (2001) Dynamics of spontaneous activity in neocortical slices. Neuron 32:883–898. 10.1016/S0896-6273(01)00518-9
- Maris HJ, Kadanoff LP (1978) Teaching the renormalization group. Am J Phys 46:652–657. 10.1119/1.11224
- Marshall N, Timme NM, Bennett N, Ripp M, Lautzenhiser E, Beggs JM (2016) Analysis of power laws, shape collapses, and neural complexity: new techniques and MATLAB support via the NCC toolbox. Front Physiol 7:250. 10.3389/fphys.2016.00250
- Mastrogiuseppe F, Ostojic S (2018) Linking connectivity, dynamics, and computations in low-rank recurrent neural networks. Neuron 99:609–623.e29. 10.1016/j.neuron.2018.07.003
- McGinley MJ, David SV, McCormick DA (2015) Cortical membrane potential signature of optimal states for sensory signal detection. Neuron 87:179–192. 10.1016/j.neuron.2015.05.038
- Mehta P, Schwab DJ (2014) An exact mapping between the variational renormalization group and deep learning. Available at https://arxiv.org/abs/1410.3831.
- Meinecke DL, Peters A (1987) GABA immunoreactive neurons in rat visual cortex. J Comp Neurol 261:388–404. 10.1002/cne.902610305
- Millman D, Mihalas S, Kirkwood A, Niebur E (2010) Self-organized criticality occurs in non-conservative neuronal networks during "up" states. Nat Phys 6:801–805. 10.1038/nphys1757
- Mokeichev A, Okun M, Barak O, Katz Y, Ben-Shahar O, Lampl I (2007) Stochastic emergence of repeating cortical motifs in spontaneous membrane potential fluctuations in vivo. Neuron 53:413–425. 10.1016/j.neuron.2007.01.017
- Moore JJ, Ravassard PM, Ho D, Acharya L, Kees AL, Vuong C, Mehta MR (2017) Dynamics of cortical dendritic membrane potential and spikes in freely behaving rats. Science 355:eaaj1497. 10.1126/science.aaj1497
- Moretti P, Muñoz MA (2013) Griffiths phases and the stretching of criticality in brain networks. Nat Commun 4:2521. 10.1038/ncomms3521
- Mulligan KA, Ulinski PS (1990) Organization of geniculocortical projections in turtles: isoazimuth lamellae in the visual cortex. J Comp Neurol 296:531–547. 10.1002/cne.902960403
- Ness TV, Remme MW, Einevoll GT (2016) Active subthreshold dendritic conductances shape the local field potential. J Physiol 594:3809–3825. 10.1113/JP272022
- Nishimori H, Ortiz G (2011) Elements of phase transitions and critical phenomena. New York: OUP.
- Nunez PL, Srinivasan R, Ingber L (2013) Theoretical and experimental electrophysiology in human neocortex: multiscale dynamic correlates of conscious experience. In: Multiscale analysis and nonlinear dynamics (Pesenson MZ, ed), pp 147–177. Weinheim, Germany: Wiley.
- Okun M, Steinmetz N, Cossell L, Iacaruso MF, Ko H, Barthó P, Moore T, Hofer SB, Mrsic-Flogel TD, Carandini M, Harris KD (2015) Diverse coupling of neurons to populations in sensory cortex. Nature 521:511–515. 10.1038/nature14273
- Papanikolaou S, Bohn F, Sommer RL, Durin G, Zapperi S, Sethna JP (2011) Universality beyond power laws and the average avalanche shape. Nat Phys 7:316–320. 10.1038/nphys1884
- Perin R, Berger TK, Markram H (2011) A synaptic organizing principle for cortical neuronal groups. Proc Natl Acad Sci U S A 108:5419–5424. 10.1073/pnas.1016051108
- Petersen CCH (2017) Whole-cell recording of neuronal membrane potential during behavior. Neuron 95:1266–1281. 10.1016/j.neuron.2017.06.049
- Pettersen KH, Lindén H, Tetzlaff T, Einevoll GT (2014) Power laws from linear neuronal cable theory: power spectral densities of the soma potential, soma membrane current and single-neuron contribution to the EEG. PLoS Comput Biol 10:e1003928. 10.1371/journal.pcbi.1003928
- Poil SS, van Ooyen A, Linkenkaer-Hansen K (2008) Avalanche dynamics of human brain oscillations: relation to critical branching processes and temporal correlations. Hum Brain Mapp 29:770–777. 10.1002/hbm.20590
- Poil SS, Hardstone R, Mansvelder HD, Linkenkaer-Hansen K (2012) Critical-state dynamics of avalanches and oscillations jointly emerge from balanced excitation/inhibition in neuronal networks. J Neurosci 32:9817–9823. 10.1523/JNEUROSCI.5990-11.2012
- Porta LD, Copelli M (2018) Modeling neuronal avalanches and long-range temporal correlations at the emergence of collective oscillations: continuously varying exponents mimic M/EEG results. Available at https://www.biorxiv.org/content/10.1101/423921v1.
- Poulet JF, Petersen CC (2008) Internal brain state regulates membrane potential synchrony in barrel cortex of behaving mice. Nature 454:881–885. 10.1038/nature07150
- Priesemann V, Wibral M, Valderrama M, Pröpper R, Le Van Quyen M, Geisel T, Triesch J, Nikolić D, Munk MH (2014) Spike avalanches in vivo suggest a driven, slightly subcritical brain state. Front Syst Neurosci 8:108.
- Priesemann V, Shriki O (2018) Can a time varying external drive give rise to apparent criticality in neural systems? PLoS Comput Biol 14:e1006081. 10.1371/journal.pcbi.1006081
- Priesemann V, Munk MH, Wibral M (2009) Subsampling effects in neuronal avalanche distributions recorded in vivo. BMC Neurosci 10:40. 10.1186/1471-2202-10-40
- Pruessner G (2012) Self-organised criticality: theory, models and characterisation. Cambridge: Cambridge UP.
- Rapp PE, Albano AM, Zimmerman ID, Jiménez-Montaño MA (1994) Phase-randomized surrogates can produce spurious identifications of non-random structure. Phys Lett A 192:27–33. 10.1016/0375-9601(94)91010-3
- Ribeiro TL, Copelli M, Caixeta F, Belchior H, Chialvo DR, Nicolelis MA, Ribeiro S (2010) Spike avalanches exhibit universal dynamics across the sleep-wake cycle. PLoS One 5:e14129. 10.1371/journal.pone.0014129
- Ribeiro TL, Ribeiro S, Belchior H, Caixeta F, Copelli M (2014) Undersampled critical branching processes on small-world and random networks fail to reproduce the statistics of spike avalanches. PLoS One 9:e94992. 10.1371/journal.pone.0094992
- Roberts S (2018) Mathematician Carina Curto thinks like a physicist to solve neuroscience problems. Quanta Magazine. Available at https://www.quantamagazine.org/mathematician-carina-curto-thinks-like-a-physicist-to-solve-neuroscience-problems-20180619/.
- Sachdev RN, Ebner FF, Wilson CJ (2004) Effect of subthreshold up and down states on the whisker-evoked response in somatosensory cortex. J Neurophysiol 92:3511–3521. 10.1152/jn.00347.2004
- Sachidhanandam S, Sreenivasan V, Kyriakatos A, Kremer Y, Petersen CC (2013) Membrane potential correlates of sensory perception in mouse barrel cortex. Nat Neurosci 16:1671–1677. 10.1038/nn.3532
- Saha D, Morton D, Ariel M, Wessel R (2011) Response properties of visual neurons in the turtle nucleus isthmi. J Comp Physiol A Neuroethol Sens Neural Behav Physiol 197:153–165. 10.1007/s00359-010-0596-3
- Saremi S, Sejnowski TJ (2016) Correlated percolation, fractal structures, and scale-invariant distribution of clusters in natural images. IEEE Trans Pattern Anal Mach Intell 38:1016–1020. 10.1109/TPAMI.2015.2481402
- Scarpetta S, de Candia A (2013) Neural avalanches at the critical point between replay and non-replay of spatiotemporal patterns. PLoS One 8:e64162. 10.1371/journal.pone.0064162
- Scarpetta S, Apicella I, Minati L, de Candia A (2018) Hysteresis, neural avalanches, and critical behavior near a first-order transition of a spiking neural network. Phys Rev E 97:062305. 10.1103/PhysRevE.97.062305 [DOI] [PubMed] [Google Scholar]
- Schwab DJ, Nemenman I, Mehta P (2014) Zipf's law and criticality in multivariate data without fine-tuning. Phys Rev Lett 113:068102. 10.1103/PhysRevLett.113.068102
- Senseman DM, Robbins KA (1999) Modal behavior of cortical neural networks during visual processing. J Neurosci 19:RC3. 10.1523/JNEUROSCI.19-10-j0004.1999
- Senseman DM, Robbins KA (2002) High-speed VSD imaging of visually evoked cortical waves: decomposition into intra- and intercortical wave motions. J Neurophysiol 87:1499–1514. 10.1152/jn.00475.2001
- Sethna JP, Dahmen KA, Myers CR (2001) Crackling noise. Nature 410:242–250. 10.1038/35065675
- Sfondrini A. (2012) Introduction to universality and renormalization group techniques. Available at https://arxiv.org/abs/1210.2262.
- Shaukat A, Thivierge JP (2016) Statistical evaluation of waveform collapse reveals scale-free properties of neuronal avalanches. Front Comput Neurosci 10:29. 10.3389/fncom.2016.00029
- Shew WL, Plenz D (2013) The functional benefits of criticality in the cortex. Neuroscientist 19:88–100. 10.1177/1073858412445487
- Shew WL, Yang H, Yu S, Roy R, Plenz D (2011) Information capacity and transmission are maximized in balanced cortical networks with neuronal avalanches. J Neurosci 31:55–63. 10.1523/JNEUROSCI.4637-10.2011
- Shew WL, Clawson WP, Pobst J, Karimipanah Y, Wright NC, Wessel R (2015) Adaptation to sensory input tunes visual cortex to criticality. Nat Phys 11:659–663. 10.1038/nphys3370
- Shimono M, Beggs JM (2015) Functional clusters, hubs, and communities in the cortical microconnectome. Cereb Cortex 25:3743–3757. 10.1093/cercor/bhu252
- Shoham S, O'Connor DH, Segev R (2006) How silent is the brain: is there a "dark matter" problem in neuroscience? J Comp Physiol A Neuroethol Sens Neural Behav Physiol 192:777–784. 10.1007/s00359-006-0117-6
- Shriki O, Yellin D (2016) Optimal information representation and criticality in an adaptive sensory recurrent neuronal network. PLoS Comput Biol 12:e1004698. 10.1371/journal.pcbi.1004698
- Shriki O, Alstott J, Carver F, Holroyd T, Henson RN, Smith ML, Coppola R, Bullmore E, Plenz D (2013) Neuronal avalanches in the resting MEG of the human brain. J Neurosci 33:7079–7090. 10.1523/JNEUROSCI.4286-12.2013
- Song S, Reigl M, Nelson S, Chklovskii DB, Sjöström PJ (2005) Highly nonrandom features of synaptic connectivity in local cortical circuits. PLoS Biol 3:e68. 10.1371/journal.pbio.0030068
- Stanley HE. (1999) Scaling, universality, and renormalization: three pillars of modern critical phenomena. Rev Mod Phys 71:S358–S366. 10.1103/RevModPhys.71.S358
- Stepanyants A, Hof PR, Chklovskii DB (2002) Geometry and structural plasticity of synaptic connectivity. Neuron 34:275–288. 10.1016/S0896-6273(02)00652-9
- Stuart GJ, Spruston N (2015) Dendritic integration: 60 years of progress. Nat Neurosci 18:1713–1721. 10.1038/nn.4157
- Taillefumier T, Magnasco MO (2013) A phase transition in the first passage of a Brownian process through a fluctuating boundary with implications for neural coding. Proc Natl Acad Sci U S A 110:E1438–E1443. 10.1073/pnas.1212479110
- Tang C, Bak P (1988) Critical exponents and scaling relations for self-organized critical phenomena. Phys Rev Lett 60:2347–2350. 10.1103/PhysRevLett.60.2347
- Taylor TJ, Hartley C, Simon PL, Kiss IZ, Berthouze L (2013) Identification of criticality in neuronal avalanches. I. A theoretical investigation of the non-driven case. J Math Neurosci 3:5. 10.1186/2190-8567-3-5
- Theiler J, Eubank S, Longtin A, Galdrikian B, Doyne Farmer J (1992) Testing for nonlinearity in time series: the method of surrogate data. Phys D Nonlin Phenom 58:77–94. 10.1016/0167-2789(92)90102-S
- Timme NM, Marshall NJ, Bennett N, Ripp M, Lautzenhiser E, Beggs JM (2016) Criticality maximizes complexity in neural tissue. Front Physiol 7:425. 10.3389/fphys.2016.00425
- Tishby N, Zaslavsky N (2015) Deep learning and the information bottleneck principle. In: 2015 IEEE Information Theory Workshop (ITW), pp 1–5. Red Hook, NY: Curran Associates.
- Tomen N, Rotermund D, Ernst U (2014) Marginally subcritical dynamics explain enhanced stimulus discriminability under attention. Front Syst Neurosci 8:151. 10.3389/fnsys.2014.00151
- Touboul J, Destexhe A (2010) Can power-law scaling and neuronal avalanches arise from stochastic dynamics? PLoS One 5:e8982. 10.1371/journal.pone.0008982
- Touboul J, Destexhe A (2017) Power-law statistics and universal scaling in the absence of criticality. Phys Rev E 95:012413. 10.1103/PhysRevE.95.012413
- Tsodyks MV, Markram H (1997) The neural code between neocortical pyramidal neurons depends on neurotransmitter release probability. Proc Natl Acad Sci U S A 94:719–723. 10.1073/pnas.94.2.719
- Ulinski PS. (1990) The cerebral cortex of reptiles. In: Comparative structure and evolution of cerebral cortex, part I (Jones EG, Peters A, eds), pp 139–215. New York: Plenum.
- Werner G. (2012) From brain states to mental phenomena via phase space transitions and renormalization group transformation: proposal of a theory. Cogn Neurodyn 6:199–202. 10.1007/s11571-011-9187-4
- Wertz A, Trenholm S, Yonehara K, Hillier D, Raics Z, Leinweber M, Szalay G, Ghanem A, Keller G, Rózsa B, Conzelmann KK, Roska B (2015) Single-cell-initiated monosynaptic tracing reveals layer-specific cortical network modules. Science 349:70–74. 10.1126/science.aab1687
- Willcox CR. (1991) Understanding hierarchical neural network behaviour: a renormalization group approach. J Phys A Math Gen 24:2655.
- Williams-García RV, Moore M, Beggs JM, Ortiz G (2014) Quasicritical brain dynamics on a nonequilibrium Widom line. Phys Rev E Stat Nonlin Soft Matter Phys 90:062714. 10.1103/PhysRevE.90.062714
- Wilson DE, Whitney DE, Scholl B, Fitzpatrick D (2016) Orientation selectivity and the functional clustering of synaptic inputs in primary visual cortex. Nat Neurosci 19:1003–1009. 10.1038/nn.4323
- Wilting J, Priesemann V (2018) Inferring collective dynamical states from widely unobserved systems. Nat Commun 9:2325. 10.1038/s41467-018-04725-4
- Wolfe J, Houweling AR, Brecht M (2010) Sparse and powerful cortical spikes. Curr Opin Neurobiol 20:306–312. 10.1016/j.conb.2010.03.006
- Wright NC, Wessel R (2017) Network activity influences the subthreshold and spiking visual responses of pyramidal neurons in the three-layer turtle cortex. J Neurophysiol 118:2142–2155. 10.1152/jn.00340.2017
- Wright NC, Hoseini MS, Wessel R (2017a) Adaptation modulates correlated subthreshold response variability in visual cortex. J Neurophysiol 118:1257–1269. 10.1152/jn.00124.2017
- Wright NC, Hoseini MS, Yasar TB, Wessel R (2017b) Coupling of synaptic inputs to local cortical activity differs among neurons and adapts after stimulus onset. J Neurophysiol 118:3345–3359. 10.1152/jn.00398.2017
- Yaşar TB, Wright NC, Wessel R (2016) Inferring presynaptic population spiking from single-trial membrane potential recordings. J Neurosci Methods 259:13–21. 10.1016/j.jneumeth.2015.11.019
- Yu S, Klaus A, Yang H, Plenz D (2014) Scale-invariant neuronal avalanche dynamics and the cut-off in size distributions. PLoS One 9:e99761. 10.1371/journal.pone.0099761