Author manuscript; available in PMC: 2017 Dec 12.
Published in final edited form as: Neural Comput. 2017 Aug 4;29(10):2581–2632. doi: 10.1162/neco_a_01008

Differential Covariance: A New Class of Methods to Estimate Sparse Connectivity from Neural Recordings

Tiger W Lin 1, Anup Das 2, Giri P Krishnan 3, Maxim Bazhenov 4, Terrence J Sejnowski 5
PMCID: PMC5726979  NIHMSID: NIHMS922084  PMID: 28777719

Abstract

With our growing ability to record from many neurons simultaneously, making sense of these data has become a challenge. Functional connectivity is one popular way to study the relationships among multiple neural signals. Correlation-based methods are a widely used family of techniques for estimating functional connectivity, but because of explaining away and unobserved common inputs (Stevenson, Rebesco, Miller, & Körding, 2008), they produce spurious connections. The generalized linear model (GLM), which models spike trains as Poisson processes (Okatan, Wilson, & Brown, 2005; Truccolo, Eden, Fellows, Donoghue, & Brown, 2005; Pillow et al., 2008), avoids these confounds. We develop here a new class of methods that uses differential signals based on simulated intracellular voltage recordings and is equivalent to a regularized AR(2) model. We also extend the method to simulated local field potential recordings and calcium imaging. In all of our simulated data, the differential covariance-based methods achieved performance better than or similar to the GLM method and required fewer data samples. This new class of methods provides alternative ways to analyze neural signals.

1 Introduction

Simultaneous recording of large populations of neurons is an inexorable trend in current neuroscience research (Kandel, Markram, Matthews, Yuste, & Koch, 2013). Over the last five decades, the number of simultaneously recorded neurons has doubled approximately every seven years (Stevenson & Kording, 2011). One way to make sense of these big data is to measure the functional connectivity between neurons (Friston, 2011) and link the function of the neural circuit to behavior. As previously reviewed (Stevenson, Rebesco, Miller, & Körding, 2008), correlation-based methods have long been used to estimate functional connectivity. However, they suffer from explaining away and from unobserved common inputs (Stevenson et al., 2008), which makes it difficult to interpret the link between the estimated correlation and the physiological network. More recently, Okatan, Wilson, and Brown (2005), Truccolo et al. (2005), and Pillow et al. (2008) applied the generalized linear model to spike train data and showed good probabilistic modeling of the data.

To overcome these issues, we developed a new class of methods that use not only the spike trains but also the voltages of the neurons. They achieve better performance than the GLM method, are free of the Poisson process assumption, and require fewer data samples. They also provide directionality information about sources and sinks in a network, which is important for determining the hierarchical structure of a neural circuit.

In this article, we further show that our differential covariance method is equivalent to a regularized second-order multivariate autoregressive model. The multivariate autoregressive (MAR) model has been used to analyze neuroscience data (Friston, 2011; McIntosh & Gonzalez-Lima, 1991). In particular, this model with an arbitrary order has been discussed previously (Harrison, Penny, & Friston, 2003). The continuous AR process, known as the Ornstein-Uhlenbeck (OU) process (Uhlenbeck & Ornstein, 1930), has also been applied to model neuronal activity (Burkitt, 2006; Ricciardi & Sacerdote, 1979; Lánsky` & Rospars, 1995). However, the modified AR(2) model presented in this article has not been used previously.

All of the data that we analyze in this article are simulated data so that we can compare different methods with ground truth. We first generated data using a simple passive neuron model and provided theoretical proof for why the new methods perform better. Then we used a more realistic Hodgkin-Huxley (HH)–based thalamocortical model to simulate intracellular recordings and local field potential data. This model can successfully generate sleep patterns such as spindles and slow waves (Bazhenov, Timofeev, Steriade, & Sejnowski, 2002; Chen, Chauvette, Skorheim, Timofeev, & Bazhenov, 2012; Bonjean et al., 2011). Since the model has a cortical layer and a thalamic layer, we further assume that the neural signals in the cortical layer are visible by the recording instruments, while those from the thalamic layer are not. This is a reasonable assumption for many experiments. Since the thalamus is a deep brain structure, most experiments involve only measurements from the cortex.

Next, we simulated networks of 1000 Hodgkin-Huxley neurons, sparsely connected, with 80% excitatory neurons and 20% inhibitory neurons. As in real experiments, we recorded simulated calcium signals from only a small percentage of each network (50 neurons) and compared the performance of different methods. In all simulations, our differential covariance-based methods achieve performance better than or similar to the GLM method, and in the LFP and calcium imaging data sets, they achieve the same performance with fewer data samples.

The article is organized as follows. In section 2, we introduce our new methods. In section 3, we show the performance of all methods and explain why our methods perform better. In section 4, we discuss the advantages and generalizability of our methods. We also propose a few improvements for the future.

2 Methods

2.1 Simulation Models Used to Benchmark the Methods

2.1.1 Passive Neuron Model

To validate and test our new methods, we first developed a passive neuron model. Because of its simplicity, we can provide some theoretical proof for why our new class of methods is better. Every neuron in this model has a passive cell body with capacitance C and a leakage channel with conductance gl. Neurons are connected with a directional synaptic conductance gsyn; for example, neuron i receives inputs from neurons i − 4 and i − 3:

$C \frac{dV_i}{dt} = g_{syn}V_{i-4} + g_{syn}V_{i-3} + g_l V_i + \mathcal{N}_i$. (2.1)

Here, we let C = 1, gsyn = 3, gl = −5, and 𝒩i is gaussian noise with a standard deviation of 1. The connection pattern is shown in Figure 3A. There are 60 neurons in this circuit. The first 50 neurons are connected with a pattern in which neuron i projects to neurons i + 3 and i + 4. To make the case more realistic, aside from these 50 neurons that are visible, we added 10 more neurons (neuron 51 to 60 in Figure 3A) that are invisible during our estimations (only the membrane voltages of the first 50 neurons are used to estimate connectivity). These 10 neurons send latent inputs to the visible neurons and introduce external correlations into the system. Therefore, we update our passive neuron’s model as

Figure 3. Ground-truth connections from a passive neuron model and estimations from correlation-based methods. (A) Ground-truth connection matrix. Neurons 1 to 50 are observed neurons. Neurons 51 to 60 are unobserved neurons that introduce common inputs. (B) Estimation from the correlation method. (C) Estimation from the precision matrix. (D) Estimation from the sparse + latent regularized precision matrix. (E) Zoom-in of panel A. (F) Zoom-in of panel B. (G) Zoom-in of panel C. (H) Zoom-in of panel D.

$C \frac{dV_i}{dt} = g_{syn}V_{i-4} + g_{syn}V_{i-3} + g_{latent}V_{latent_1} + g_l V_i + \mathcal{N}_i$, (2.2)

where glatent = 10. We chose the latent input's strength to be on the same scale as the other connections in the network. We tested multiple values between [0, 10]; a higher value generates more interference and therefore makes the estimation more difficult.
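To make the setup concrete, below is a minimal simulation sketch of this passive neuron model (equations 2.1 and 2.2). The Euler-Maruyama step, the random choice of which visible neurons each latent neuron targets, and all variable names are our own illustrative assumptions, not the paper's released code:

```python
import numpy as np

rng = np.random.default_rng(0)
n_vis, n_lat = 50, 10          # 50 observed neurons, 10 latent neurons
dt, n_steps = 1e-3, 100_000
C, g_syn, g_l, g_lat = 1.0, 3.0, -5.0, 10.0

# G[i, j] is the conductance from neuron j onto neuron i (eq. 2.2).
G = g_l * np.eye(n_vis + n_lat)
for i in range(n_vis):
    for d in (3, 4):                      # neurons i-3 and i-4 project to i
        if i - d >= 0:
            G[i, i - d] += g_syn
# Latent neurons (indices 50-59) broadcast onto the visible population;
# which visible neurons each latent neuron targets is an assumption here.
G[:n_vis, n_vis:] = g_lat * (rng.random((n_vis, n_lat)) < 0.2)

V = np.zeros(n_vis + n_lat)
trace = np.empty((n_steps, n_vis))
for t in range(n_steps):
    noise = rng.standard_normal(n_vis + n_lat)              # std-1 gaussian noise
    V = V + (dt / C) * (G @ V) + (np.sqrt(dt) / C) * noise  # Euler-Maruyama step
    trace[t] = V[:n_vis]                  # only the first 50 neurons are "recorded"
```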

We added the latent inputs to the system because unobserved common inputs exist in real-world problems (Stevenson et al., 2008). For example, one could be using two-photon imaging to record calcium signals from a cortical circuit. The cortical circuit might receive synaptic inputs from deeper structures in the brain, such as the thalamus, which are not visible to the microscope. Each invisible neuron projects to many visible neurons, sending common synaptic currents into the cortical circuit and causing its neurons to be highly correlated. We will discuss how to remove the interference from the latent inputs.

2.1.2 Thalamocortical Model

To test and benchmark the differential covariance-based methods in a more realistic model, we simulated neural signals from a Hodgkin-Huxley–based spiking neural network. The thalamocortical model used in this study was based on several previous studies that modeled spindle and slow wave activity (Bazhenov et al., 2002; Chen et al., 2012; Bonjean et al., 2011). A schematic of the thalamocortical model in this work is shown in Figure 1. As shown, the thalamocortical model was structured as a one-dimensional, multilayer array of cells. The network consisted of 50 cortical pyramidal (PY) neurons, 10 cortical inhibitory (IN) neurons, 10 thalamic relay (TC) neurons, and 10 reticular (RE) neurons. The connections between the 50 PY neurons follow the pattern of the passive neuron model and are shown in Figure 7A. For the remaining connection types, a neuron connects to all target neurons within the radius listed in Table 1 (Bazhenov et al., 2002). The network is driven by spontaneous oscillations. Details of the model are explained in appendix C.

Figure 1. Network model for the thalamocortical interactions, with four layers of neurons: thalamocortical (TC), reticular nucleus (RE) of the thalamus, cortical pyramidal (PY), and inhibitory (IN) neurons. Filled circles indicate excitatory synapses; open circles indicate inhibitory synapses.

Figure 7. Analysis of the thalamocortical model with correlation-based methods. (A) Ground-truth connections of the PY neurons in the thalamocortical model. (B) Estimation from the correlation method. (C) Estimation from the precision matrix method. (D) Estimation from the sparse + latent regularized precision matrix method. (E) Zoom-in of panel A. (F) Zoom-in of panel B. (G) Zoom-in of panel C. (H) Zoom-in of panel D.

Table 1.

Connectivity Properties.

Connection  PY→TC  PY→RE  TC→PY  TC→IN  PY→IN  IN→PY  RE→RE  TC→RE  RE→TC
Radius      8      6      15     3      1      5      5      8      8

Each simulation ran for 600 seconds, and the data were sampled at 1000 Hz.

2.1.3 Local Field Potential (LFP) Model

To simulate local field potential (LFP) data, we expanded the previous thalamocortical model by a factor of 10 (see Figure 2). Each connection in the previous network (see Figure 1) was transformed into a fiber bundle connecting 10 pairs of neurons in a subnetwork. For the cortical pyramidal (PY) neurons, we connected each neuron to its four neighboring neurons inside the subnetwork. Other settings for the neurons and the synapses are the same as in the thalamocortical model. We simulated the network for 600 seconds, and the data were sampled at 1000 Hz.

Figure 2. The LFP model was transformed from the thalamocortical model. In this figure, we used the TC→PY connection in the red box as an example. Each neuron in the original thalamocortical model was expanded into a subnetwork of 10 neurons. Each connection in the original thalamocortical model was transformed into 10 parallel connections between two subnetworks. Moreover, subnetworks transformed from PY neurons have local connections: each PY neuron in a subnetwork connects to its two nearest neighbors on each side. We put an LFP electrode at the center of each PY subnetwork. The electrode received each neuron's signal with a weight inversely proportional to distance.

We placed an LFP electrode at the center of each of the 50 PY neuron local circuits. The distance between the 10 local neurons is 100 μm, and the distance between the PY subnetworks is 1 cm. The LFP is calculated according to the standard model in Bonjean et al. (2011), Destexhe (1998), and Nunez and Srinivasan (2005). The LFP is mainly contributed by the elongated dendrites of the cortical pyramidal neurons; in our model, each cortical pyramidal neuron has a 2 mm long dendrite.

For each LFP Si,

$S_i = \sum_j \left( \frac{I_{syn}}{r_d} - \frac{I_{syn}}{r_s} \right)_j$, (2.3)

where the sum is taken over all excitatory cortical neurons. Isyn is the postsynaptic current of neuron j, rd is the distance from the electrode to the center of the dendrite of neuron j, and rs is the distance from the electrode to the soma of neuron j.
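The electrode model above reduces to a weighted sum over neurons. A minimal sketch follows; the function name and the placeholder geometry in the usage comment are our assumptions:

```python
import numpy as np

def lfp_signal(i_syn, r_dendrite, r_soma):
    """Sketch of eq. 2.3.  i_syn: (n_neurons, n_samples) postsynaptic
    currents of the excitatory cortical neurons; r_dendrite, r_soma:
    (n_neurons,) distances from the electrode to each neuron's dendrite
    center and soma."""
    weights = 1.0 / r_dendrite - 1.0 / r_soma   # per-neuron dipole weight
    return weights @ i_syn                      # (n_samples,) LFP trace

# Illustrative usage with placeholder geometry (not the paper's exact layout):
# currents = np.random.randn(500, 1000)
# lfp = lfp_signal(currents,
#                  r_dendrite=np.full(500, 1.0e-3),
#                  r_soma=np.full(500, 3.0e-3))
```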

2.1.4 Calcium Imaging Model

Vogelstein et al. (2009) proposed a transfer function between spike trains and calcium fluorescence signals,

$[Ca]_t - [Ca]_{t-1} = -\frac{\Delta t}{\tau_{Ca}}[Ca]_{t-1} + A_{Ca}\, n_t$, $\quad F = \frac{[Ca]}{[Ca] + K_d} + \eta_t$, (2.4)

where ACa = 50 μM is the step influx of calcium at each action potential, nt is the number of spikes at each time step, Kd = 300 μM is the saturation concentration of calcium, and ηt is gaussian noise with a standard deviation of 3 × 10−6. Since our data are sampled at 1000 Hz, we can resolve every action potential, so no time step in our data contains multiple spikes. τCa = 1 s is the decay constant of the calcium concentration. To preserve the information in the differential signal, instead of applying a hard threshold that converts the intracellular voltages into binary spike trains, we use a sigmoid function to transform the voltages into the calcium influx activation parameter nt,

$n_t = \frac{1}{1 + e^{-(V(t) - V_{thre})}}$, (2.5)

where Vthre = −50 mV is the threshold potential.
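A short sketch of this forward transfer (equations 2.4 and 2.5), using the constants given in the text; the function name and the forward-Euler update are our assumptions:

```python
import numpy as np

def calcium_fluorescence(v, dt=1e-3, tau_ca=1.0, a_ca=50.0, k_d=300.0,
                         v_thre=-50.0, noise_std=3e-6, seed=0):
    """Sketch of eqs. 2.4 and 2.5: membrane voltage trace v (1-D array, mV)
    to a simulated fluorescence trace."""
    rng = np.random.default_rng(seed)
    n = 1.0 / (1.0 + np.exp(-(v - v_thre)))   # sigmoid spike proxy (eq. 2.5)
    ca = np.zeros_like(v)
    for t in range(1, len(v)):                # calcium dynamics (eq. 2.4)
        ca[t] = ca[t - 1] - (dt / tau_ca) * ca[t - 1] + a_ca * n[t]
    return ca / (ca + k_d) + noise_std * rng.standard_normal(len(v))
```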

In real experiments, we can image only a small percentage of the neurons in the brain. Therefore, we simulated HH-based networks of 1000 neurons and recorded from only 50 of them. We used the four connection patterns (normal-1 ~ normal-4) provided online (https://www.kaggle.com/c/connectomics) (Stetter, Battaglia, Soriano, & Geisel, 2012). Briefly, the networks have 80% excitatory neurons and 20% inhibitory neurons. The connection probability is 0.01 (each neuron connects to about 10 others), so the networks are sparsely connected.

Similar to our thalamocortical model, we used AMPA and NMDA synapses for the excitatory connections and GABA synapses for the inhibitory connections. The simulations ran for 600 seconds and were sampled at 1000 Hz. The intracellular voltages obtained from the simulations were then transformed into calcium fluorescence signals and downsampled to 50 Hz. For each of the four networks, we conducted 25 recordings, each containing calcium signals from 50 randomly selected neurons.

For accurate estimations, the differential covariance-based methods require the reconstructed action potentials from the calcium imaging. While this is an active research area and many methods have been proposed (Quan, Liu, Lv, Chen, & Zeng, 2010; Rahmati, Kirmse, Marković, Holthoff, & Kiebel, 2016), in this study, we simply inverted the transfer function. Assuming the transfer function from action potentials to calcium fluorescence signals is known, we can reconstruct the action potentials. Given $\hat{F}$ as the observed fluorescence signal,

$[\hat{Ca}] = \hat{F} K_d / (1 - \hat{F})$, $\quad \hat{n}_t = \left( d[\hat{Ca}]/dt + [\hat{Ca}]_t/\tau_{Ca} \right) / (A_{Ca}/\Delta t)$, $\quad \hat{V} = V_{thre} - \log(1/\hat{n}_t - 1)$. (2.6)
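A corresponding sketch of the inverse transfer (equation 2.6) under the same assumptions; the clipping constant `eps` and the use of a central-difference gradient are our choices to keep the inversion numerically defined:

```python
import numpy as np

def invert_fluorescence(f, dt=1e-3, tau_ca=1.0, a_ca=50.0, k_d=300.0,
                        v_thre=-50.0, eps=1e-9):
    """Sketch of eq. 2.6: fluorescence -> calcium -> n_t -> voltage."""
    f = np.clip(f, eps, 1.0 - eps)
    ca = f * k_d / (1.0 - f)               # invert F = Ca / (Ca + K_d)
    dca = np.gradient(ca, dt)              # d[Ca]/dt via central differences
    n = (dca + ca / tau_ca) / (a_ca / dt)  # invert the calcium update (eq. 2.4)
    n = np.clip(n, eps, 1.0 - eps)
    return v_thre - np.log(1.0 / n - 1.0)  # invert the sigmoid (eq. 2.5)
```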

2.2 Differential Covariance-Based Methods

In this section, we introduce a new class of methods to estimate the functional connectivity of neurons (code is provided online at https://github.com/tigerwlin/diffCov).

2.2.1 Step 1: Differential Covariance

The input to the method, V(t), is an N × T neural recording data set. N is the number of neurons or channels recorded, and T is the number of data samples during recordings. We compute the derivative of each time series with dV(t) = (V(t + 1) −V(t − 1))/(2dt). Then the covariance between V(t) and dV(t) is computed and denoted as ΔC, which is an N × N matrix defined as

$\Delta C_{i,j} = \mathrm{cov}(dV_i(t), V_j(t))$, (2.7)

where dVi(t) is the differential signal of neuron/channel i, Vj (t) is the signal of neuron/channel j, and cov() is the sample covariance function for two time series. In appendix A, we provide a theoretical proof about why the differential covariance estimator generates fewer false connections than the covariance-based methods do.
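A minimal sketch of this step; the alignment of V with the central-difference dV and the sample-covariance normalization are our implementation choices:

```python
import numpy as np

def differential_covariance(v, dt=1e-3):
    """Sketch of eq. 2.7.  v: (N, T) recordings; returns delta_c with
    delta_c[i, j] = cov(dV_i(t), V_j(t))."""
    dv = (v[:, 2:] - v[:, :-2]) / (2 * dt)          # central difference
    v_mid = v[:, 1:-1]                              # align V with dV in time
    dv = dv - dv.mean(axis=1, keepdims=True)
    v_mid = v_mid - v_mid.mean(axis=1, keepdims=True)
    return dv @ v_mid.T / (v_mid.shape[1] - 1)      # sample covariance
```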

2.2.2 Step 2: Applying Partial Covariance Method

As Stevenson et al. (2008) previously mentioned, one problem of the correlation method is the propagation of correlation. Here we designed a customized partial covariance algorithm to reduce this type of error in our methods. We use ΔPi, j to denote the differential covariance estimation after applying the partial covariance method.

Using the derivation from section B.2, we have

$\Delta P_{i,j} = \Delta C_{i,j} - COV_{j,Z} \cdot COV_{Z,Z}^{-1} \cdot \Delta C_{i,Z}^T$, (2.8)

where Z is a set of all neurons/channels except {i, j}:

$Z = \{1, 2, \ldots, i-1, i+1, \ldots, j-1, j+1, \ldots, N\}$.

ΔCi,j and ΔCi,Z were computed in section 2.2.1, and COVZ,Z is the covariance matrix of the neurons in set Z. COVj,Z is a row vector denoting the covariance of neuron j with the neurons in set Z. · is the matrix dot product.

As we explain in section B.2, the partial covariance of the differential covariance is not equivalent to the inverse of the differential covariance estimation. The two are equivalent only for the covariance matrix and when the partial correlation is controlling on all variables in the observed set. In our case, the differential covariance matrix is nonsymmetric because it is the covariance between recorded signals and their differential signals. We have signals and differential signals in our observed set; however, we are controlling only on the original signals for the partial covariance algorithm. Due to these differences, we developed this customized partial covariance algorithm, equation 2.8, which performs well for neural signals in the form of equation 2.2.
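A naive sketch of equation 2.8 that loops over all pairs (i, j); it favors clarity over speed, and the per-pair linear solve is our implementation choice:

```python
import numpy as np

def partial_diff_covariance(v, delta_c):
    """Sketch of eq. 2.8.  v: (N, T) recordings; delta_c: the N x N
    differential covariance from step 1."""
    n = v.shape[0]
    v0 = v - v.mean(axis=1, keepdims=True)
    cov = v0 @ v0.T / (v.shape[1] - 1)              # covariance of V
    delta_p = delta_c.copy()
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            z = [k for k in range(n) if k not in (i, j)]
            # w = COV_{Z,Z}^{-1} . delta_C_{i,Z}
            w = np.linalg.solve(cov[np.ix_(z, z)], delta_c[i, z])
            delta_p[i, j] = delta_c[i, j] - cov[j, z] @ w
    return delta_p
```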

2.2.3 Step 3: Sparse Latent Regularization

Finally, we applied the sparse + latent regularization method to the partial covariance version of the differential covariance (Chandrasekaran, Sanghavi, Parrilo, & Willsky, 2011; Yatsenko et al., 2015). As explained in section B.3, the sparse + latent regularization assumes that a network contains observed neurons and unobserved common inputs. If the connections between the observed neurons are sparse and the number of unobserved common inputs is small, this method can separate the covariance into two parts, where the sparse matrix contains the intrinsic connections between the observed neurons.

Here we define ΔS as the sparse result and L as the low-rank result of the method. Then we solve

$\arg\min_{\Delta S, L} \|\Delta S\|_1 + \alpha\, \mathrm{tr}(L)$ (2.9)

under the constraint that

ΔP=ΔS+L, (2.10)

where $\|\cdot\|_1$ is the L1-norm of a matrix and tr() is the trace of a matrix. α is the penalty ratio between the L1-norm of ΔS and the trace of L; it was set to 1/N for all our estimations. ΔP is the partial differential covariance computed in section 2.2.2. The result, ΔS, is a sparse estimate of the connectivity.
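The following is a heuristic alternating-thresholding sketch of this decomposition. Two hedges: the trace penalty of equation 2.9 is replaced here by a singular-value (nuclear-norm) threshold, which coincides with the trace only for a positive semidefinite L, and the sparse threshold `lam` and fixed iteration count are tuning knobs of this sketch rather than part of the published method (an exact solver for the convex program, such as ADMM, would be more faithful):

```python
import numpy as np

def sparse_latent_split(delta_p, alpha=None, lam=0.05, n_iter=200):
    """Heuristic split of delta_p into sparse S plus low-rank L (eq. 2.10)."""
    n = delta_p.shape[0]
    alpha = 1.0 / n if alpha is None else alpha   # penalty ratio from the text
    S = np.zeros_like(delta_p)
    L = np.zeros_like(delta_p)
    for _ in range(n_iter):
        # Low-rank update: singular-value soft threshold of the residual.
        u, s, vt = np.linalg.svd(delta_p - S, full_matrices=False)
        L = (u * np.maximum(s - alpha, 0.0)) @ vt
        # Sparse update: entrywise soft threshold of the remaining residual.
        r = delta_p - L
        S = np.sign(r) * np.maximum(np.abs(r) - lam, 0.0)
    return S, L
```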

2.2.4 Computing Derivative

In differential covariance, computing the derivative using dV(t) = (V(t + 1) − V(t − 1))/(2dt) provides better estimation results than using dV(t + 1) = (V(t + 1) − V(t))/dt. To elaborate on this point, we first explain our method's connection to the autoregressive (AR) method.

The multivariate autoregressive model and its estimators are discussed in detail in Harrison et al. (2003). In discrete time, our differential covariance estimator is similar to the mean squared error (MSE) estimator of the AR model. Following the definition in equation 2.2, we have

$C \frac{dV(t)}{dt} = G \cdot V(t) + \mathcal{N}$, (2.11)

where V(t) are neurons’ membrane voltages. G is the connection matrix that describes the conductance between each pair of neurons. 𝒩 is the gaussian noise.

For dV(t) = (V(t + 1) −V(t − 1))/(2dt), we note here:

$C \frac{dV(t)}{dt} = C \frac{V(t+1) - V(t-1)}{2\Delta t} = G \cdot V(t) + \mathcal{N}$,
$V(t+1) = \frac{2\Delta t}{C} G \cdot V(t) + V(t-1) + \frac{2\Delta t}{C}\mathcal{N}$. (2.12)

As Harrison et al. (2003) explained, in the AR model, the MSE estimator of $G$ is $\frac{(V(t+1)-V(t-1))V(t)^T}{V(t)V(t)^T}$ (up to the constant $2\Delta t/C$). We note that the numerator of this MSE estimator is our differential covariance estimator. Therefore, in this case, the model we proposed is equivalent to a regularized AR(2) model, where the transition matrix of the second order is restricted to be an identity matrix.

In the dV(t) = (V(t + 1) −V(t))/dt case, we note that the differential covariance estimator is similar to an AR(1) estimator,

$C \frac{dV(t)}{dt} = C \frac{V(t+1) - V(t)}{\Delta t} = G \cdot V(t) + \mathcal{N}$,
$V(t+1) = \left( \frac{\Delta t}{C} G + I \right) \cdot V(t) + \frac{\Delta t}{C}\mathcal{N}$, (2.13)

and the MSE estimator of $\left( \frac{\Delta t}{C}G + I \right)$ is $\frac{V(t+1)V(t)^T}{V(t)V(t)^T}$. Therefore:

$\frac{\Delta t}{C} G = \frac{V(t+1)V(t)^T}{V(t)V(t)^T} - I = \frac{(V(t+1) - V(t))V(t)^T}{V(t)V(t)^T}$, (2.14)

where the numerator of this MSE estimator is our differential covariance estimator.

As explained above, using different methods to compute the derivative will produce different differential covariance estimators, and they are equivalent to estimators from different AR models for the connection matrix. In section 3, we show that the performances of these two estimators are significantly different.
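The two estimators can be compared directly; in the sketch below, both least-squares solutions share the denominator $V(t)V(t)^T$, and only the numerator (the differential covariance variant) changes. The function name and the matrix-inverse formulation are our assumptions:

```python
import numpy as np

def ar_connection_estimates(v):
    """Sketch of the AR view in section 2.2.4 for recordings v of shape (N, T):
    both difference schemes give least-squares estimators whose numerators
    are the two differential covariance variants."""
    v_mid = v[:, 1:-1]                           # V(t)
    denom = v_mid @ v_mid.T                      # V(t) V(t)^T
    num_ar2 = (v[:, 2:] - v[:, :-2]) @ v_mid.T   # central-difference numerator
    num_ar1 = (v[:, 2:] - v[:, 1:-1]) @ v_mid.T  # forward-difference numerator
    g_ar2 = num_ar2 @ np.linalg.inv(denom)       # estimates (2*dt/C) G (eq. 2.12)
    g_ar1 = num_ar1 @ np.linalg.inv(denom)       # estimates (dt/C) G   (eq. 2.14)
    return g_ar2, g_ar1
```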

2.3 Performance Quantification

The performance of each method is judged by four quantified values. The first three values indicate the method’s abilities to reduce the three types of false connections (see Figure 4). The last one indicates the method’s ability to correctly estimate the true positive connections against all possible interference.

Figure 4. Illustrations of the three types of false connections in the correlation-based methods. Solid lines indicate the physical wiring between neurons, and the filled circles at the end indicate the synaptic contacts (i.e., the direction of the connections). The dotted lines are the false connections introduced by the correlation-based methods. (A) Type 1 false connections, which are due to two neurons receiving the same synaptic inputs. (B) Type 2 false connections, which are due to the propagation of correlation. (C) Type 3 false connections, which are due to unobserved common inputs.

We define G as the ground-truth connectivity matrix, where:

$G_{i,j} = \begin{cases} 1, & \text{if neuron } i \text{ projects to neuron } j \text{ with an excitatory synapse} \\ -1, & \text{if neuron } i \text{ projects to neuron } j \text{ with an inhibitory synapse} \\ 0, & \text{otherwise.} \end{cases}$ (2.15)

Then we can use a three-dimensional tensor to represent the false connections caused by common inputs. For example, neurons j and k receive common input from neuron i:

$M_{i,j,k} = \begin{cases} 1, & \text{iff } G_{i,j} = 1 \text{ and } G_{i,k} = 1 \\ 0, & \text{otherwise.} \end{cases}$ (2.16)

Therefore, we can compute a mask that labels all of the type 1 false connections:

$mask1_{j,k} = \sum_{i \in \{\text{observed neurons}\}} M_{i,j,k}$. (2.17)

For the type 2 false connections (e.g., neuron i projects to neuron k; then neuron k projects to neuron j), the mask is defined as

$mask2_{i,j} = \sum_{k \in \{\text{observed neurons}\}} G_{i,k} G_{k,j}$ (2.18)

or, in simple matrix notation,

mask2=G·G. (2.19)

Similar to mask1, the false connections caused by unobserved common inputs are

$mask3_{j,k} = \sum_{i \in \{\text{unobserved neurons}\}} M_{i,j,k}$. (2.20)

Finally, mask4 is defined as

$mask4_{i,j} = \begin{cases} 1, & \text{if } G_{i,j} = 0 \\ 0, & \text{otherwise.} \end{cases}$ (2.21)

Given a connectivity matrix estimation result, Est, the four values for the performance quantification are computed as the area under the ROC curve for two sets: the true positive set and the false positive set:

$Est_{i,j} \in \{\text{true positive set}\}_k$, if $G_{i,j} \neq 0$ and $mask_{k\,i,j} = 0$;
$Est_{i,j} \in \{\text{false positive set}\}_k$, if $mask_{k\,i,j} \neq 0$ and $G_{i,j} = 0$. (2.22)
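Below is a sketch of how the masks could be computed from a signed ground-truth matrix; restricting the type 1 and type 3 masks to excitatory fan-out follows equation 2.16, while the loop structure and function name are our own:

```python
import numpy as np

def error_masks(g_full, n_obs):
    """Sketch of eqs. 2.16-2.21.  g_full: signed ground-truth matrix over all
    neurons, with the first n_obs rows/columns observed."""
    g = g_full[:n_obs, :n_obs]
    mask1 = np.zeros((n_obs, n_obs))      # type 1: shared observed input
    mask3 = np.zeros((n_obs, n_obs))      # type 3: shared unobserved input
    for i in range(g_full.shape[0]):
        targets = np.flatnonzero(g_full[i, :n_obs] == 1)  # excitatory fan-out
        block = mask1 if i < n_obs else mask3
        for j in targets:
            block[j, targets] += 1        # M[i, j, k] summed over sources i
    mask2 = g @ g                         # type 2: two-step propagation
    mask4 = (g == 0).astype(float)        # all non-connections
    return mask1, mask2, mask3, mask4
```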

3 Results

3.1 False Connections in Correlation-Based Methods

When applied to neural circuits, the commonly used correlation-based methods produce systematic false connections. As shown, Figure 3A is the ground truth of the connections in our passive neuron model (neurons 1 to 50 are the observed neurons). Figure 3B is from the correlation method, Figure 3C is the precision matrix, and Figure 3D is the sparse + latent regularized precision matrix. As shown, all of these methods produce extra false connections.

For convenience of explanation, we define the diagonal strips of connections in the ground truth (the first 50 neurons in Figure 3A) as the −3 and −4 diagonal lines, because they are three and four steps away from the main diagonal of the matrix. As shown in Figure 3, all of these methods produce false connections on the ±1 diagonal lines. The precision matrix method (see Figure 3C) also produces a square-box-shaped pattern of false connections in the background.

3.1.1 Type 1 False Connections

The type 1 false connections shown in Figure 4A are produced because two neurons receive the same input from another neuron. The same synaptic current that passes into the two neurons generates a positive correlation between the two neurons. However, there is no physiological connection between these two neurons. In our connection pattern (see Figure 3A), we notice that two neurons next to each other receive common synaptic inputs; therefore, there are false connections on the ±1 diagonal lines of the correlation-based estimations.

3.1.2 Type 2 False Connections

The type 2 false connections shown in Figure 4B are due to the propagation of correlation. Because one neuron VA connects to another neuron VB and neuron VB connects to another neuron VC, the correlation method reports a correlation between VA and VC, which have no physical connection. This phenomenon appears in Figure 3B as the extra diagonal strips. This problem can be greatly reduced by the precision matrix/partial covariance method, as shown in Figure 3C.

3.1.3 Type 3 False Connections

The type 3 false connections shown in Figure 4C are also due to common currents that pass into two neurons. However, in this case, they come from unobserved neurons. For this particular passive neuron model, they are due to the inputs from the 10 unobserved neurons (neurons 51–60) shown in Figure 3A. Because the latent neurons have broad connections to the observed neurons, they introduce a square-box-shaped correlation pattern into the estimations. (See Figure 3C. Figure 3B also contains this error, but it is hard to see.) Since the latent neurons are not observable, the partial covariance method cannot be used to regress out this type of correlation. The sparse + latent regularization can be applied if the sparse and low-rank assumption is valid, and the regularized result is shown in Figure 3D. However, even after this regularization, the correlation-based methods still leave false connections in Figure 3D.

3.2 Estimations from Differential Covariance-Based Methods

Comparing the ground-truth connections in Figure 5A with our final estimation in Figure 5D, we see that our methods essentially transformed the connections in the ground truth into a map of sources and sinks in a network. An excitatory connection, i → j, in our estimations has a negative value for ΔSij and a positive value for ΔSji, which means the current comes out of the source i and goes into the sink j. We note that there is another ambiguous case, an inhibitory connection j → i, which produces the same result in our estimations. Our methods cannot differentiate these two cases; instead, they indicate sources and sinks in a network.

Figure 5. Differential covariance analysis of the passive neuron model. The color in panels B, C, D, F, G, and H indicates the direction of the connections. For element Aij, a warm color indicates i is the sink and j is the source (i → j), and a cool color indicates j is the sink and i is the source (j → i). (A) Ground-truth connection matrix. (B) Estimation from the differential covariance method. (C) Estimation from the partial differential covariance method. (D) Estimation from the sparse + latent regularized partial differential covariance method. (E) Zoom-in of panel A. (F) Zoom-in of panel B. (G) Zoom-in of panel C. (H) Zoom-in of panel D.

3.2.1 The Differential Covariance Method Reduces Type 1 False Connections

By comparing Figure 3B with Figure 5B, we see that the type 1 false connections on the ±1 diagonal lines of Figure 3B are reduced in Figure 5B. This reflects the theorems we prove in appendix A, in particular theorem 5, which shows that the strength of the type 1 false connections is reduced in the differential covariance method by a factor of gl/gsyn. The performance of the differential covariance method in reducing type 1 false connections is quantified in Figure 6.

Figure 6. Performance quantification (area under the ROC curve) of different methods with respect to their abilities to reduce the three types of false connections and their abilities to estimate the true positive connections using the passive neuron data set.

3.2.2 The Partial Covariance Method Reduces Type 2 False Connections

Second, we see that due to the propagation of correlation, there are extra diagonal strips in Figure 5B. These are removed in Figure 5C by applying the partial covariance method. Each estimator’s performance for reducing type 2 false connections is quantified in Figure 6.

3.2.3 The Sparse + Latent Regularization Reduces Type 3 False Connections

Third, the sparse + latent regularization removes the correlations introduced by the latent inputs. When the observed neurons' connections are sparse and the number of unobserved common inputs is small, the covariance introduced by the unobserved common inputs can be removed. As shown in Figure 5D, the external covariance in the background of Figure 5C is removed, while the true diagonal connections and the directionality of the connections are maintained. This regularization is also effective for correlation-based methods, but type 1 false connections remain in the estimation even after applying it (see Figure 3D). Each estimator's performance in reducing type 3 false connections is quantified in Figure 6.

3.2.4 The Differential Covariance-Based Methods Provide Directionality Information of the Connections

Using this passive neuron model, in section A.3, we provide a mathematical explanation for why the differential covariance-based methods provide directional information for the connections. Given an excitatory connection gij (neuron i projects to neuron j), from theorem 6 in section A.3, we have

$\Delta C_{j,i} > 0, \quad \Delta C_{i,j} < 0$. (3.1)

We note here that there is another ambiguous setting that provides the same result, which is an inhibitory connection gji. Conceptually, the differential covariance indicates the current sources and sinks in a neural circuit, but the exact type of synapse is unknown.

3.2.5 Performance Quantification for the Passive Neuron Data Set

In Figure 6, we quantified the performance of the estimators for one example data set. With increasing sample size, our differential covariance-based methods reduce the three types of false connections while maintaining high true positive rates. Also, as we apply more advanced techniques (ΔC → ΔP → ΔS), the estimator's performance increases in all four quantification indices. Although the precision matrix and the sparse + latent regularization help the correlation method reduce type 2 and type 3 errors, all correlation-based methods handle the type 1 false connections poorly. We also note that the masks used to quantify the types of false connections are not mutually exclusive (a false connection can belong to more than one type). Therefore, in Figure 6, a single regularization can appear to reduce multiple types of false connections; for example, the sparse + latent regularization reduces both type 2 and type 3 false connections.

In Table 2, we provide quantified results (area under the ROC curve) for two connection patterns (cxcx34 and cxcx56789) and three conductance settings (g5, g30, and g50). The key results from Figure 6 generalize here: applying more advanced techniques to the original differential covariance estimator (ΔC → ΔP → ΔS) improves performance with respect to the three types of error without sacrificing the true positive rate. We also note that although the precision matrix and the sparse + latent regularization help the correlation method reduce type 2 and type 3 errors, all correlation-based methods handle the type 1 error poorly.

Table 2.

Performance Quantification (Area under the ROC Curve) of Different Methods with Respect to Their Abilities to Reduce the Three Types of False Connections and Their Abilities to Estimate the True Positive Connections under Five Different Passive Neuron Model Settings.

                 Cov      Precision  Precision+SL-reg  ΔC       ΔP       ΔS
cxcx34 g5
  Error 1        0        0          0                 0.6327   0.3469   0.8776
  Error 2        0.1469   0.9520     0.9915            0.3757   0.8347   1.0000
  Error 3        0.4638   0.9362     0.9797            0.6541   0.8391   0.9986
  True positive  0.7312   1.0000     1.0000            0.9677   0.9946   1.0000
cxcx34 g30
  Error 1        0        0          0                 0.0510   0.5816   0.9490
  Error 2        0.0056   0.8927     0.9972            0.2881   0.9548   1.0000
  Error 3        0.2164   0.9188     0.9942            0.5430   0.9662   1.0000
  True positive  0.5591   1.0000     0.9892            0.6559   1.0000   1.0000
cxcx34 g50
  Error 1        0        0          0                 0        0.2041   0.6531
  Error 2        0        0.7034     0.9944            0.0523   0.9054   1.0000
  Error 3        0.3179   0.8000     0.9894            0.4145   0.9309   1.0000
  True positive  0.9140   1.0000     0.9946            0.9516   0.9785   1.0000
cxcx56789 g5
  Error 1        0        0.0053     0.0053            0.6895   0.6263   0.8526
  Error 2        0.1896   0.6229     0.7896            0.5240   0.7896   0.9938
  Error 3        0.3573   0.6085     0.7659            0.6957   0.7591   0.9817
  True positive  0.6884   0.9442     0.6674            0.9930   0.9605   0.9837
cxcx56789 g50
  Error 1        0        0          0                 0.0263   0.2816   0.6842
  Error 2        0.1083   0.5312     0.8240            0.2844   0.6990   0.9979
  Error 3        0.4256   0.4927     0.7762            0.5091   0.7116   0.9835
  True positive  0.9256   0.9116     0.6698            0.9395   0.9279   0.9419

3.3 Thalamocortical Model Results

We tested the methods further in a more realistic Hodgkin-Huxley–based model. Because the synaptic conductances in the model are no longer constants but nonlinear dynamic functions of the presynaptic voltages, the previous derivations can be considered only a first-order approximation.

The ground-truth connections between the cortical neurons are shown in Figure 7. These neurons also receive latent inputs from and send feedback currents to inhibitory neurons in the cortex (IN) and thalamic neurons (TC). For clarity of representation, these latent connections are not shown here; the detailed connections are described in section 2.

Similar to the passive neuron model, in Figure 7B the correlation method still suffers from the three types of false connections. As shown, the latent inputs generate false correlations in the background, and the ±1 diagonal false connections, which are due to the common currents, appear in all correlation-based methods (see Figures 7B–7D). Comparing Figures 7C and 7D, because the type 1 false connections are strong in the Hodgkin-Huxley–based model, the sparse + latent regularization removed the true connections but kept these false connections in its final estimation.

As shown in Figure 8B, the differential covariance method reduces the type 1 false connections. In Figure 8C, the partial differential covariance method reduces the type 2 false connections seen in Figure 8B (the yellow connections around the red strip). Finally, in Figure 8D, the sparse + latent regularization removes the external covariance in the background of Figure 8C. The current sources (positive values, red) and current sinks (negative values, blue) in the network are also indicated by our estimators.

Figure 8. Analysis of the thalamocortical model with differential covariance-based methods. The color in panels B, C, D, F, G, and H indicates the direction of the connections. For element Aij, a warm color indicates i is the sink and j is the source (i → j); a cool color indicates j is the sink and i is the source (j → i). (A) Ground-truth connection matrix. (B) Estimation from the differential covariance method. (C) Estimation from the partial differential covariance method. (D) Estimation from the sparse + latent regularized partial differential covariance method. (E) Zoom-in of panel A. (F) Zoom-in of panel B. (G) Zoom-in of panel C. (H) Zoom-in of panel D.

In Figure 9, each estimator’s performance on each type of false connections is quantified. In this example, our differential covariance-based methods achieve similar performance to the GLM method.

Figure 9. Performance quantification (area under the ROC curve) of different methods with respect to their abilities to reduce the three types of false connections and their abilities to estimate the true positive connections using the thalamocortical data set.

3.4 Simulated LFP Results

For population recordings, our methods perform similarly to the thalamocortical model example. While the correlation-based methods still suffer from type 1 false connections (see Figure 10), our differential covariance-based methods reduce all three types of false connections (see Figure 11). In Figure 12, each estimator's performance on the LFP data is quantified. In this example, with sufficient data samples, our differential covariance-based methods achieve performance similar to that of the GLM method; for smaller sample sizes, our new methods perform better.

Figure 10. Analysis of the simulated LFP data with correlation-based methods. (A) Ground-truth connection matrix. (B) Estimation from the correlation method. (C) z-score of the correlation matrix. (D) Estimation from the precision matrix method. (E) Estimation from the sparse + latent regularized precision matrix method. (F) Zoom-in of panel A. (G) Zoom-in of panel B. (H) Zoom-in of panel C. (I) Zoom-in of panel D. (J) Zoom-in of panel E.

Figure 11. Analysis of the simulated LFP data with differential covariance-based methods. The color in panels B, C, D, F, G, and H indicates the direction of the connections. For element Aij, a warm color indicates i is the sink and j is the source (i → j); a cool color indicates j is the sink and i is the source (j → i). (A) Ground-truth connection matrix. (B) Estimation from the differential covariance method. (C) Estimation from the partial differential covariance method. (D) Estimation from the sparse + latent regularized partial differential covariance method. (E) Zoom-in of panel A. (F) Zoom-in of panel B. (G) Zoom-in of panel C. (H) Zoom-in of panel D.

Figure 12. Performance quantification (area under the ROC curve) of different methods with respect to their abilities to reduce the three types of false connections and their abilities to estimate the true positive connections using the simulated LFP data set.

3.5 Simulated Calcium Imaging Results

Because current techniques allow recording of only a small percentage of neurons in the brain, we tested our methods on a calcium imaging data set of 50 neurons recorded from networks of 1000 neurons. In this example, our differential covariance-based methods (see Figure 14) match better with the ground truth than the correlation-based methods (see Figure 13).

Figure 14. Analysis of the simulated calcium imaging data set with differential covariance-based methods. The color in panels B–D indicates the direction of the connections. For element Aij, a warm color indicates i is the sink and j is the source (i → j); a cool color indicates j is the sink and i is the source (j → i). (A) Ground-truth connection matrix. (B) Estimation from the differential covariance method. (C) Estimation from the partial differential covariance method. (D) Estimation from the sparse + latent regularized partial differential covariance method. For clarity, panels B, C, and D are thresholded to show only the strongest connections, so they can be compared with the ground truth.

Figure 13. Analysis of the simulated calcium imaging data set with correlation-based methods. (A) Ground-truth connection matrix. (B) Estimation from the correlation method. (C) Estimation from the precision matrix method. (D) Estimation from the sparse + latent regularized precision matrix method. For clarity, panels B, C, and D are thresholded to show only the strongest connections, so one can compare them with the ground truth.

We performed 25 sets of recordings, each with 50 randomly selected neurons, in each of the four large networks and quantified the results. The markers in Figure 15 are the average area under the ROC curve across the 100 recording sets, and the error bars indicate the standard deviations across these 100 sets. Our differential covariance-based methods perform better than the GLM method, and the performance differences seem to be greater in situations with fewer data samples.

4 Discussion

4.1 Generalizability and Applicability of the Differential Covariance-Based Methods to Real Experimental Data

Many methods have been proposed to solve the problem of reconstructing the connectivity of a neural network. Winterhalder et al. (2005) reviewed the nonparametric methods and Granger causality–based methods. Much progress has been made recently using the kinetic Ising model and Hopfield network (Huang, 2013; Dunn & Roudi, 2013; Battistin, Hertz, Tyrcha, & Roudi, 2015; Capone, Filosa, Gigante, Ricci-Tersenghi, & Del Giudice, 2015; Roudi & Hertz, 2011) with sparsity regularization (Pernice & Rotter, 2013). The GLM method (Okatan et al., 2005; Truccolo et al., 2005; Pillow et al., 2008) and the maximum entropy method (Schneidman, Berry, Segev, & Bialek, 2006) are two popular classes of methods and the main modern approaches for modeling multiunit recordings (Roudi, Dunn, & Hertz, 2015).

In current research, ever larger populations of neurons are being recorded, and new data analysis techniques are needed to handle bigger data with higher dimensionality. The field favors algorithms that require fewer samples and scale well with dimensionality without sacrificing accuracy. Also, an algorithm that is model free, or makes minimal assumptions about the hidden structure of the data, has the potential to be applied to multiple types of neural recordings.

The key difference between our methods and others is that we use the relationship between a neuron's differential voltage and the recorded voltages rather than the relationships among the voltages alone. This provides better performance because the differential voltage is a proxy for a neuron's synaptic current, and the relationship between a neuron's synaptic current and its input voltages is more linear, which suits data analysis techniques like the covariance method. While this linear relationship holds exactly only for our passive neuron model, we still see similar or better performance of our methods in our examples based on the Hodgkin-Huxley model, where we relaxed this assumption and allowed loops in the networks. This implies that this class of methods remains applicable even when the ion channels' conductances vary nonlinearly with the voltages, so that the linear relationship holds only weakly.

4.2 Caveats and Future Directions

One open question for the differential covariance-based methods is how to better handle neural signals that are nonlinearly transformed from the intracellular voltages. Currently, to achieve good performance in the calcium imaging example, we need to assume that the exact transfer function is known and reconstruct the action potentials by inverting it. We find this inverse-transform approach to be sensitive to additive gaussian noise. Further study is needed to find a better way to preprocess calcium imaging data for the differential covariance-based methods.

Across our simulations, the differential covariance method has better performance, but it requires fewer data samples in some simulations and not in others. Further investigation is needed to understand where the improvement comes from.

Our Hodgkin-Huxley simulations did not include axonal or synaptic delays, a critical feature of a real neural circuit. Unfortunately, it is nontrivial to add this feature to our Hodgkin-Huxley model. Nevertheless, we tested our methods with the passive neuron model using the same connection patterns but with random synaptic delays between neurons. In appendix D, we show that for up to a 10 ms uniformly distributed synaptic delay pattern, our methods still outperform the correlation-based methods.


Acknowledgments

We thank Thomas Liu and all members of the Computational Neurobiology Lab for helpful feedback. This research was supported by ONR MURI grants (N000141310672 and N00014-13-1-0205), the Swartz Foundation, and the Howard Hughes Medical Institute.

Appendix A: Differential Covariance Derivations

In this appendix, we first build a simple three-neuron network to demonstrate that our differential covariance-based methods can reduce type 1 false connections. Then we develop a generalized theory showing that the type 1 false connections' strength is always lower in our differential covariance-based methods than in the original correlation-based methods.

Figure 15. Performance quantification (area under the ROC curve) of different methods with respect to their abilities to reduce the three types of false connections and their abilities to estimate the true positive connections using the simulated calcium imaging data set. The error bar is the standard deviation across 100 sets of experiments. Each experiment randomly recorded 50 neurons in a large network. The markers on the plots indicate the average area under the ROC curve values across the 100 sets of experiments.

A.1 A Three-Neuron Network

Let us assume a network of three neurons, where neuron A projects to neurons B and C:

$I_A = dV_A/dt = g_l V_A + \mathcal{N}_A$,
$I_B = dV_B/dt = g_1 V_A + g_l V_B + \mathcal{N}_B$,
$I_C = dV_C/dt = g_2 V_A + g_l V_C + \mathcal{N}_C$. (A.1)

Here, the cell conductance is gl, neuron A’s synaptic connection strength to neuron B is g1, and neuron A’s synaptic connection strength to neuron C is g2. 𝒩A, 𝒩B, 𝒩C are independent white gaussian noises.

From equation 18 of Fan, Shan, Yuan, and Ren (2011), we can derive the covariance matrix of this network:

$\mathrm{vec}(COV) = -(G \otimes I_n + I_n \otimes G)^{-1} (D \otimes D)\, \mathrm{vec}(I_m)$, (A.2)

where

$G = \begin{bmatrix} g_l & g_1 & g_2 \\ 0 & g_l & 0 \\ 0 & 0 & g_l \end{bmatrix}^T$ (A.3)

is the transpose of the ground-truth connection of the network. And

$D = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$ (A.4)

since each neuron receives independent noise. In is an identity matrix of the size of G, and Im is an identity matrix of the size of D. ⊗ is the Kronecker product, and vec() is the column vectorization function.

Therefore, we have the covariance matrix of the network as:

$COV = \begin{bmatrix} -1/(2g_l) & g_1/(4g_l^2) & g_2/(4g_l^2) \\ g_1/(4g_l^2) & -(g_1^2 + 2g_l^2)/(4g_l^3) & -(g_1 g_2)/(4g_l^3) \\ g_2/(4g_l^2) & -(g_1 g_2)/(4g_l^3) & -(g_2^2 + 2g_l^2)/(4g_l^3) \end{bmatrix}$. (A.5)

When computing the differential covariance, we plug in equation A.1—for example:

$COV(I_C, V_B) = g_2 COV(V_A, V_B) + g_l COV(V_C, V_B)$. (A.6)

Therefore, from equation A.5, we can compute the differential covariance as

$\Delta P = \begin{bmatrix} -1/2 & g_1/(4g_l) & g_2/(4g_l) \\ -g_1/(4g_l) & -1/2 & 0 \\ -g_2/(4g_l) & 0 & -1/2 \end{bmatrix}$. (A.7)

Notice that because the ratio between COV(VA, VB) and COV(VC, VB) is –gl/g2, in differential covariance, the type 1 false connection COV(IC, VB) has value 0.
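This derivation can be checked numerically: the stationary covariance of the linear system in equation A.1 solves the Lyapunov equation $G \cdot COV + COV \cdot G^T = -I$ (the vectorized form of equation A.2), and the differential covariance is then $G \cdot COV$. The g1, g2 values below are illustrative, and SciPy is assumed available:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

gl, g1, g2 = -5.0, 3.0, 2.0                 # g1, g2 chosen for illustration
G = np.array([[gl, 0.0, 0.0],
              [g1, gl, 0.0],
              [g2, 0.0, gl]])               # system matrix of eq. A.1
sigma = solve_continuous_lyapunov(G, -np.eye(3))  # stationary COV (eq. A.5)
delta = G @ sigma                           # Cov[dV_i/dt, V_j] for all pairs
print(np.round(delta, 6))                   # entry (2, 1), i.e. (C, B), is ~0
```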

A.2 Type 1 False Connection’s Strength Is Reduced in Differential Covariance

In this section, we state and prove a general theorem. Consider a network of passive neurons of the following form,

$I_i(t) = C \frac{dV_i(t)}{dt} = \sum_{k \in \{pre_i\}} g_{ki} V_k(t) + g_l V_i(t) + dB_i(t)$, (A.8)

where {prei} is the set of neurons that project to neuron i, gki is the synaptic conductance for the projection from neuron k to neuron i, and Bi(t) is a Brownian motion. Further assume that:

  • All neurons’ leakage conductance gl and membrane capacitance C are constants and the same.

  • There is no loop in the network.

  • $g_{syn} \ll g_l$, where $g_{syn}$ is the maximum of $|g_{ij}|$ over all $i, j$.

Then we prove below that:

  • For two neurons that have a physical connection, their covariance is $O(g_{syn}/g_l^2)$.

  • For two neurons that do not have a physical connection, their covariance is $O(g_{syn}^2/g_l^3)$.

  • For two neurons that have a physical connection, their differential covariance is $O(g_{syn}/g_l)$.

  • For two neurons that do not have a physical connection, their differential covariance is $O(g_{syn}^3/g_l^3)$.

  • The type 1 false connection’s strength is reduced in differential covariance.

Lemma 1

The asymptotic autocovariance of a neuron is

$\mathrm{Cov}[V_i, V_i] = -\left(1 + 2\sum_{k \in \{pre_i\}} g_{ki}\, \mathrm{Cov}[V_k, V_i]\right) / 2g_l$. (A.9)
Proof

From equation 9 of Fan et al. (2011), we have

$d(V_i - E[V_i]) = \left( \sum_{k \in \{pre_i\}} g_{ki}(V_k - E[V_k]) + g_l (V_i - E[V_i]) \right) dt + dB_i(t)$, (A.10)

where E[] is the expectation operation. (t) is dropped from Vi(t) when the meaning is unambiguous.

From theorem 2 of Fan et al. (2011), integrating by parts using Itô calculus gives

$d\big((V_i - E[V_i])(V_i - E[V_i])\big) = d(V_i - E[V_i]) \cdot (V_i - E[V_i]) + (V_i - E[V_i]) \cdot d(V_i - E[V_i]) + d[V_i - E[V_i], V_i - E[V_i]]$. (A.11)

Taking the expectation of both sides with equation A.10 gives

$d\mathrm{Cov}[V_i, V_i] = 2\left( \sum_{k \in \{pre_i\}} g_{ki}\, \mathrm{Cov}[V_k, V_i] + g_l\, \mathrm{Cov}[V_i, V_i] \right) dt + dE[[B_i(t), B_i(t)]]$. (A.12)

When t → +∞, equation A.12 becomes

$0 = 2\left( \sum_{k \in \{pre_i\}} g_{ki}\, \mathrm{Cov}[V_k(+\infty), V_i(+\infty)] + g_l\, \mathrm{Cov}[V_i(+\infty), V_i(+\infty)] \right) + 1$,
$\mathrm{Cov}[V_i(+\infty), V_i(+\infty)] = -\left(1 + 2\sum_{k \in \{pre_i\}} g_{ki}\, \mathrm{Cov}[V_k(+\infty), V_i(+\infty)]\right) / 2g_l$. (A.13)

Lemma 2

The asymptotic covariance between two neurons is

$\mathrm{Cov}[V_i, V_j] = -\left( \sum_{k \in \{pre_i\}} g_{ki}\, \mathrm{Cov}[V_k, V_j] + \sum_{k \in \{pre_j\}} g_{kj}\, \mathrm{Cov}[V_k, V_i] \right) / 2g_l$ (A.14)
Proof

From equation 9 of Fan et al. (2011), we have

$d(V_i - E[V_i]) = \left( \sum_{k \in \{pre_i\}} g_{ki}(V_k - E[V_k]) + g_l (V_i - E[V_i]) \right) dt + dB_i(t)$. (A.15)

From theorem 2 of Fan et al. (2011), integrating by parts using Itô calculus gives

$d\big((V_i - E[V_i])(V_j - E[V_j])\big) = d(V_i - E[V_i]) \cdot (V_j - E[V_j]) + (V_j - E[V_j]) \cdot d(V_i - E[V_i]) + d[V_i - E[V_i], V_j - E[V_j]]$. (A.16)

Taking the expectation of both sides with equation A.15 gives

$d\mathrm{Cov}[V_i, V_j] = \left( \sum_{k \in \{pre_i\}} g_{ki}\, \mathrm{Cov}[V_k, V_j] + \sum_{k \in \{pre_j\}} g_{kj}\, \mathrm{Cov}[V_k, V_i] + 2g_l\, \mathrm{Cov}[V_i, V_j] \right) dt + dE[[B_i(t), B_j(t)]]$. (A.17)

When t → +∞, equation A.17 becomes

$0 = \sum_{k \in \{pre_i\}} g_{ki}\, \mathrm{Cov}[V_k(+\infty), V_j(+\infty)] + \sum_{k \in \{pre_j\}} g_{kj}\, \mathrm{Cov}[V_k(+\infty), V_i(+\infty)] + 2g_l\, \mathrm{Cov}[V_i(+\infty), V_j(+\infty)]$,
$\mathrm{Cov}[V_i(+\infty), V_j(+\infty)] = -\left( \sum_{k \in \{pre_i\}} g_{ki}\, \mathrm{Cov}[V_k(+\infty), V_j(+\infty)] + \sum_{k \in \{pre_j\}} g_{kj}\, \mathrm{Cov}[V_k(+\infty), V_i(+\infty)] \right) / 2g_l$. (A.18)

Theorem 1

The autocovariance of a neuron is $O(1/g_l)$. The covariance of two different neurons, with or without a physical connection, is $O(g_{syn}/g_l^2)$.

Proof

We prove this by induction.

The basis

The base case contains two neurons:

$C\frac{dV_1}{dt} = g_l V_1 + \mathcal{N}_1$, $\quad C\frac{dV_2}{dt} = g_{12} V_1 + g_l V_2 + \mathcal{N}_2$. (A.19)

From lemma 1, we have

$\mathrm{Cov}[V_1, V_1] = -1/2g_l$. (A.20)

Then, from lemma 2, we have

$\mathrm{Cov}[V_1, V_2] = -g_{12}\, \mathrm{Cov}[V_1, V_1]/2g_l = g_{12}/4g_l^2$. (A.21)

From lemma 1, we have

$\mathrm{Cov}[V_2, V_2] = -(1 + 2g_{12}\, \mathrm{Cov}[V_1, V_2])/2g_l = -1/2g_l - g_{12}^2/4g_l^3$. (A.22)

So the statement holds.

The inductive step

If the statement holds for a network of n − 1 neurons, we add one more neuron to it.

Part 1

First, we prove that the covariance of any neuron with neuron n is also $O(g_{syn}/g_l^2)$.

From lemma 2, we have:

$\mathrm{Cov}[V_i, V_n] = -\left( \sum_{k \in \{pre_i\}} g_{ki}\, \mathrm{Cov}[V_k, V_n] + \sum_{k \in \{pre_n\}} g_{kn}\, \mathrm{Cov}[V_k, V_i] \right) / 2g_l$, (A.23)

where {prei} are the neurons projecting to neuron i and {pren} are the neurons projecting to neuron n.

Note that because $\{pre_n\}$ are neurons from the old network, $\mathrm{Cov}[V_k, V_i]$, $k \in \{pre_n\}$, is at most $O(1/g_l)$, and it is $O(1/g_l)$ only when $k = i$.

Now we need to prove that $\mathrm{Cov}[V_k, V_n]$, $k \in \{pre_i\}$, is also $O(1/g_l)$. We prove this by contradiction. Suppose that $\mathrm{Cov}[V_k, V_n]$, $k \in \{pre_i\}$, is larger than $O(1/g_l)$. Then, similar to equation A.23, we have:

For $p \in \{pre_i\}$,

$\mathrm{Cov}[V_p, V_n] = -\left( \sum_{k \in \{pre_p\}} g_{kp}\, \mathrm{Cov}[V_k, V_n] + \sum_{k \in \{pre_n\}} g_{kn}\, \mathrm{Cov}[V_k, V_p] \right) / 2g_l$. (A.24)

Here we separate the problem into two situations: cases 1 and 2.

Case 1: Neuron i projects to neuron n

Since there is no loop in the network, $n \notin \{pre_i\}$. Therefore, $\mathrm{Cov}[V_k, V_p]$, $k \in \{pre_n\}$, $p \in \{pre_i\}$, is the covariance of two neurons from the old network and is $O(1/g_l)$. $\mathrm{Cov}[V_k, V_n]$, $k \in \{pre_p\}$, must be larger than $O(1/g_l)$ so that $\mathrm{Cov}[V_p, V_n]$, $p \in \{pre_i\}$, is larger than $O(1/g_l)$.

Therefore, if a neuron's covariance with neuron n is larger than $O(1/g_l)$, one of its antecedents' covariance with neuron n is also larger than $O(1/g_l)$. Since we assume there is no loop in this network, there must be at least one antecedent (say, neuron m) whose covariance with neuron n is larger than $O(1/g_l)$ and that has no antecedent.

However, from lemma 2:

$\mathrm{Cov}[V_m, V_n] = -\left( \sum_{k \in \{pre_n\}} g_{kn}\, \mathrm{Cov}[V_k, V_m] \right) / 2g_l$. (A.25)

Since $\mathrm{Cov}[V_m, V_k]$, $k \in \{pre_n\}$, is $O(1/g_l)$, $\mathrm{Cov}[V_m, V_n]$ is $O(g_{syn}/g_l^2)$, which is smaller than $O(1/g_l)$. This is a contradiction. So $\mathrm{Cov}[V_k, V_n]$, $k \in \{pre_i\}$, is no larger than $O(1/g_l)$. Therefore, $\mathrm{Cov}[V_i, V_n]$ is $O(g_{syn}/g_l^2)$.

Case 2: Neuron i does not project to neuron n

Now, in equation A.24, it is possible that $n \in \{pre_i\}$. However, in case 1, we just proved that the covariance of any neuron that projects to neuron n is $O(g_{syn}/g_l^2)$. Therefore, $\mathrm{Cov}[V_k, V_p]$, $k \in \{pre_n\}$, $p \in \{pre_i\}$, is $O(g_{syn}/g_l^2)$ regardless of whether $p = n$.

Then, similar to case 1, there must be an antecedent of neuron i (say, neuron m) whose covariance with neuron n is larger than $O(1/g_l)$ and that has no antecedent. From equation A.25, we know this is a contradiction. So $\mathrm{Cov}[V_i, V_n]$ is $O(g_{syn}/g_l^2)$.

Part 2

Then for the autocovariance of neuron n,

$\mathrm{Cov}[V_n, V_n] = -\left(1 + 2\sum_{k \in \{pre_n\}} g_{kn}\, \mathrm{Cov}[V_k, V_n]\right) / 2g_l$. (A.26)

As we already proved that any neuron's covariance with neuron n is $O(g_{syn}/g_l^2)$, the dominant term in equation A.26 is $-1/2g_l$. Therefore, the autocovariance of neuron n is also $O(1/g_l)$.

Theorem 2

The covariance of two neurons that are physically connected is $O(g_{syn}/g_l^2)$. The covariance of two neurons that are not physically connected is $O(g_{syn}^2/g_l^3)$.

Proof

From lemma 2, we have:

$\mathrm{Cov}[V_i, V_j] = -\left( \sum_{k \in \{pre_i\}} g_{ki}\, \mathrm{Cov}[V_k, V_j] + \sum_{k \in \{pre_j\}} g_{kj}\, \mathrm{Cov}[V_k, V_i] \right) / 2g_l$. (A.27)

If neuron i and neuron j are physically connected, say $i \to j$, then $i \in \{pre_j\}$. Thus, one of the $V_k$ for $k \in \{pre_j\}$ is $V_i$, so $\mathrm{Cov}[V_k, V_i]$ is $O(1/g_l)$. Since there is no loop in the network, $j \notin \{pre_i\}$, so $\mathrm{Cov}[V_k, V_j]$ is $O(g_{syn}/g_l^2)$. Therefore, $\mathrm{Cov}[V_i, V_j]$ is $O(g_{syn}/g_l^2)$.

If neuron i and neuron j are not physically connected, we have $i \notin \{pre_j\}$, so $\mathrm{Cov}[V_k, V_i]$ is $O(g_{syn}/g_l^2)$. And $j \notin \{pre_i\}$, so $\mathrm{Cov}[V_k, V_j]$ is $O(g_{syn}/g_l^2)$. Therefore, $\mathrm{Cov}[V_i, V_j]$ is $O(g_{syn}^2/g_l^3)$.

Lemma 3

The differential covariance of two neurons satisfies

$\mathrm{Cov}\left[\frac{dV_i}{dt}, V_j\right] + \mathrm{Cov}\left[V_i, \frac{dV_j}{dt}\right] = 0$. (A.28)
Proof

From lemma 2, we have

$2g_l\, \mathrm{Cov}[V_i, V_j] + \sum_{k \in \{pre_i\}} g_{ki}\, \mathrm{Cov}[V_k, V_j] + \sum_{k \in \{pre_j\}} g_{kj}\, \mathrm{Cov}[V_k, V_i] = 0$,
$\mathrm{Cov}\left[\sum_{k \in \{pre_i\}} g_{ki} V_k + g_l V_i, V_j\right] + \mathrm{Cov}\left[\sum_{k \in \{pre_j\}} g_{kj} V_k + g_l V_j, V_i\right] = 0$,
$\mathrm{Cov}\left[\frac{dV_i}{dt}, V_j\right] + \mathrm{Cov}\left[V_i, \frac{dV_j}{dt}\right] = 0$. (A.29)

Theorem 3

The differential covariance of two neurons that are physically connected is $O(g_{syn}/g_l)$.

Proof

Assume two neurons have a physical connection $i \to j$. Their differential covariance is

$\mathrm{Cov}\left[\frac{dV_i}{dt}, V_j\right] = \mathrm{Cov}\left[\sum_{k \in \{pre_i\}} g_{ki} V_k + g_l V_i, V_j\right] = \sum_{k \in \{pre_i\}} g_{ki}\, \mathrm{Cov}[V_k, V_j] + g_l\, \mathrm{Cov}[V_i, V_j]$. (A.30)

From theorem 2, we know $\mathrm{Cov}[V_i, V_j]$ is $O(g_{syn}/g_l^2)$, and

  • If $V_k$ projects to $V_j$, $\mathrm{Cov}[V_k, V_j]$ is $O(g_{syn}/g_l^2)$.

  • If $V_k$ does not project to $V_j$, $\mathrm{Cov}[V_k, V_j]$ is $O(g_{syn}^2/g_l^3)$.

Therefore, the dominant term is $g_l\, \mathrm{Cov}[V_i, V_j]$, and $\mathrm{Cov}[dV_i/dt, V_j]$ is $O(g_{syn}/g_l)$. From lemma 3,

$\mathrm{Cov}\left[V_i, \frac{dV_j}{dt}\right] = -\mathrm{Cov}\left[\frac{dV_i}{dt}, V_j\right]$. (A.31)

Therefore, $\mathrm{Cov}[V_i, dV_j/dt]$ is $O(g_{syn}/g_l)$.

Theorem 4

The differential covariance of two neurons that are not physically connected is $O(g_{syn}^3/g_l^3)$.

Proof

First, we define the antecedents of neurons i and j. As shown in Figure 16, {pre} is the set of common antecedents of neurons i and j, {prep} is the set of antecedents of p ∈ {pre}, {prei} is the set of exclusive antecedents of neuron i, and {prej} is the set of exclusive antecedents of neuron j.

Figure 16. The network used in the proof of theorem 4.

From lemma 2, we have, for any neuron p ∈ {pre},

$\mathrm{Cov}[V_i, V_p] = -\left( \sum_{k \in \{pre_i\}} g_{ki}\, \mathrm{Cov}[V_k, V_p] + \sum_{k \in \{pre\}} g_{ki}\, \mathrm{Cov}[V_k, V_p] + \sum_{k \in \{pre_p\}} g_{kp}\, \mathrm{Cov}[V_k, V_i] \right) / 2g_l$, (A.32)
$\mathrm{Cov}[V_j, V_p] = -\left( \sum_{k \in \{pre_j\}} g_{kj}\, \mathrm{Cov}[V_k, V_p] + \sum_{k \in \{pre\}} g_{kj}\, \mathrm{Cov}[V_k, V_p] + \sum_{k \in \{pre_p\}} g_{kp}\, \mathrm{Cov}[V_k, V_j] \right) / 2g_l$. (A.33)

For simplicity, we define

$\sum_{k \in \{pre_j\}} g_{kj}\, \mathrm{Cov}[V_k, V_p] = C_p$, $\quad \sum_{k \in \{pre_i\}} g_{ki}\, \mathrm{Cov}[V_k, V_p] = D_p$, $\quad \sum_{k \in \{pre_p\}} g_{kp}\, \mathrm{Cov}[V_k, V_j] = E_p$, $\quad \sum_{k \in \{pre_p\}} g_{kp}\, \mathrm{Cov}[V_k, V_i] = F_p$. (A.34)

From lemma 2, we also have

$\mathrm{Cov}[V_i, V_j] = -\left( \sum_{k \in \{pre_i\}} g_{ki}\, \mathrm{Cov}[V_k, V_j] + \sum_{k \in \{pre_j\}} g_{kj}\, \mathrm{Cov}[V_k, V_i] + \sum_{k \in \{pre\}} g_{ki}\, \mathrm{Cov}[V_k, V_j] + \sum_{k \in \{pre\}} g_{kj}\, \mathrm{Cov}[V_k, V_i] \right) / 2g_l$. (A.35)

For simplicity, we define

$\sum_{k \in \{pre_i\}} g_{ki}\, \mathrm{Cov}[V_k, V_j] = A$, $\quad \sum_{k \in \{pre_j\}} g_{kj}\, \mathrm{Cov}[V_k, V_i] = B$. (A.36)

Plugging equations A.32 and A.33 into A.35, we have

$\mathrm{Cov}[V_i, V_j] = \sum_{p \in \{pre\}} \frac{g_{pi}}{4g_l^2}\left( \sum_{k \in \{pre\}} g_{kj}\, \mathrm{Cov}[V_k, V_p] + C_p + E_p \right) + \sum_{p \in \{pre\}} \frac{g_{pj}}{4g_l^2}\left( \sum_{k \in \{pre\}} g_{ki}\, \mathrm{Cov}[V_k, V_p] + D_p + F_p \right) - \frac{A}{2g_l} - \frac{B}{2g_l}$. (A.37)

Now we look at the differential covariance between neuron i and neuron j:

$\mathrm{Cov}\left[\frac{dV_i}{dt}, V_j\right] = \sum_{k \in \{pre_i\}} g_{ki}\, \mathrm{Cov}[V_k, V_j] + \sum_{k \in \{pre\}} g_{ki}\, \mathrm{Cov}[V_k, V_j] + g_l\, \mathrm{Cov}[V_i, V_j]$. (A.38)

Plug in equations A.33 and A.37 and we have:

$\mathrm{Cov}\left[\frac{dV_i}{dt}, V_j\right] = A + \sum_{p \in \{pre\}} \frac{-g_{pi}}{2g_l}\left( \sum_{k \in \{pre\}} g_{kj}\, \mathrm{Cov}[V_k, V_p] + C_p + E_p \right) + g_l\Bigg[ \sum_{p \in \{pre\}} \frac{g_{pi}}{4g_l^2}\left( \sum_{k \in \{pre\}} g_{kj}\, \mathrm{Cov}[V_k, V_p] + C_p + E_p \right) + \sum_{p \in \{pre\}} \frac{g_{pj}}{4g_l^2}\left( \sum_{k \in \{pre\}} g_{ki}\, \mathrm{Cov}[V_k, V_p] + D_p + F_p \right) - \frac{A}{2g_l} - \frac{B}{2g_l} \Bigg] = \frac{A}{2} - \frac{B}{2} + \sum_{p \in \{pre\}} \frac{g_{pj}}{4g_l}(D_p + F_p) - \sum_{p \in \{pre\}} \frac{g_{pi}}{4g_l}(C_p + E_p)$. (A.39)

Note:

  • There is no physical connection between a neuron in $\{pre_i\}$ and neuron j; otherwise, this neuron would belong to $\{pre\}$. Therefore, from theorem 2, A is $O(g_{syn}^2/g_l^3) \cdot g_{ki} = O(g_{syn}^3/g_l^3)$.

  • There is no physical connection between a neuron in $\{pre_j\}$ and neuron i; otherwise, this neuron would belong to $\{pre\}$. Therefore, from theorem 2, B is $O(g_{syn}^2/g_l^3) \cdot g_{kj} = O(g_{syn}^3/g_l^3)$.

  • There could be physical connections between neurons in $\{pre_p\}$ and $\{pre_j\}$, so $C_p$ is $O(g_{syn}/g_l^2) \cdot g_{kj} = O(g_{syn}^2/g_l^2)$.

  • There could be physical connections between neurons in $\{pre_p\}$ and $\{pre_i\}$, so $D_p$ is $O(g_{syn}/g_l^2) \cdot g_{ki} = O(g_{syn}^2/g_l^2)$.

  • There could be physical connections between neurons in $\{pre_p\}$ and neuron j, so $E_p$ is $O(g_{syn}/g_l^2) \cdot g_{kp} = O(g_{syn}^2/g_l^2)$.

  • There could be physical connections between neurons in $\{pre_p\}$ and neuron i, so $F_p$ is $O(g_{syn}/g_l^2) \cdot g_{kp} = O(g_{syn}^2/g_l^2)$.

Therefore,

$\mathrm{Cov}\left[\frac{dV_i}{dt}, V_j\right] = O(g_{syn}^3/g_l^3) - O(g_{syn}^3/g_l^3) + \sum_{p \in \{pre\}} \frac{g_{pj}}{4g_l}\left( O(g_{syn}^2/g_l^2) + O(g_{syn}^2/g_l^2) \right) - \sum_{p \in \{pre\}} \frac{g_{pi}}{4g_l}\left( O(g_{syn}^2/g_l^2) + O(g_{syn}^2/g_l^2) \right) = O(g_{syn}^3/g_l^3)$. (A.40)

From lemma 3, we know that $\mathrm{Cov}[V_i, dV_j/dt]$ is also $O(g_{syn}^3/g_l^3)$.

Theorem 5

Type 1 false connection’s strength is reduced in differential covariance.

Proof

From theorems 1 and 2, we know that in the correlation method, the strength of a nonphysical connection ($O(g_{syn}^2/g_l^3)$) is $g_{syn}/g_l$ times that of a physical connection ($O(g_{syn}/g_l^2)$).

From theorems 3 and 4, we know that in the differential covariance method, the strength of a nonphysical connection ($O(g_{syn}^3/g_l^3)$) is $g_{syn}^2/g_l^2$ times that of a physical connection ($O(g_{syn}/g_l)$).

Because $g_{syn} \ll g_l$, the relative strength of the nonphysical connections is reduced in the differential covariance method. For example, if $g_{syn}/g_l = 0.1$, a type 1 false connection is about 10% as strong as a true connection in the correlation estimate but only about 1% as strong in the differential covariance estimate.
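To make this concrete, the following minimal Python sketch (not from the article; the network, weights, and simulation parameters are illustrative assumptions) simulates a three-neuron passive linear network in which neuron 0 drives neurons 1 and 2, so any estimated 1-2 connection is a type 1 false connection. In expectation, the covariance estimate shows a spurious 1-2 entry of about $w/g_l$ times a true connection, while in the differential covariance estimate the leading terms of this symmetric motif cancel almost exactly:

    import numpy as np

    rng = np.random.default_rng(0)
    gl, w, dt, T = 1.0, 0.2, 0.01, 200_000   # leak, synaptic weight (w << gl), Euler step, steps

    # Hypothetical common-input motif: neuron 0 drives neurons 1 and 2; 1 and 2 are unconnected.
    W = np.zeros((3, 3))
    W[1, 0] = W[2, 0] = w

    V = np.zeros(3)
    Vs = np.empty((T, 3))
    for t in range(T):
        # Passive linear dynamics driven by white noise, a stand-in for the model of lemma 2
        V = V + dt * (-gl * V + W @ V) + np.sqrt(dt) * rng.normal(size=3)
        Vs[t] = V

    dV = np.diff(Vs, axis=0) / dt                       # finite-difference estimate of dV/dt
    X = Vs[:-1] - Vs[:-1].mean(axis=0)
    dC = (dV - dV.mean(axis=0)).T @ X / len(X)          # differential covariance Cov[dV_i/dt, V_j]
    C = np.cov(Vs.T)                                    # ordinary covariance

    print("covariance:      |false| / |true| =", abs(C[1, 2]) / abs(C[0, 1]))
    print("diff covariance: |false| / |true| =", abs(dC[1, 2]) / abs(dC[1, 0]))

With these assumed parameters, the covariance ratio should come out near $w/g_l = 0.2$, while the differential covariance ratio is close to zero (limited only by estimation noise).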

A.3 Directionality Information in Differential Covariance

Theorem 6

If neuron i projects to neuron j with an excitatory connection, $\mathrm{Cov}[dV_i/dt, V_j] < 0$ and $\mathrm{Cov}[V_i, dV_j/dt] > 0$.

Proof

Given the model above and proceeding as in theorem 4, from lemma 2 we have, for any neuron $p \in \{pre\}$:

$$\mathrm{Cov}[V_i,V_p] = -\Big(\sum_{k\in\{pre_i\}} g_{ki}\,\mathrm{Cov}[V_k,V_p] + \sum_{k\in\{pre\}} g_{ki}\,\mathrm{Cov}[V_k,V_p] + \sum_{k\in\{pre_p\}} g_{kp}\,\mathrm{Cov}[V_k,V_i]\Big)\Big/2g_l,$$ (A.41)
$$\mathrm{Cov}[V_j,V_p] = -\Big(\sum_{k\in\{pre_j\}} g_{kj}\,\mathrm{Cov}[V_k,V_p] + \sum_{k\in\{pre\}} g_{kj}\,\mathrm{Cov}[V_k,V_p] + \sum_{k\in\{pre_p\}} g_{kp}\,\mathrm{Cov}[V_k,V_j] + g_{ij}\,\mathrm{Cov}[V_i,V_p]\Big)\Big/2g_l.$$ (A.42)

For simplicity, we define

$$\sum_{k\in\{pre_j\}} g_{kj}\,\mathrm{Cov}[V_k,V_p] = C_p,\quad
\sum_{k\in\{pre_i\}} g_{ki}\,\mathrm{Cov}[V_k,V_p] = D_p,\quad
\sum_{k\in\{pre_p\}} g_{kp}\,\mathrm{Cov}[V_k,V_j] = E_p,\quad
\sum_{k\in\{pre_p\}} g_{kp}\,\mathrm{Cov}[V_k,V_i] = F_p.$$ (A.43)

From lemma 2, we also have

$$\mathrm{Cov}[V_i,V_j] = -\Big(\sum_{k\in\{pre_i\}} g_{ki}\,\mathrm{Cov}[V_k,V_j] + g_{ij}\,\mathrm{Cov}[V_i,V_i] + \sum_{k\in\{pre_j\}} g_{kj}\,\mathrm{Cov}[V_k,V_i] + \sum_{k\in\{pre\}} g_{ki}\,\mathrm{Cov}[V_k,V_j] + \sum_{k\in\{pre\}} g_{kj}\,\mathrm{Cov}[V_k,V_i]\Big)\Big/2g_l.$$ (A.44)

For simplicity, we define

$$\sum_{k\in\{pre_i\}} g_{ki}\,\mathrm{Cov}[V_k,V_j] = A,\quad \sum_{k\in\{pre_j\}} g_{kj}\,\mathrm{Cov}[V_k,V_i] = B.$$ (A.45)

Plugging in A, B, $C_p$, $D_p$, $E_p$, and $F_p$, we have

$$\mathrm{Cov}[V_i,V_j] = \sum_{p\in\{pre\}} \frac{g_{pi}}{4g_l^2}\Big(\sum_{k\in\{pre\}} g_{kj}\,\mathrm{Cov}[V_k,V_p] + C_p + E_p + g_{ij}\,\mathrm{Cov}[V_i,V_p]\Big) + \sum_{p\in\{pre\}} \frac{g_{pj}}{4g_l^2}\Big(\sum_{k\in\{pre\}} g_{ki}\,\mathrm{Cov}[V_k,V_p] + D_p + F_p\Big) - \frac{A}{2g_l} - \frac{B}{2g_l} - \frac{g_{ij}\,\mathrm{Cov}[V_i,V_i]}{2g_l}.$$ (A.46)

Now we look at the differential covariance between neuron i and neuron j:

$$\mathrm{Cov}\Big[\frac{dV_i}{dt},V_j\Big] = \frac{A}{2} - \frac{B}{2} + \sum_{p\in\{pre\}} \frac{g_{pj}}{4g_l}(D_p + F_p) - \sum_{p\in\{pre\}} \frac{g_{pi}}{4g_l}\big(C_p + E_p + g_{ij}\,\mathrm{Cov}[V_i,V_p]\big) - \frac{g_{ij}\,\mathrm{Cov}[V_i,V_i]}{2}.$$ (A.47)

Note that we already established the scales of A, B, $C_p$, $D_p$, $E_p$, and $F_p$ in theorem 4. In addition:

  • There are physical connections between neurons in $\{pre\}$ and neuron i, so $g_{ij}\,\mathrm{Cov}[V_i, V_p]$ is $O(g_{syn}/g_l^2)\cdot g_{ij} = O(g_{syn}^2/g_l^2)$.

  • From lemma 1, the autocovariance of neuron i is $O(1/g_l)$, so $g_{ij}\,\mathrm{Cov}[V_i, V_i]$ is $O(1/g_l)\cdot g_{ij} = O(g_{syn}/g_l)$.

Therefore, $g_{ij}\,\mathrm{Cov}[V_i, V_i]$ is the dominant term in $\mathrm{Cov}[dV_i/dt, V_j]$. Since $\mathrm{Cov}[V_i, V_i] > 0$ and $g_{ij} > 0$ for an excitatory connection, $\mathrm{Cov}[dV_i/dt, V_j] < 0$.

From lemma 3,

$$\mathrm{Cov}\Big[V_i, \frac{dV_j}{dt}\Big] = -\mathrm{Cov}\Big[\frac{dV_i}{dt}, V_j\Big].$$ (A.48)

Therefore, $\mathrm{Cov}[V_i, dV_j/dt] > 0$.

Corollary 1

If neuron i projects to neuron j with an inhibitory connection, $\mathrm{Cov}[dV_i/dt, V_j] > 0$ and $\mathrm{Cov}[V_i, dV_j/dt] < 0$.

Proof

The proof is similar to that of theorem 6. Again, $g_{ij}\,\mathrm{Cov}[V_i, V_i]$ is the dominant term in $\mathrm{Cov}[dV_i/dt, V_j]$. Since $\mathrm{Cov}[V_i, V_i] > 0$ and $g_{ij} < 0$ for an inhibitory connection, $\mathrm{Cov}[dV_i/dt, V_j] > 0$.

From lemma 3,

$$\mathrm{Cov}\Big[V_i, \frac{dV_j}{dt}\Big] = -\mathrm{Cov}\Big[\frac{dV_i}{dt}, V_j\Big].$$ (A.49)

Therefore, $\mathrm{Cov}[V_i, dV_j/dt] < 0$.
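As a quick numerical check of theorem 6 and corollary 1 (again a sketch, not the article's code; the two-neuron model and all parameters are assumptions), one can simulate a single directed connection and inspect the signs of the two differential covariance terms:

    import numpy as np

    rng = np.random.default_rng(1)
    gl, g12, dt, T = 1.0, 0.2, 0.01, 200_000   # g12 > 0: assumed excitatory 1 -> 2

    V = np.zeros(2)
    Vs = np.empty((T, 2))
    for t in range(T):
        drive = np.array([0.0, g12 * V[0]])    # neuron 1's voltage drives neuron 2
        V = V + dt * (-gl * V + drive) + np.sqrt(dt) * rng.normal(size=2)
        Vs[t] = V

    dV = np.diff(Vs, axis=0) / dt
    v1, v2 = Vs[:-1, 0], Vs[:-1, 1]
    print("Cov[dV1/dt, V2] =", np.mean(dV[:, 0] * v2))   # negative for excitatory 1 -> 2
    print("Cov[V1, dV2/dt] =", np.mean(dV[:, 1] * v1))   # positive for excitatory 1 -> 2

Flipping the sign of g12 (an inhibitory connection) flips both signs, matching corollary 1; in both cases the postsynaptic (sink) neuron is identified by which derivative carries which sign.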

Appendix B: Benchmarked Methods

We compared our methods with several popular alternatives.

B.1 Covariance Method

The covariance matrix is defined as

$$\mathrm{COV}_{x,y} = \frac{1}{N}\sum_{i=1}^{N}(x_i - \mu_x)(y_i - \mu_y),$$ (B.1)

where x and y are two variables and $\mu_x$ and $\mu_y$ are their population means.
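For reference, equation B.1 with the 1/N normalization corresponds to NumPy's biased covariance estimator (NumPy's default divides by N - 1); a minimal sketch with made-up data:

    import numpy as np

    data = np.random.default_rng(0).normal(size=(5, 1000))  # 5 variables, 1000 samples
    C = np.cov(data, bias=True)                             # 1/N normalization, as in eq. B.1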

B.2 Precision Matrix

The precision matrix is the inverse of the covariance matrix:

$$P = \mathrm{COV}^{-1}.$$ (B.2)

It can be considered a form of partial correlation. Here we briefly review this derivation because we use it to develop our new method. The derivation is based on and adapted from Cox and Wermuth (1996).

We begin by considering a pair of variables (x, y) and remove the correlation introduced into them by a control variable z.

First, we define the covariance matrix as

$$\mathrm{COV}_{xyz} = \begin{bmatrix}\sigma_{xx} & \sigma_{xy} & \sigma_{xz}\\ \sigma_{yx} & \sigma_{yy} & \sigma_{yz}\\ \sigma_{zx} & \sigma_{zy} & \sigma_{zz}\end{bmatrix}.$$ (B.3)

By solving the linear regression problems

$$w_x = \underset{w}{\arg\min}\, E(x - wz)^2,\qquad w_y = \underset{w}{\arg\min}\, E(y - wz)^2,$$ (B.4)

we have

$$w_x = \sigma_{xz}\sigma_{zz}^{-1},\qquad w_y = \sigma_{yz}\sigma_{zz}^{-1}.$$ (B.5)

Then we define the residuals of x and y as

$$r_x = x - w_x z,\qquad r_y = y - w_y z.$$ (B.6)

Therefore, the covariance of $r_x$ and $r_y$ is

$$\mathrm{COV}_{r_x, r_y} = \sigma_{xy} - \sigma_{xz}\sigma_{zz}^{-1}\sigma_{yz}.$$ (B.7)

If we define the precision matrix as

$$P_{xyz} = \begin{bmatrix}p_{xx} & p_{xy} & p_{xz}\\ p_{yx} & p_{yy} & p_{yz}\\ p_{zx} & p_{zy} & p_{zz}\end{bmatrix},$$ (B.8)

using Cramer’s rule, we have

$$p_{xy} = -\frac{\begin{vmatrix}\sigma_{xy} & \sigma_{xz}\\ \sigma_{zy} & \sigma_{zz}\end{vmatrix}}{|\mathrm{COV}_{xyz}|}.$$ (B.9)

Therefore,

$$p_{xy} = -\frac{\sigma_{zz}}{|\mathrm{COV}_{xyz}|}\left(\sigma_{xy} - \sigma_{xz}\sigma_{zz}^{-1}\sigma_{yz}\right).$$ (B.10)

So $p_{xy}$ and $\mathrm{COV}_{r_x, r_y}$ differ only by the factor $-\sigma_{zz}/|\mathrm{COV}_{xyz}|$.
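Equations B.7 and B.10 can be checked numerically; a small sketch (the toy variables and coefficients below are assumptions, not from the article):

    import numpy as np

    rng = np.random.default_rng(0)
    z = rng.normal(size=100_000)
    x = 0.5 * z + rng.normal(size=100_000)    # x and y are correlated only through z
    y = -0.3 * z + rng.normal(size=100_000)

    S = np.cov(np.vstack([x, y, z]))          # COV_{xyz}, eq. B.3
    P = np.linalg.inv(S)                      # precision matrix, eq. B.2

    r_cov = S[0, 1] - S[0, 2] * S[1, 2] / S[2, 2]     # residual covariance, eq. B.7
    ratio = -S[2, 2] / np.linalg.det(S)               # factor from eq. B.10
    assert np.isclose(P[0, 1], ratio * r_cov)         # eq. B.10 holds

Because x and y are conditionally independent given z here, the residual covariance (and hence $p_{xy}$) is close to zero even though the raw covariance S[0, 1] is not.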

B.3 Sparse Latent Regularization

Prior studies (Banerjee, Ghaoui, d'Aspremont, & Natsoulis, 2006; Friedman, Hastie, & Tibshirani, 2008) have shown that regularization can provide a better estimate if the ground-truth connection matrix has a known structure (e.g., sparse). For all data tested in this article, the sparse latent regularization (Yatsenko et al., 2015) worked best. For a fair comparison, we applied the sparse latent regularization to both the precision matrix method and our differential covariance method.

In the original sparse latent regularization method, the assumption is that a larger precision matrix S describes the joint distribution of the p observed neurons and the d latent neurons (Yatsenko et al., 2015):

$$S = \begin{pmatrix} S_{11} & S_{12}\\ S_{21} & S_{22} \end{pmatrix},$$

where $S_{11}$ corresponds to the observable neurons. If we can measure only the observable neurons, the partial correlation computed from the observed neural signals is

$$C_{ob}^{-1} = S_{ob} = S_{11} - S_{12}\cdot S_{22}^{-1}\cdot S_{21}$$ (B.11)

because the invisible latent neurons, as in equation 2.2, introduce correlations into the measurable system. We denote this correlation introduced by the latent inputs as

$$L = S_{12}\cdot S_{22}^{-1}\cdot S_{21}.$$ (B.12)
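Equation B.11 is the standard Schur complement identity: inverting the observed block of the full covariance yields $S_{11} - S_{12}S_{22}^{-1}S_{21}$. A quick numerical sketch (random matrices, purely illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    p, d = 6, 2                                  # observed and latent dimensions
    A = rng.normal(size=(p + d, p + d))
    S = A @ A.T + (p + d) * np.eye(p + d)        # a random full precision matrix (SPD)
    C = np.linalg.inv(S)                         # full covariance

    S11, S12 = S[:p, :p], S[:p, p:]
    S21, S22 = S[p:, :p], S[p:, p:]
    S_ob = np.linalg.inv(C[:p, :p])              # precision of the observed block alone

    assert np.allclose(S_ob, S11 - S12 @ np.linalg.inv(S22) @ S21)   # eq. B.11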

If we assume that the connections between the visible neurons are sparse ($S_{11}$ is sparse) and that the number of latent neurons is much smaller than the number of visible neurons ($d \ll p$), then prior work (Chandrasekaran et al., 2011) has shown that if $S_{ob}$ is known, $S_{11}$ is sparse enough, and the rank of L is low enough (within the bound defined in Chandrasekaran et al., 2011), the solution of

$$S_{11} - L = S_{ob}$$ (B.13)

is uniquely defined and can be found by solving the convex optimization problem

$$\underset{S_{11},\,L}{\arg\min}\; \|S_{11}\|_1 + \alpha\,\mathrm{tr}(L),$$ (B.14)

under the constraint that

$$S_{ob} = S_{11} - L.$$ (B.15)

Here, $\|\cdot\|_1$ is the L1-norm of a matrix, $\mathrm{tr}(\cdot)$ is the trace of a matrix, and $\alpha$ is the penalty ratio between the L1-norm of $S_{11}$ and the trace of L; it is set to 1/N for all our estimations.
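For illustration, equations B.14 and B.15 can be transcribed directly into CVXPY (a sketch under the assumption that L is positive semidefinite, so that its trace acts as a rank surrogate; the article itself uses the inexact robust PCA solver cited below, not this generic formulation):

    import cvxpy as cp

    def sparse_plus_latent(S_ob, alpha):
        """Decompose S_ob into a sparse S11 minus a low-rank L (eqs. B.14-B.15)."""
        n = S_ob.shape[0]
        S11 = cp.Variable((n, n), symmetric=True)
        L = cp.Variable((n, n), PSD=True)          # PSD, so tr(L) penalizes its rank
        problem = cp.Problem(
            cp.Minimize(cp.sum(cp.abs(S11)) + alpha * cp.trace(L)),
            [S_ob == S11 - L],                     # eq. B.15
        )
        problem.solve()
        return S11.value, L.value

In our setting, $S_{ob}$ would be the regularized target ($\Delta P$ in equation B.17 below), and alpha plays the role of the 1/N penalty above.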

However, this method was designed to regularize the precision matrix. For our differential covariance estimation, we need to make small changes to the derivation. Note that if we assume the neural signals of the latent neurons are known and let l be the indexes of these latent neurons, then, from section 2.2.2,

$$\Delta S_{i,j} = \Delta P_{i,j} - \mathrm{COV}_{j,l}\cdot \mathrm{COV}_{l,l}^{-1}\cdot \Delta C_{i,l}^{T}$$ (B.16)

removes the $V_{latent}$ terms in equation 2.2.

Even if l is unknown,

$$\mathrm{COV}_{j,l}\cdot \mathrm{COV}_{l,l}^{-1}\cdot \Delta C_{i,l}^{T}$$

is low rank, because its rank is bounded by the dimensionality of $\mathrm{COV}_{l,l}$, which is d. And $\Delta S$ contains the internal connections between the visible neurons, which should form a sparse matrix. Therefore, letting

$$S_{ob} = \Delta P,\qquad S_{11} = \Delta S,\qquad L = -\mathrm{COV}_{j,l}\cdot \mathrm{COV}_{l,l}^{-1}\cdot \Delta C_{i,l}^{T},$$ (B.17)

we can use the original sparse+latent method to solve for ΔS. In this article, we used the inexact robust PCA algorithm (http://perception.csl.illinois.edu/matrix-rank/sample_code.html) to solve this problem (Lin, Liu, & Su, 2011).

B.4 The Generalized Linear Model Method

As summarized by Roudi et al. (2015), GLMs assume that every neuron spikes at a time-varying rate that depends on earlier spikes (both those of other neurons and its own) and on external covariates (such as a stimulus or other quantities measured in the experiment). As they explained, the influence of earlier spikes on the firing probability at a given time is assumed to depend on the time elapsed since they occurred. For each pre- and postsynaptic pair (i, j), this influence is described by a function $J_{ij}(\tau)$ of the time lag (Roudi et al., 2015). In this article, we average this lag-dependent function $J_{ij}(\tau)$ over time to obtain this method's functional connectivity estimate.

The spike trains used for the GLM method were computed using the spike detection algorithm from Quiroga, Nadasdy, and Ben-Shaul (2004) (https://github.com/csn-le/wave_clus). That article's default parameter set was used, except that the maximum detection threshold was increased. The code is provided in our github repository (https://github.com/tigerwlin/diffCov/blob/master/spDetect.m) for reference. The GLM code was obtained from Pillow et al. (2008) (http://pillowlab.princeton.edu/code_GLM.html) and applied to the spike trains.
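The averaging step is simple but worth making explicit; a sketch (the array name, shape, and lag axis below are assumptions about how the fitted filters might be stored, not the interface of the Pillow et al. code):

    import numpy as np

    def glm_connectivity(J):
        """Average the fitted coupling filters J[i, j, :] (J_ij over lags tau)
        into one functional-connectivity weight per pair, as described above."""
        return J.mean(axis=2)

    # Example with random stand-in filters for 5 neurons and 20 time lags:
    W = glm_connectivity(np.random.default_rng(0).normal(size=(5, 5, 20)))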

Appendix C: Details about the Thalamocortical Model

C.1 Intrinsic Currents

For the thalamocortical model, a conductance-based formulation was used for all neurons. The cortical neurons consisted of two compartments, dendritic and axo-somatic, similar to previous studies (Bazhenov et al., 2002; Chen et al., 2012; Bonjean et al., 2011), and are described by the following equations,

$$C_m \frac{dV_D}{dt} = -I^{d}_{K\text{-}leak} - I^{d}_{leak} - I^{d}_{Na} - I^{d}_{Nap} - I^{d}_{Ca} - I^{d}_{KCa} - I^{d}_{Km} - I_{syn},\qquad
g_{cs}(V_S - V_D) = -I^{s}_{Na} - I^{s}_{K} - I^{s}_{Nap},$$ (C.1)

where the subscripts s and d correspond to the axo-somatic and dendritic compartments, $I_{leak}$ is the Cl⁻ leak current, $I_{Na}$ is the fast Na⁺ current, $I_{Nap}$ is the persistent sodium current, $I_K$ is the fast delayed-rectifier K⁺ current, $I_{Km}$ is the slow voltage-dependent noninactivating K⁺ current, $I_{KCa}$ is the slow Ca²⁺-dependent K⁺ current, $I_{Ca}$ is the high-threshold Ca²⁺ current, $I_h$ is the hyperpolarization-activated depolarizing current, and $I_{syn}$ is the sum of synaptic currents to the neuron. All intrinsic currents were of the form $g(V - E)$, where g is the conductance, V is the voltage of the corresponding compartment, and E is the reversal potential. Detailed descriptions of the individual currents are provided in previous publications (Bazhenov et al., 2002; Chen et al., 2012). The conductances of the leak currents were 0.007 mS/cm² for $I^{d}_{K\text{-}leak}$ and 0.023 mS/cm² for $I^{d}_{leak}$. The maximal conductances for the other currents were $I^{d}_{Nap}$: 2.0 mS/cm²; $I^{d}_{Na}$: 0.8 mS/cm²; $I^{d}_{Ca}$: 0.012 mS/cm²; $I^{d}_{KCa}$: 0.015 mS/cm²; $I^{d}_{Km}$: 0.012 mS/cm²; $I^{s}_{Na}$: 3000 mS/cm²; $I^{s}_{K}$: 200 mS/cm²; and $I^{s}_{Nap}$: 15 mS/cm². $C_m$ was 0.075 μF/cm².
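Because every intrinsic current takes the $g(V - E)$ form, integrating a compartment amounts to summing such terms; a minimal passive sketch (the reversal potentials and the restriction to the two leak currents are assumptions for illustration; the active currents' gating kinetics are given in Bazhenov et al., 2002):

    # Euler integration of a compartment carrying only the two leak currents
    # quoted above, each of the form g * (V - E). E_leak and E_K are assumed
    # reversal potentials; all active currents are omitted.
    g_leak, E_leak = 0.023, -70.0     # Cl- leak (mS/cm^2), assumed reversal (mV)
    g_kleak, E_k = 0.007, -95.0       # K+ leak (mS/cm^2), assumed reversal (mV)
    Cm = 0.075                        # membrane capacitance (uF/cm^2)
    dt, V = 0.02, -60.0               # time step (ms), initial voltage (mV)

    for _ in range(50_000):
        I_total = g_leak * (V - E_leak) + g_kleak * (V - E_k)
        V += dt * (-I_total / Cm)     # C_m dV/dt = -(sum of currents)

    # V settles at the conductance-weighted reversal potential:
    # (g_leak * E_leak + g_kleak * E_k) / (g_leak + g_kleak)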

The following describes the IN neurons:

$$C_m \frac{dV_D}{dt} = -I^{d}_{K\text{-}leak} - I^{d}_{leak} - I^{d}_{Na} - I^{d}_{Ca} - I^{d}_{KCa} - I^{d}_{Km} - I_{syn},\qquad
g_{cs}(V_S - V_D) = -I^{s}_{Na} - I^{s}_{K}.$$ (C.2)

The conductances of the leak currents for the IN neurons were 0.034 mS/cm² for $I^{d}_{K\text{-}leak}$ and 0.006 mS/cm² for $I^{d}_{leak}$. The maximal conductances for the other currents were $I^{d}_{Na}$: 0.8 mS/cm²; $I^{d}_{Ca}$: 0.012 mS/cm²; $I^{d}_{KCa}$: 0.015 mS/cm²; $I^{d}_{Km}$: 0.012 mS/cm²; $I^{s}_{Na}$: 2500 mS/cm²; and $I^{s}_{K}$: 200 mS/cm².

The TC neurons consisted of only a single compartment and are described as follows:

$$\frac{dV}{dt} = -I_{K\text{-}leak} - I_{leak} - I_{Na} - I_{K} - I_{LCa} - I_{h} - I_{syn}.$$ (C.3)

The conductances of the leak currents were $I_{leak}$: 0.01 mS/cm² and $I_{K\text{-}leak}$: 0.007 mS/cm². The maximal conductances for the other currents were fast Na⁺ ($I_{Na}$) current: 90 mS/cm²; fast K⁺ ($I_K$) current: 10 mS/cm²; low-threshold Ca²⁺ ($I_{LCa}$) current: 2.5 mS/cm²; and hyperpolarization-activated depolarizing current ($I_h$): 0.015 mS/cm².

The RE cells were also modeled as single-compartment neurons:

$$\frac{dV}{dt} = -I_{K\text{-}leak} - I_{leak} - I_{Na} - I_{K} - I_{LCa} - I_{h} - I_{syn}.$$ (C.4)

The conductances of the leak currents were $I_{leak}$: 0.05 mS/cm² and $I_{K\text{-}leak}$: 0.016 mS/cm². The maximal conductances for the other currents were fast Na⁺ ($I_{Na}$) current: 100 mS/cm²; fast K⁺ ($I_K$) current: 10 mS/cm²; and low-threshold Ca²⁺ ($I_{LCa}$) current: 2.2 mS/cm².

C.2 Synaptic Currents

GABA-A, NMDA, and AMPA synaptic currents were described by first-order activation schemes (Timofeev, Grenier, Bazhenov, Sejnowski, & Steriade, 2000). The equations for all synaptic currents used in this model are given in our previous publications (Bazhenov et al., 2002; Chen et al., 2012). We mention only the relevant equations:

$$I^{AMPA}_{syn} = g_{syn}[O](V - E_{AMPA}),\qquad I^{NMDA}_{syn} = g_{syn}[O](V - E_{NMDA}),\qquad I^{GABA}_{syn} = g_{syn}[O](V - E_{GABA}).$$ (C.5)
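A sketch of how such a first-order scheme evolves (the rate constants, transmitter pulse, and conductance below are placeholder assumptions; the actual kinetics are specified in Timofeev et al., 2000, and Bazhenov et al., 2002):

    import numpy as np

    alpha, beta = 1.1, 0.19          # assumed opening/closing rates (1/ms)
    g_syn, E_ampa = 0.1, 0.0         # assumed maximal conductance and AMPA reversal (mV)
    dt, O, V = 0.02, 0.0, -65.0      # time step (ms), open fraction [O], postsynaptic voltage

    for t in np.arange(0.0, 10.0, dt):
        T = 1.0 if t < 0.5 else 0.0                   # brief transmitter pulse [T]
        O += dt * (alpha * T * (1.0 - O) - beta * O)  # first-order activation scheme
        I_ampa = g_syn * O * (V - E_ampa)             # eq. C.5, AMPA term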

Contributor Information

Tiger W. Lin, Howard Hughes Medical Institute, Computational Neurobiology Laboratory, Salk Institute for Biological Studies, La Jolla, CA 92037, U.S.A., and Neurosciences Graduate Program, University of California San Diego, La Jolla, CA 92092, U.S.A.

Anup Das, Howard Hughes Medical Institute, Computational Neurobiology Laboratory, Salk Institute for Biological Studies, La Jolla, CA 92037, U.S.A., and Jacobs School of Engineering, University of California San Diego, La Jolla, CA 92092, U.S.A.

Giri P. Krishnan, Department of Medicine, University of California San Diego, La Jolla, CA 92092, U.S.A.

Maxim Bazhenov, Department of Medicine, University of California San Diego, La Jolla, CA 92092, U.S.A.

Terrence J. Sejnowski, Howard Hughes Medical Institute, Computational Neurobiology Laboratory, Salk Institute for Biological Studies, La Jolla, CA 92037, U.S.A., and Institute for Neural Computation, University of California San Diego, La Jolla, CA 92092, U.S.A.

References

  1. Banerjee O, Ghaoui LE, d'Aspremont A, Natsoulis G. Convex optimization techniques for fitting sparse Gaussian graphical models. In: Proceedings of the 23rd International Conference on Machine Learning. New York: ACM; 2006. pp. 89–96.
  2. Battistin C, Hertz J, Tyrcha J, Roudi Y. Belief propagation and replicas for inference and learning in a kinetic Ising model with hidden spins. Journal of Statistical Mechanics: Theory and Experiment. 2015;2015(5):P05021.
  3. Bazhenov M, Timofeev I, Steriade M, Sejnowski TJ. Model of thalamocortical slow-wave sleep oscillations and transitions to activated states. Journal of Neuroscience. 2002;22(19):8691–8704. doi: 10.1523/JNEUROSCI.22-19-08691.2002.
  4. Bonjean M, Baker T, Lemieux M, Timofeev I, Sejnowski T, Bazhenov M. Corticothalamic feedback controls sleep spindle duration in vivo. Journal of Neuroscience. 2011;31(25):9124–9134. doi: 10.1523/JNEUROSCI.0077-11.2011.
  5. Burkitt AN. A review of the integrate-and-fire neuron model: I. Homogeneous synaptic input. Biological Cybernetics. 2006;95(1):1–19. doi: 10.1007/s00422-006-0068-6.
  6. Capone C, Filosa C, Gigante G, Ricci-Tersenghi F, Del Giudice P. Inferring synaptic structure in presence of neural interaction time scales. PLoS One. 2015;10(3):e0118412. doi: 10.1371/journal.pone.0118412.
  7. Chandrasekaran V, Sanghavi S, Parrilo PA, Willsky AS. Rank-sparsity incoherence for matrix decomposition. SIAM Journal on Optimization. 2011;21(2):572–596.
  8. Chen JY, Chauvette S, Skorheim S, Timofeev I, Bazhenov M. Interneuron-mediated inhibition synchronizes neuronal activity during slow oscillation. Journal of Physiology. 2012;590(16):3987–4010. doi: 10.1113/jphysiol.2012.227462.
  9. Cox DR, Wermuth N. Multivariate dependencies: Models, analysis and interpretation. Boca Raton, FL: CRC Press; 1996.
  10. Destexhe A. Spike-and-wave oscillations based on the properties of GABAB receptors. Journal of Neuroscience. 1998;18(21):9099–9111. doi: 10.1523/JNEUROSCI.18-21-09099.1998.
  11. Dunn B, Roudi Y. Learning and inference in a nonequilibrium Ising model with hidden nodes. Physical Review E. 2013;87(2):022127. doi: 10.1103/PhysRevE.87.022127.
  12. Fan H, Shan X, Yuan J, Ren Y. Covariances of linear stochastic differential equations for analyzing computer networks. Tsinghua Science and Technology. 2011;16(3):264–271.
  13. Friedman J, Hastie T, Tibshirani R. Sparse inverse covariance estimation with the graphical lasso. Biostatistics. 2008;9(3):432–441. doi: 10.1093/biostatistics/kxm045.
  14. Friston KJ. Functional and effective connectivity: A review. Brain Connectivity. 2011;1(1):13–36. doi: 10.1089/brain.2011.0008.
  15. Harrison L, Penny WD, Friston K. Multivariate autoregressive modeling of fMRI time series. NeuroImage. 2003;19(4):1477–1491. doi: 10.1016/s1053-8119(03)00160-5.
  16. Huang H. Sparse Hopfield network reconstruction with ℓ1 regularization. European Physical Journal B. 2013;86(11):1–7.
  17. Kandel ER, Markram H, Matthews PM, Yuste R, Koch C. Neuroscience thinks big (and collaboratively). Nature Reviews Neuroscience. 2013;14(9):659–664. doi: 10.1038/nrn3578.
  18. Lánský P, Rospars JP. Ornstein-Uhlenbeck model neuron revisited. Biological Cybernetics. 1995;72(5):397–406.
  19. Lin Z, Liu R, Su Z. Linearized alternating direction method with adaptive penalty for low-rank representation. In: Shawe-Taylor J, Zemel RS, Bartlett PL, Pereira F, Weinberger KQ, editors. Advances in Neural Information Processing Systems. Vol. 24. Red Hook, NY: Curran; 2011. pp. 612–620.
  20. McIntosh A, Gonzalez-Lima F. Structural modeling of functional neural pathways mapped with 2-deoxyglucose: Effects of acoustic startle habituation on the auditory system. Brain Research. 1991;547(2):295–302. doi: 10.1016/0006-8993(91)90974-z.
  21. Nunez P, Srinivasan R. Electric fields of the brain. New York: Oxford University Press; 2005.
  22. Okatan M, Wilson MA, Brown EN. Analyzing functional connectivity using a network likelihood model of ensemble neural spiking activity. Neural Computation. 2005;17(9):1927–1961. doi: 10.1162/0899766054322973.
  23. Pernice V, Rotter S. Reconstruction of sparse connectivity in neural networks from spike train covariances. Journal of Statistical Mechanics: Theory and Experiment. 2013;2013(3):P03008.
  24. Pillow JW, Shlens J, Paninski L, Sher A, Litke AM, Chichilnisky E, Simoncelli EP. Spatio-temporal correlations and visual signalling in a complete neuronal population. Nature. 2008;454(7207):995–999. doi: 10.1038/nature07140.
  25. Quan T, Liu X, Lv X, Chen WR, Zeng S. Method to reconstruct neuronal action potential train from two-photon calcium imaging. Journal of Biomedical Optics. 2010;15(6):066002. doi: 10.1117/1.3505021.
  26. Quiroga RQ, Nadasdy Z, Ben-Shaul Y. Unsupervised spike detection and sorting with wavelets and superparamagnetic clustering. Neural Computation. 2004;16(8):1661–1687. doi: 10.1162/089976604774201631.
  27. Rahmati V, Kirmse K, Marković D, Holthoff K, Kiebel SJ. Inferring neuronal dynamics from calcium imaging data using biophysical models and Bayesian inference. PLoS Computational Biology. 2016;12(2):e1004736. doi: 10.1371/journal.pcbi.1004736.
  28. Ricciardi LM, Sacerdote L. The Ornstein-Uhlenbeck process as a model for neuronal activity. Biological Cybernetics. 1979;35(1):1–9. doi: 10.1007/BF01845839.
  29. Roudi Y, Dunn B, Hertz J. Multi-neuronal activity and functional connectivity in cell assemblies. Current Opinion in Neurobiology. 2015;32:38–44. doi: 10.1016/j.conb.2014.10.011.
  30. Roudi Y, Hertz J. Mean field theory for nonequilibrium network reconstruction. Physical Review Letters. 2011;106(4):048702. doi: 10.1103/PhysRevLett.106.048702.
  31. Schneidman E, Berry MJ, Segev R, Bialek W. Weak pairwise correlations imply strongly correlated network states in a neural population. Nature. 2006;440(7087):1007–1012. doi: 10.1038/nature04701.
  32. Stetter O, Battaglia D, Soriano J, Geisel T. Model-free reconstruction of excitatory neuronal connectivity from calcium imaging signals. PLoS Computational Biology. 2012;8(8):e1002653. doi: 10.1371/journal.pcbi.1002653.
  33. Stevenson IH, Kording KP. How advances in neural recording affect data analysis. Nature Neuroscience. 2011;14(2):139–142. doi: 10.1038/nn.2731.
  34. Stevenson IH, Rebesco JM, Miller LE, Körding KP. Inferring functional connections between neurons. Current Opinion in Neurobiology. 2008;18(6):582–588. doi: 10.1016/j.conb.2008.11.005.
  35. Timofeev I, Grenier F, Bazhenov M, Sejnowski T, Steriade M. Origin of slow cortical oscillations in deafferented cortical slabs. Cerebral Cortex. 2000;10(12):1185–1199. doi: 10.1093/cercor/10.12.1185.
  36. Truccolo W, Eden UT, Fellows MR, Donoghue JP, Brown EN. A point process framework for relating neural spiking activity to spiking history, neural ensemble, and extrinsic covariate effects. Journal of Neurophysiology. 2005;93(2):1074–1089. doi: 10.1152/jn.00697.2004.
  37. Uhlenbeck GE, Ornstein LS. On the theory of the Brownian motion. Physical Review. 1930;36(5):823.
  38. Vogelstein JT, Watson BO, Packer AM, Yuste R, Jedynak B, Paninski L. Spike inference from calcium imaging using sequential Monte Carlo methods. Biophysical Journal. 2009;97(2):636–655. doi: 10.1016/j.bpj.2008.08.005.
  39. Winterhalder M, Schelter B, Hesse W, Schwab K, Leistritz L, Klan D, … Witte H. Comparison of linear signal processing techniques to infer directed interactions in multivariate neural systems. Signal Processing. 2005;85(11):2137–2160.
  40. Yatsenko D, Josić K, Ecker AS, Froudarakis E, Cotton RJ, Tolias AS. Improved estimation and interpretation of correlations in neural circuits. PLoS Computational Biology. 2015;11(3):e1004083. doi: 10.1371/journal.pcbi.1004083.
