Abstract
Estimating dynamic functional network connectivity (dFNC) of the brain from functional magnetic resonance imaging (fMRI) data can reveal both spatial and temporal organization and can be applied to track the developmental trajectory of brain maturity as well as to study mental illness. Resting state fMRI (rs-fMRI) is regarded as a promising paradigm since it reflects spontaneous brain activity without an external stimulus. The sliding window method has been successfully used to extract dFNC but typically assumes a fixed window size. The hidden Markov model (HMM) based method is an alternative approach for estimating time-varying connectivity. In this paper, we propose a sparse HMM based on the Gaussian HMM and the Gaussian graphical model (GGM). In this model, the time-varying neural processes are represented as discrete brain states, each described by a functional connectivity network. By enforcing sparsity on the precision matrix, we obtain interpretable connectivity between different functional regions. The optimization of our model can be realized with the expectation maximization (EM) and graphical least absolute shrinkage and selection operator (glasso) algorithms. The proposed model is validated on both simulated blood oxygenation-level dependent (BOLD) time series and rs-fMRI data. Results indicate that the proposed model can capture both stationary and abrupt brain activity fluctuations. We also compare dFNC patterns between children and young adults from the Philadelphia Neurodevelopmental Cohort (PNC) study, analyzing both the spatial and the temporal behavior of the dFNC. The results provide insight into the developmental trajectory across childhood and motivate further research on brain connectivity.
Index Terms—resting state fMRI, dynamic functional connectivity, hidden Markov model, sparsity, brain development
I. INTRODUCTION
Recent progress in functional network connectivity (FNC) analysis from fMRI time series (i.e., functional connectivity between coherent brain networks) has enabled a better understanding of the human brain. The combined power of brain imaging data and statistical models has provided a window into the spatiotemporal organization of both healthy and diseased brains [1], [2]. For example, Zhi et al. [3] explored the FNCs of individuals with major depressive disorder (MDD) and found abnormal time-varying brain activity and the corresponding network disruptions. Lacy et al. [4] showed that FNCs could capture the motifs associated with group effects from the diagnosis of attention-deficit/hyperactivity disorder (ADHD), involving key differences in the default mode network (DMN). Various FNC-related differences have also been revealed in other clinical studies [5], [6], [7], [8], [9] (e.g., schizophrenia and bipolar disorder as well as drug and alcohol abuse). In addition, research has shown changes in FNCs as the brain matures in both task and resting scenarios, such as different inter-regional connectivity of occipital and temporal regions during reading [10] and distinct DMN and cognitive control system connectivity in the resting state [11]. These networks have also been shown to be quite reliable across studies [12].
As research showed that the brain is intrinsically organized into dynamic functional networks [13], [14], [15], the study of dynamic functional network connectivity (dFNC) began to attract increasing interest. For example, some temporal characteristics, like the recurring patterns of brain FNCs [16], [17], [18], [19] and hierarchically organized brain states, can only be identified via a dynamic analysis. More importantly, in several clinical and developmental studies, brain images disclose more details on mental illness and brain development through a dynamic approach [6], [20], [21].
The sliding window approach is the most popular method in dFNC analysis [17], [21]. With a preset window width, one can estimate brain connectivity along the time axis by sliding the window along the time series under a local stationarity assumption. The main limitation of this method is the choice of the window size: if the window is too small, the limited number of observations cannot guarantee a reliable connectivity estimate; if it is too large, abrupt brain activity changes will be overlooked. The hidden Markov model (HMM) is an appealing alternative for modeling dynamic connectivity. It assumes that brain activity visits several discrete states characterized by different connectivity patterns, and that the time courses are drawn from distributions parameterized by those patterns. Although the HMM assumes the Markov property of the state sequence, it avoids the choice of a window size and hence offers a flexible way to track either stationary or abrupt brain connectivity changes. For example, Vidaurre et al. [22] revealed fast task-related brain dynamics that the sliding window method fails to capture. Baker et al. [23] applied the HMM to group analysis to discover recurring brain connectivity at the population level. However, the direct application of HMMs to brain fMRI time series is challenging due to the high dimensionality of brain images. To address this issue, researchers have applied different strategies: in [24] and [25], the authors imposed a strong structural prior on the connectivity matrix; in [22] and [26], they concatenated subjects' data along the time dimension before estimation. Motivated by the work in [27] and [28], in this paper we model the brain dynamics using an HMM with a sparse regularization. The regularized HMM enables us to estimate brain dynamics under a weaker assumption on the connectivity matrix.
Moreover, the model can be used at either the individual or the population level. Based on the EM algorithm [29], we show that the sparse regularization on the precision matrices can be solved with the well-known graphical least absolute shrinkage and selection operator (glasso) approach [30]. To validate the sparse HMM, we applied the model to both simulated data and rs-fMRI data from the PNC study and compared its performance with the sliding window approach. Based on the model, we also conducted a group comparison between children and young adults.
The rest of the paper is organized as follows. In the next section, we briefly discuss the generative model based on the sparse HMM, its optimization, and the model selection. In Section 3, the performance of the proposed model is validated on both simulated data and real rs-fMRI time series from the PNC study; the comparison study of the two age groups is also included in this section. In Section 4, we give a brief description of our model and other related work. The results and potential problems are discussed in Section 5.
II. METHODS
In this section we describe how to model brain image time series within an HMM framework. Assume we have $R$ subjects in a brain image data set, each with an $N$-dimensional time series of $T$ samples (the number of samples may vary in real data, but we can assume a common $T$ without loss of generality). In fMRI data, $N$ can be the number of voxels, regions of interest (ROIs), or components. Denote the time series of the $r$th subject as $X^r = \{x^r_1, \ldots, x^r_T\}$ and the corresponding set of hidden state variables (or latent variables) as $Z^r = \{z^r_1, \ldots, z^r_T\}$. Here the latent variables come from a discrete state set of size $K$. At every time point, the time course is generated according to a distribution determined by the time-varying hidden state; that is, the hidden states correspond to specific brain activity patterns which reappear in the time series. Each $z^r_t$ follows a 1-of-$K$ coding scheme, with the position of the 1 in the vector indexing the hidden state. Each state $k$ has an initial probability $\pi_k$. Along the time series, the hidden states are linked by a transition matrix $A$, in which each entry $A_{jk}$ is the probability of moving from state $j$ to state $k$. Since the hidden variables satisfy the Markov property, this transition matrix is independent of time. For the $r$th subject, given hidden state $k$ at time $t$, the time course $x^r_t$ follows a Gaussian distribution with mean $\mu_k$ and covariance matrix $\Sigma_k$:
$$p(x^r_t \mid z^r_{t,k} = 1) = \mathcal{N}(x^r_t \mid \mu_k, \Sigma_k) = (2\pi)^{-N/2}\,|\Sigma_k|^{-1/2} \exp\!\Big(-\tfrac{1}{2}(x^r_t - \mu_k)^\top \Sigma_k^{-1} (x^r_t - \mu_k)\Big) \tag{1}$$
where $|\Sigma_k|$ is the determinant of $\Sigma_k$. In Figure 1, we show a schematic illustration of the Gaussian HMM; on the right is a sample transition matrix between 3 states. All the $A_{jk}$ are time-independent parameters in $[0, 1]$. Combining the emission distribution with the transition probabilities, the likelihood of the $r$th subject is written as
$$p(X^r) = \sum_{Z^r} p(z^r_1) \prod_{t=2}^{T} p(z^r_t \mid z^r_{t-1}) \prod_{t=1}^{T} p(x^r_t \mid z^r_t) \tag{2}$$
Fig. 1.

Schematic illustration of the multivariate Gaussian HMM. The left shows the states and observations of the $r$th subject. The right shows the transitions between states when $K = 3$.
Maximizing this likelihood gives the maximum likelihood estimate (MLE) of the model parameters $\{\pi_k, A, \mu_k, \Sigma_k\}$. The implementation is generally performed via the EM algorithm [31], [32], which is described in Section 1 of the supplementary material.
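The E-step of that EM procedure computes the marginal posteriors of the hidden states via the forward-backward recursions. The following is a minimal numpy sketch of this step for a single subject, using the standard log-space recursions; the function and variable names are our own illustration, not the paper's implementation:

```python
import numpy as np

def logsumexp(a, axis):
    # numerically stable log-sum-exp along an axis
    m = np.max(a, axis=axis, keepdims=True)
    return (m + np.log(np.sum(np.exp(a - m), axis=axis, keepdims=True))).squeeze(axis)

def gaussian_logpdf(X, mu, Sigma):
    # Eq. (1): log N(x | mu, Sigma) evaluated at each row of X
    N = mu.size
    _, logdet = np.linalg.slogdet(Sigma)
    diff = X - mu
    quad = np.einsum('ti,ti->t', diff, np.linalg.solve(Sigma, diff.T).T)
    return -0.5 * (N * np.log(2 * np.pi) + logdet + quad)

def e_step(X, pi, A, mus, Sigmas):
    # Forward-backward pass in log space -> marginal posteriors gamma(z_t = k)
    T, K = X.shape[0], len(pi)
    logB = np.stack([gaussian_logpdf(X, mus[k], Sigmas[k]) for k in range(K)], axis=1)
    logA = np.log(A)
    la = np.zeros((T, K))                      # forward messages
    la[0] = np.log(pi) + logB[0]
    for t in range(1, T):
        la[t] = logB[t] + logsumexp(la[t - 1][:, None] + logA, axis=0)
    lb = np.zeros((T, K))                      # backward messages
    for t in range(T - 2, -1, -1):
        lb[t] = logsumexp(logA + logB[t + 1] + lb[t + 1], axis=1)
    lg = la + lb
    return np.exp(lg - logsumexp(lg, axis=1)[:, None])  # (T, K), rows sum to 1
```

The responsibilities returned here are exactly the quantities the M-step reweights when re-estimating the state means and covariances.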
Although $\Sigma_k$ is defined as the covariance matrix of a Gaussian distribution, its estimate often turns out to be a singular matrix due to insufficient observations. For example, the fMRI data we used have been preprocessed to reduce the dimensionality from millions to hundreds; however, the number of samples is still only around 100. Since the EM algorithm is iterative, the iteration fails at the very beginning with a singular $\Sigma_k$. Another problem is the complexity of a dense covariance matrix. In previous research, investigators have used the covariance matrix or its inverse (known as the precision matrix) interchangeably to represent functional network connectivity. But in either case, we want to reveal the most significant connectivity instead of a "whole connectivity" between every pair of functional regions. Sparsity has proven to be an efficient way to overcome these issues. In previous work [17], [21], brain connectivity was constrained with sparsity; the authors of [25] also assumed independence between different ROIs in their spatial-temporal modeling work. We borrow a similar idea to [27] and [28] and introduce an $\ell_1$ regularization on the precision matrix $\Theta_k = \Sigma_k^{-1}$. Denoting the marginal posterior distribution of the latent variables by $\gamma(z^r_{t,k})$, with trade-off parameters $\lambda_k$ the objective log-sum maximization function in the M step becomes
$$\max_{\Theta_k \succ 0} \; \sum_{k=1}^{K} \Big[ n_k \big( \log\det\Theta_k - \operatorname{tr}(S_k \Theta_k) \big) - \lambda_k \|\Theta_k\|_1 \Big] \tag{3}$$
where
$$n_k = \sum_{r=1}^{R}\sum_{t=1}^{T} \gamma(z^r_{t,k}), \qquad S_k = \frac{1}{n_k} \sum_{r=1}^{R}\sum_{t=1}^{T} \gamma(z^r_{t,k})\,(x^r_t - \mu_k)(x^r_t - \mu_k)^\top \tag{4}$$
This is exactly the formulation of the sparse Gaussian graphical model, so the optimization of Equation (3) can be solved by the glasso algorithm as in [30]. Fortunately, the newly introduced penalty does not affect the E-step computation, so we can plug it into the original EM iteration. The algorithm is summarized in Algorithm S1 in the supplementary material. Once the EM algorithm converges and the estimates of $\mu_k$, $\Theta_k$ are obtained, we can further decode the sequence of hidden states via the Viterbi algorithm [33].
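As a sanity check on this M-step, the responsibility-weighted statistics of Eq. (4) and the per-state penalized objective of Eq. (3) can be written directly in numpy. This sketch only evaluates the objective that glasso maximizes; the glasso solver itself is not reproduced here:

```python
import numpy as np

def weighted_stats(X, gamma_k):
    # Eq. (4): effective sample size, weighted mean, and weighted scatter S_k
    n_k = gamma_k.sum()
    mu_k = (gamma_k[:, None] * X).sum(axis=0) / n_k
    diff = X - mu_k
    S_k = (gamma_k[:, None] * diff).T @ diff / n_k
    return n_k, mu_k, S_k

def penalized_objective(Theta, S, n_k, lam):
    # Per-state term of Eq. (3): n_k (log det Theta - tr(S Theta)) - lam * ||Theta||_1
    _, logdet = np.linalg.slogdet(Theta)
    return n_k * (logdet - np.trace(S @ Theta)) - lam * np.abs(Theta).sum()
```

In practice one would hand `S_k` and `lam` to a glasso solver (e.g., a graphical lasso implementation) and use this objective only to monitor convergence.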
There are two parameters to be determined: the state number $K$ and the sparsity trade-off term $\lambda_k$. Although most studies use the Bayesian information criterion (BIC) or cross validation for model selection, Figueiredo et al. [34] demonstrated on real and synthetic data that the mixture minimum description length (MMDL) is a better choice, as it accounts for the effective code length of each state in a mixture model. MMDL is defined by
$$\mathrm{MMDL}(K) = -\log p(X \mid \hat{\theta}) + \sum_{k=1}^{K} \frac{d_k}{2} \log n_k \tag{5}$$
where $d_k$ is the degrees of freedom of $\mu_k$ and $\Theta_k$, counted as $N$ parameters for the mean plus the number of non-zero entries in the precision matrix $\Theta_k$. The idea of MMDL can be explained with the minimum description length principle in [35]. The state number $K$ is determined by minimizing the MMDL over a given list of candidate values. For the model selection on $\lambda_k$, we employ the universal regularization technique in [27], reweighting each state by its effective sample size $n_k$; the universal choice of $\lambda_k$ then scales with $n_k$ and the diagonal elements of $S_k$.
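The universal choice can be sketched as follows. This is a loose illustration only: the exact constant and scaling in [27] are not reproduced, so the functional form below (a $\sqrt{\log N / n_k}$ rate scaled by the diagonal of $S_k$) is an assumption standing in for the published rule:

```python
import numpy as np

def universal_lambda(gamma_k, S_k, z=2.0):
    # Hypothetical sketch of the "universal" regularization choice:
    # reweight by the effective sample size n_k = sum_t gamma_k(t), then
    # set lam_k proportional to sqrt(log N / n_k), scaled by the largest
    # diagonal element of the weighted scatter S_k. The constant z is a
    # placeholder, not the value from the cited work.
    n_k = gamma_k.sum()
    N = S_k.shape[0]
    return z * np.max(np.diag(S_k)) * np.sqrt(np.log(N) / n_k)
```

The key property the sketch preserves is that states visited less often (smaller $n_k$) receive a stronger penalty, which is what makes the per-state reweighting meaningful.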
III. RESULTS
A. Analysis of simulated dataset
a), Simulation data generation.
We first validate the proposed sparse HMM (SPHMM) on two simulated data sets. The SimTB toolbox [36] was employed to generate the simulated fMRI time series.
For the first simulation data set, the BOLD signals were generated with 60 ROIs. For each ROI, the time series contained 120 time points and the repetition time (TR) was 2 seconds. The state number was set to 4, and each state consists of a different permutation of 4 components. Each component includes 8 to 16 ROIs and is activated for different periods; a binary value was assigned to denote the activation or inactivation of each component. The 4 components are highlighted in Figure 2(a) and their activation and deactivation along the time dimension are shown in Figure 2(b). For example, all 4 components are activated during the first 30 time points, while only the 4th component is activated during the last 30 time points. Figure 2 illustrates one run of the first simulation data set.
Fig. 2.

Illustration of the first simulation data set: (a) the 4 components activated in the time series; (b) the activation and deactivation of the 4 components along the time dimension.
The experiment was run on 5 independent synthetic data sets (each containing 200 subjects) with the same ground-truth connectivity patterns. With the second simulation data set, we want to verify whether the proposed HMM can track fast transitions of dFNCs. We used 3 states obtained from a sparse HMM estimation on the task fMRI data of the PNC study; the estimated HMM was then regarded as the ground truth for generating the simulation data. To embody inter-subject variance, a small perturbation was added to each covariance matrix by adding a random matrix with scale factor 0.05. Finally, 200 subjects' time courses were simulated with 200 discrete samples each.
b), Simulation Results.
To evaluate the performance of the proposed sparse HMM method, we compare the estimated dynamic connectivity with the ground truth and with the estimation from the sliding window approach. In the sliding window approach, we tested two different window lengths (the longer being 40 time points) to illustrate their subtle effect, and the sparse regularization parameter was selected according to [17]. In the sparse HMM method, $\lambda_k$ was determined by the universal choice. The details of the dynamic connectivity result are summarized in Figure S1 in the supplementary material.
To validate the simulation result, we further calculate the relative error as well as the Wasserstein distance (WD) (Eq. 6) between the ground-truth connectivity and the estimates of these approaches. The relative error is defined as the norm of the estimation error normalized by the norm of the ground-truth connectivity matrix.
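A minimal sketch of this error measure, assuming the Frobenius norm (the standard choice for comparing connectivity matrices; the paper's printed definition is not reproduced here):

```python
import numpy as np

def relative_error(Sigma_hat, Sigma_true):
    # Relative estimation error of a connectivity matrix in Frobenius norm:
    # ||Sigma_hat - Sigma_true||_F / ||Sigma_true||_F
    return (np.linalg.norm(Sigma_hat - Sigma_true, 'fro')
            / np.linalg.norm(Sigma_true, 'fro'))
```

A perfect estimate gives 0, and uniformly scaling the truth by a factor (1 + e) gives exactly e, which makes the values in Table I easy to interpret.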
Table I shows the relative error and the WD of the three aforementioned estimations with respect to the ground truth. A bold value indicates the best result in its column. From the table, we can see that the proposed model fits two of the four connectivity matrices better than the sliding window approach under both measures; for the remaining two states, our method also achieves comparable results.
TABLE I.
Relative Error (upper block) and WD (lower block) of the 4 Estimated Connectivity States

| Relative error | State 1 | State 2 | State 3 | State 4 |
|---|---|---|---|---|
| SPHMM | 0.3801(0.0690) | 0.3477(0.0521) | 0.3819(0.0540) | 0.5145(0.0987) |
| SLW (shorter window) | 0.4549(0.0225) | 0.3381(0.0401) | 0.4001(0.0851) | 0.7630(0.112) |
| SLW (window = 40) | 0.6661(0.1030) | 0.3224(0.0530) | 0.6992(0.0923) | 0.9391(0.144) |

| WD | State 1 | State 2 | State 3 | State 4 |
|---|---|---|---|---|
| SPHMM | 0.3107(0.0594) | 0.2961(0.0345) | 0.3413(0.0327) | 0.4167(0.0798) |
| SLW (shorter window) | 0.3800(0.0198) | 0.2717(0.0315) | 0.3165(0.0542) | 0.5496(0.0940) |
| SLW (window = 40) | 0.5222(0.0860) | 0.3081(0.0374) | 0.4980(0.0903) | 0.6257(0.0836) |
Then we used the second simulation data set to test whether SPHMM can reveal fast transient brain states. To do so, we first find the time points whose state is accurately predicted by SPHMM and then calculate the ratio of these time points to the simulated time length. We obtain 95.32% ± 1.75% accuracy across the population even under the assumption of high variance among subjects. As an illustration, an example of the inferred state sequence and its ground truth is presented in Figure 3(a) and (b). In Figure 3(c), the ground-truth mean activation (mean of the state's Gaussian distribution) and functional connectivity (off-diagonal elements of the state covariance matrices) are plotted against the SPHMM estimates. In both figures, each color represents a different state and each dot represents an ROI (if showing the mean) or a pair of ROIs (if showing functional connectivity). Both results demonstrate that the proposed SPHMM method can recover the ground-truth state sequence as well as the underlying Gaussian distributions.
Fig. 3.

Results for the second simulation data set. (a), (b) Ground truth and inferred state time courses for one subject, using the multivariate Gaussian observation model with high inter-subject variability. Time intervals with noticeable differences are marked with red rectangles; for most of the time points, the proposed model is able to decode the ground-truth state accurately. (c) Mean activation (mean of the Gaussian distribution) and functional connectivity (off-diagonal elements of the covariance matrix) of the ground-truth model against the SPHMM estimates; each color represents a different state, and each dot represents either a region (if showing the mean) or a pair of regions (if showing functional connectivity).
B. Real data analysis
1). Data acquisition and preprocessing:
The Philadelphia Neurodevelopmental Cohort (PNC) is a large-scale collaborative study between the Brain Behavior Laboratory at the University of Pennsylvania and the Children's Hospital of Philadelphia [37]. The data are available in the dbGaP database, which contains (among other data modalities) resting-state fMRI data from nearly 1000 adolescents aged 8 to 21 years. In our study, 840 subjects (451 female and 389 male) with age and gender labels were selected. The 6′18″ scanning session contains 124 time points. Standard preprocessing steps were applied using SPM12, including motion correction, spatial normalization to standard MNI space (adult template), and spatial smoothing with a Gaussian kernel. The influence of motion (6 parameters) was further addressed using a regression procedure, and the functional time series were band-pass filtered.
Then the preprocessed data were decomposed into functional networks by Group Independent Component Analysis (GICA) [38], [39] as implemented in the GIFT toolbox. Following a pipeline similar to [17], [40] and [41], we first performed a PCA on each subject to reduce the subject-specific dimension to 120. The PCA-reduced data of each subject were concatenated in the time dimension and then passed through another PCA, reducing the group-level dimension to 100. After that, ICA was run and repeated 10 times using the infomax algorithm. The multi-run results were then clustered by ICASSO and the cluster centroids were used as the stable estimate. To further estimate the subject-specific spatial maps and time courses, the GIG-ICA approach [42], [43] was used for back-reconstruction. Finally, we made a selection from the 100 components according to several considerations [43], [44]: removing components whose peak coordinates (in MNI space) spatially overlapped with white matter, ventricles, brain stem, or cerebellum according to visual inspection, and removing components whose power was dominated by high-frequency fluctuations according to the low-frequency to high-frequency power ratio. After this screening, 50 of the 100 components were selected and labeled using their peak functional regions. These 50 components were then divided into 5 functional domains based on the literature [17], [44]: the sensorimotor network (SM) with 10 components, default mode network (DMN) with 14 components, cognitive control network (CCN) with 11 components, visual network (VIS) with 11 components, and auditory network (AUD) with 4 components. Figure 4 illustrates the spatial maps of the chosen functional domains.
After the GICA, the output time series, of size number of subjects × number of time points × number of ICA components = 840 × 124 × 50, were normalized so that for each subject and each component the mean is 0 and the standard deviation is 1.
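This per-subject, per-component z-scoring can be sketched in a few lines of numpy (array layout as stated above; the function name is ours):

```python
import numpy as np

def normalize_timecourses(tc):
    # tc: array of shape (subjects, time points, components).
    # Normalize each subject/component time course to zero mean
    # and unit standard deviation along the time axis.
    mean = tc.mean(axis=1, keepdims=True)
    std = tc.std(axis=1, keepdims=True)
    return (tc - mean) / std
```

Normalizing along the time axis (and not across subjects) keeps the subsequent HMM estimation focused on connectivity structure rather than amplitude differences between subjects.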
Fig. 4.

Functional domain spatial maps. The 50 selected components were visually inspected and grouped into 5 functional domains. Each component is illustrated with a different color within its domain. Components spread among several functional domains were assigned to the most prominent one. Detailed anatomical information on the selected components can be found in Table S1 and Figure S2 in the Supplementary Material.
To investigate the development of functional brain connectivity, the whole data set was divided into two age groups as suggested in [45]. Specifically, subjects over 200 months of age were grouped as young adults (age 18.53 ± 1.26 years, 177 females of 305 total subjects), while subjects under 160 months were grouped as children (age 11.13 ± 1.37 years, 142 females of 280 total subjects).
2). Postprocessing:
The sparse HMM was then applied to the preprocessed data to estimate the whole-brain dynamics according to Algorithm S1. Since the proposed solution converges to a local optimum, we ran the algorithm 15 times with different initializations. The initialization follows a process similar to that in the literature [25]. In each run, we randomly sampled 25% of the subjects without replacement and applied Gaussian mixture clustering to their time courses with $K$ cluster centroids; the resulting mean vectors and covariance matrices were used as the initial values in the EM algorithm. The initial probability was set equal for each state, and the transition probability matrix was initialized as a symmetric, diagonally dominant matrix.
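The uniform initial probabilities and the diagonally dominant transition matrix can be sketched as follows; the diagonal weight below is an assumed placeholder, since the exact value used in the paper is not reproduced here:

```python
import numpy as np

def init_hmm(K, diag=0.9):
    # Uniform initial state probabilities, and a symmetric, diagonally
    # dominant transition matrix: self-transition probability `diag`
    # (assumed placeholder value), with the remaining mass spread
    # evenly over the other K-1 states.
    pi = np.full(K, 1.0 / K)
    A = np.full((K, K), (1.0 - diag) / (K - 1))
    np.fill_diagonal(A, diag)
    return pi, A
```

Starting with a dominant diagonal encodes the prior expectation that brain states persist for several TRs rather than switching at every time point.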
The MMDL was then employed to select the best state number $K$: the smaller the MMDL value, the better the model for that state number. In the previous literature [17], [22], [24] and [25], the reported number of brain states varies from 4 to 12; hence we safely set the candidate number of states to range from 2 to 14. We repeated 25 runs for every $K$ between 2 and 14, calculated the respective MMDL values, averaged them over the 25 runs, and determined the best $K$ as the one with the smallest average MMDL. From the result shown in Figure 5, the smallest MMDL value appears when $K$ is 5, which is taken as the best choice. In the subsequent analysis, we set the number of states to 5 without further explanation.
Fig. 5.

(a) The selection of the state number $K$. The best $K$ is chosen with respect to the minimal MMDL; the y-axis is log-scaled to illustrate the difference. (b) The WD of the matched five states and the average WD of randomly matched states.
Another issue with the proposed model is that we must decide whether the brain dynamics obtained from multiple runs can be matched with each other given the state number $K$. In previous studies, researchers used the correlation between activation maps or connectivity matrices to match states [25], [26]. However, recall that the states are defined by different multivariate Gaussians, so a more proper way is to use a similarity measure between Gaussian distributions. Hence we use the Wasserstein distance to measure the similarity between two distributions instead of comparing parameters; Mueller et al. [46] discussed this choice and its usefulness in a gene expression study. As shown in [47], for two multivariate Gaussians $\mathcal{N}(\mu_1, \Sigma_1)$ and $\mathcal{N}(\mu_2, \Sigma_2)$, the 2nd-order WD has the closed form
$$W_2\big(\mathcal{N}(\mu_1,\Sigma_1), \mathcal{N}(\mu_2,\Sigma_2)\big)^2 = \|\mu_1 - \mu_2\|_2^2 + \operatorname{tr}\Big(\Sigma_1 + \Sigma_2 - 2\big(\Sigma_2^{1/2}\,\Sigma_1\,\Sigma_2^{1/2}\big)^{1/2}\Big) \tag{6}$$
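The closed form of Eq. (6) is straightforward to implement with a symmetric PSD matrix square root; a minimal numpy sketch (function names are ours):

```python
import numpy as np

def psd_sqrt(M):
    # Symmetric PSD matrix square root via eigendecomposition
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(np.clip(w, 0, None))) @ V.T

def gauss_w2(mu1, S1, mu2, S2):
    # 2nd-order Wasserstein distance between two Gaussians (Eq. 6):
    # W2^2 = ||mu1 - mu2||^2 + tr(S1 + S2 - 2 (S2^{1/2} S1 S2^{1/2})^{1/2})
    root = psd_sqrt(S2)
    cross = psd_sqrt(root @ S1 @ root)
    d2 = np.sum((mu1 - mu2) ** 2) + np.trace(S1 + S2 - 2.0 * cross)
    return np.sqrt(max(d2, 0.0))  # clip tiny negative round-off
```

Unlike a correlation between parameter vectors, this distance accounts jointly for differences in both the state means and the state covariances, which is what makes it suitable for matching HMM states across runs.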
Given the state number $K$, we calculated the WD between any two states from different runs and applied an iterative approach similar to the work in [25], [48] to match the brain states.
To further validate the reproducibility of our model, we randomly divided the data set into two non-overlapping halves, ran the proposed model separately on the two halves, and compared the results. In our experiment, we repeated this procedure 25 times to show that similar results can be reproduced on the two halves of the data. Additionally, we investigated how the number of subjects influences the estimation results and found that with enough subjects (over 20% of 840), we can get consistent results among the population. The details of this result are illustrated in Figure S3 in the Supplementary Material.
Once the parameters of the sparse HMM were estimated, we ran the Viterbi algorithm [33] to decode the sequence of brain states for each subject. The sequence of brain states, as a temporal feature of each subject, was then used to investigate the temporal properties of the whole population as well as of the different age groups.
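From a decoded state sequence, the temporal features used below (fractional occupancy and an empirical transition matrix) can be computed directly; a short numpy sketch:

```python
import numpy as np

def fractional_occupancy(states, K):
    # Fraction of time points spent in each of the K states
    return np.bincount(states, minlength=K) / len(states)

def empirical_transitions(states, K):
    # Row-normalized count matrix of state-to-state switches
    C = np.zeros((K, K))
    for a, b in zip(states[:-1], states[1:]):
        C[a, b] += 1
    row = C.sum(axis=1, keepdims=True)
    return C / np.maximum(row, 1)  # avoid division by zero for unvisited states
```

For example, `fractional_occupancy(np.array([0, 0, 1, 1, 0]), 2)` yields `[0.6, 0.4]`: state 0 occupies three of five time points.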
For the two age groups studied, temporal features such as the transition matrix and the fractional occupancy of each state can be compared statistically. For the comparison of dFNCs, a permutation test was employed. More specifically, we keep the group difference as the observed difference and shuffle the age labels of the two chosen groups as a permutation. Each permutation yields a random group partition, and the proposed model is run separately on the two resulting groups. This procedure was repeated 25000 times, and Bonferroni adjustment was then employed to correct for false positives. Finally, for each element of every FNC, the distribution of group differences from the random partitions is compared to the observed difference; the adjusted p value records to what extent the group difference from a random partition is more extreme than the observed one. During this test, the average running time was 432.4 s per permutation on a standard workstation (with four Intel Xeon CPU E5-2630 v3 3.30 GHz processors).
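The core of the permutation procedure can be sketched for a single scalar feature (e.g., one FNC entry); this is an illustration of the label-shuffling logic only, not the full per-state pipeline with model refitting:

```python
import numpy as np

def permutation_pvalue(group_a, group_b, n_perm=1000, rng=None):
    # Two-sided permutation test on the difference of group means.
    # Labels are shuffled; the p value is the fraction of permutations
    # whose group difference is at least as extreme as the observed one
    # (with the +1 correction so p is never exactly zero).
    rng = np.random.default_rng(rng)
    pooled = np.concatenate([group_a, group_b])
    n_a = len(group_a)
    observed = group_a.mean() - group_b.mean()
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        diff = perm[:n_a].mean() - perm[n_a:].mean()
        if abs(diff) >= abs(observed):
            count += 1
    return (count + 1) / (n_perm + 1)
```

In the paper's setting the "feature" for each permutation is recomputed by rerunning the model on the shuffled groups, which is why each permutation is expensive (432.4 s on average).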
3). Results:
We first tested the reproducibility of our method. Figure 5(b) shows the matching results of the five states over the 25 random half-half divisions, with the states sorted in descending order of similarity. From the figure, we can tell that the matching of the first four states is consistent among the different splits and runs. This consistency was further validated with a Student's t-test, which showed a significant difference ($p < 10^{-4}$) between the WD of well-matched states and that of randomly matched states.
Having validated the reproducibility of the proposed model, we investigated both the spatial and temporal properties of the 5 brain FNCs. Figure 6 shows one of the FNCs visualized with the BrainNet Viewer [49]; the complete FNCs of the five states can be found in Figure S2 of the supplementary material. There are several interesting observations from the five discovered FNC states. States 1 and 2 are similar to each other, except that state 1 contains stronger connections in the AUD-AUD and AUD-SM networks. Comparatively, states 3-5 have more distinct features. State 3 stands out with the fewest connections between the left and right hemispheres, while most of the connectivity in state 5 is inter-hemispheric. In state 4, we can clearly observe a unique SM-SM network connectivity pattern.
Fig. 6.

One of the FNCs estimated by the sparse HMM. The nodes represent the peak coordinates of the Group ICA components in MNI space and are distinguished with different colors according to their functional domain. Concrete information about the nodes is omitted here due to space limits, but readers can find more details in Table S1 in the supplementary material. The size of a node represents its node connectivity strength (NCS): for the $i$th node, the NCS is defined as the sum of the absolute values of the $i$th row of the state's connectivity matrix. A greater NCS implies a stronger connection of this node to the other regions. The color of an edge, red or blue, indicates the correlation or anti-correlation of its two end nodes, and its width indicates the connectivity strength between them. The connectivity is displayed from left to right in sagittal (left), axial, and sagittal (right) views.
In addition, we considered several temporal properties of the brain dynamics in detail. Figure 7 shows the mean fractional occupancy (FO) of each state in the population, along with the variance exhibited across subjects. From the figure, we see a significant variation of the FO between states, which implies that some states (like 1 or 2) are visited more often than the others. The FO also varies from subject to subject, which motivated us to explore this temporal feature further in different subgroups of the population; we leave this discussion to the following experiment on age-related subgroups. The state-related as well as the subject-related variation were further confirmed by similar results in the literature [25], [26].
Fig. 7.

(a) The fractional occupancy (FO) of the five states. (b) The transition matrix of the five brain states. (c) The correlation matrix of the FOs.
Figure 7(b) displays the transition probability matrix of the brain dynamics, from which we can check how the brain states switch between each other. For example, states 1 and 4 rarely switch to each other, while some transitions (like those between states 2, 3 and 4) happen more frequently than others. We further checked the correlation of the FOs in the whole data set; the result is shown in Figure 7(c). Interestingly, the correlation matrix and the transition matrix are quite similar to each other: states 2, 3, 4 and states 1, 5 form two clusters within which the states are strongly correlated with each other, and states switch more often between intra-cluster states. In other words, the organization of the brain states shows some hierarchical structure, which can be considered a confirmation of the result in previous research [22]. Given the great variation of the FOs among the population, we further validated our model on group data, comparing the developmental brain network connectivity in two age groups, children and young adults, extracted from the PNC data as mentioned in the preprocessing section.
The differences between young adults and children were first examined based on the temporal features extracted from the sparse HMM. As mentioned in the postprocessing section, we plot the means and standard deviations of the fractional occupancy at the group level in Figure 8. In the child group, the fractional occupancy, the state transition matrix, and the correlation matrix are quite similar to the results at the whole data set level: state 1 and state 5 are correlated with each other while anti-correlated with the rest of the states. In the young adult group, the case is different: state 1 becomes more dominant and stable (with a smaller standard deviation), states 2 and 4 appear less frequently, and the frequency of state 5 increases greatly. A t-test indicates a significant difference in the mean FOs between the two age groups with p < 0.001.
Fig. 8.

The temporal features in the two age groups: (a)(d) the fractional occupancy, (b)(e) the transition probability matrix, (c)(f) the correlation matrix of the FOs.
The switching between states also differs, as seen in the transition matrices plotted in Figure 8(b)(e). Compared to the child group, the young adults' transition matrix is less symmetric and turns out to be less random as well. The rare appearance of states 3 and 4 isolates them from the other states in young adults. Figure 8(c) and (f) show the correlation of the FOs within each group. It should be noted that the state occurrence frequencies of young adults show a distinct pattern from those of the children and the whole population. Though state 3 has comparable occupancy in both groups, its interaction with the other states behaves distinctly: state 3 is correlated with state 1 and anti-correlated with state 2 in young adults, while the correlation is reversed in children. In addition, the correlation between states 2, 3, and 4 disappears as age increases. We can divide the 5 states into two meta-states according to this correlation: meta-state 1 includes states 2, 3, and 4, while meta-state 2 comprises states 1 and 5. This finding is consistent with [22], in which researchers found that brain dynamics are hierarchically organized in time. The result also confirms that even in a task-free scenario, the transition and appearance of brain states are not random: the previous state constrains the future states. Since we found that this organization disappears in young adults, it is well worth exploring the temporal organization of brain activity from a developmental view. Moreover, we also explored whether the age groups can be discriminated based on the distribution of FOs. A two-dimensional representation of the FO was generated by t-distributed Stochastic Neighbor Embedding (t-SNE) [50]. Since visualization results can be sensitive to the predefined distance, we chose four different distances to illustrate the separation; the other t-SNE parameters were set to their default values in MATLAB.
From Figure S4 in the supplementary material, we can see a well-clustered structure of the dimension-reduced FO in the two age groups regardless of the chosen distance, which confirms that the subjects' FO is informative.
To illustrate how the dFNC changes as age advances, we compared the discovered dFNCs between children and young adults. The comparison of the dFNCs between the two groups is shown in Figure 9 (details of the two groups' FNCs can be found in Figure S5 in the supplementary material). We also used the results of the sliding window method as a benchmark for our approach. We observe that the two groups barely show connectivity differences in state 1; this finding is confirmed by both approaches. Group differences can be moderately observed in states 2 to 4. For example, the sparse HMM method revealed significant differences in SM-VIS connectivity (states 2 and 3) and DMN-SM connectivity (state 3), which is comparable with the results obtained from the sliding window approach. State 5 shows distinct patterns between the two groups: young adults show stronger CN-CN, CN-SM, and DMN-DMN connectivity relative to children. Several studies have shown that inter-subnetwork and intra-subnetwork connectivity play a key role in higher cognitive functioning and emotional processing. Our findings confirm those results.
Fig. 9.

The comparison of the five FNCs between children and young adults (young adults - children). Upper row: the permutation test result of the SPHMM. The Bonferroni adjustment was employed to further correct for false positives. Only entries whose p value is less than (or equal to) the corrected threshold are displayed in scale. Lower row: the two-sample t-test of the sliding window method (SLW). A two-sample t-test was performed across subject median dynamic FCs by state, at the 5% significance level and under false discovery rate (FDR) correction (q < 0.001). The results are visualized by plotting the log of the p value with the sign of the t statistic.
From the group comparison, it should be noted that states 1 and 2 share similar functional connectivity in both groups without much distinction. These two states occupy around 70% of the scanning time (75.3% in children and 68.4% in adults), which is consistent with the fractional occupancy in the whole data set. This implies that the FC networks do not change drastically but rather gradually with brain maturity. This result is also in line with the findings of previous research [21], [51]. For state 5, however, beyond the spatial differences there is also a significant difference in its fractional occupancy between the two groups: young adults show more state 5 appearances during scanning. These age-related differences can also provide a possible explanation for the different developmental trends in FNC [11], [52]. We believe that the temporal and spatial features of this specific state carry subtle developmental changes that underlie the emergence of the brain's cognitive and emotional functions.
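The entrywise group testing summarized in Figure 9 (two-sample t-tests followed by multiple-comparison correction) can be sketched as follows. The data here are synthetic, and the FDR step is a hand-rolled Benjamini-Hochberg procedure for self-containment, not the authors' pipeline:

```python
import numpy as np
from scipy import stats

def benjamini_hochberg(pvals, q=0.001):
    """Boolean mask of p-values declared significant under BH FDR at level q."""
    p = np.asarray(pvals)
    order = np.argsort(p)
    m = len(p)
    thresh = q * (np.arange(1, m + 1) / m)   # step-up thresholds q*k/m
    passed = p[order] <= thresh
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    mask = np.zeros(m, dtype=bool)
    mask[order[:k]] = True                    # all ranks up to the largest passing rank
    return mask

rng = np.random.default_rng(1)
n_edges = 45                        # e.g., upper triangle of a 10x10 FNC matrix
children = rng.normal(0.0, 1.0, size=(40, n_edges))
adults = rng.normal(0.0, 1.0, size=(40, n_edges))
adults[:, :5] += 1.5                # inject a true group difference in 5 edges

t, p = stats.ttest_ind(adults, children, axis=0)
sig = benjamini_hochberg(p, q=0.001)
# Signed -log10(p), analogous to the visualization in the figure.
signed_logp = -np.log10(p) * np.sign(t)
```

Edges with the injected effect survive the correction, while most null edges do not.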
IV. Discussion
In this paper, we applied the proposed sparse HMM to the PNC data and estimated the dFNCs at the population level. Moreover, a two-group comparison between young adults and children further demonstrated the approach. The spatial and temporal features of the brain dynamics were extracted and analyzed. In both age groups, we found that states 1 and 2 have limited differences in their connectivity matrices and occupy most of the scanning time. Two similar states were also confirmed in the whole data set. This finding agrees with the results of [21], [51], in which the researchers concluded that FNC changes gradually rather than drastically as age increases. Increased intra-DMN connectivity was observed in state 5 of young adults, consistent with the findings in [53], [54]. Fair et al. [53] found that 13 published ROIs formed an integrated DMN in young adults, compared to less connected networks in children. Decreased inter-network connectivity between the DMN and CN was also observed in state 5. This finding is consistent with previous work showing that decreased connectivity between the DMN and the other ICNs appeared in two states of the dFNCs, constituting a core feature of the adults' state repertoire [11]. The functional development of brain systems combines increasing long-range connections with decreasing short-range connections [55], [56]; in other words, brain development transitions from segregation to integration. As the brain matures, the whole-brain FNC changes from a relatively locally distributed network into a more interconnected network [57]. We also used the FO as a feature to illustrate the difference between children and young adults, and found a significant difference between the distributions of the FO in the two age groups.
The consistent appearance of hierarchical organization in the whole data set and in children is in line with the previous finding that human brain dynamics are not randomly organized [22]. Moreover, the disappearance of this organization in young adults implies that this organization is also subject-specific.
Compared to the sliding window based method, the sparse HMM can capture brain connectivity even at small time scales. As shown in the simulation section, the preset window length is critical to the success of the sliding window method; generally, a choice between 25 and 50 TRs is necessary. In the sparse HMM approach, one can theoretically estimate the connectivity with only one time course. Compared to other HMM-based models, our model has several advantages. As a regularized form of a generative model, the sparse HMM has a simple mathematical form with few assumptions on the structure of the covariance matrix. To solve the optimization, one only needs to combine the traditional EM algorithm with glasso and multiple initializations. The fewer assumptions on the covariance matrix give a larger feasible region, and the sparsity regularizer leads to a solution that is easy to interpret.
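As an illustration of how the EM and glasso steps interlock, the penalized M-step can be sketched with scikit-learn's graphical_lasso. The E-step posteriors below are random stand-ins rather than true forward-backward output, so this only shows the shape of one M-step, not the authors' implementation:

```python
import numpy as np
from sklearn.covariance import graphical_lasso

rng = np.random.default_rng(2)
T, p, K = 200, 6, 3                 # time points, regions, states (hypothetical)
X = rng.normal(size=(T, p))         # observed (e.g., ICA) time courses

# Stand-in E-step output: posterior state probabilities gamma[t, k].
gamma = rng.dirichlet(np.ones(K), size=T)

precisions = []
for k in range(K):
    w = gamma[:, k]
    mu = w @ X / w.sum()                   # responsibility-weighted state mean
    Xc = X - mu
    S = (Xc.T * w) @ Xc / w.sum()          # weighted empirical covariance
    # Penalized M-step: glasso yields a sparse precision matrix per state.
    cov_k, prec_k = graphical_lasso(S, alpha=0.1)
    precisions.append(prec_k)
```

In the full algorithm this M-step alternates with a forward-backward E-step until convergence, and the whole procedure is restarted from multiple initializations.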
As a traditional model, the HMM underlies many methods that have been applied and proven useful in revealing the dynamic nature of brain functional connectivity. In the resting state, dFNCs have been estimated from fMRI and other brain imaging modalities, and the transitions between dFNCs have been shown not to be random [22], [24], [25]. In task fMRI data, the HMM can identify fast transient and short-time fluctuations [23], [26]. However, it should be noted that these methods come with different motivations and hence have key differences in their mathematical models. More precisely, they make different assumptions on the covariance matrices. In Eavani et al.'s work [24], the state covariance matrix is assumed to be approximated by a weighted sum of a family of rank-1 sparse basis matrices, where the weight vector is state-related and each basis matrix is a rank-1 sparse connectivity pattern. This basis decomposition idea makes the result more interpretable than other dFNC estimation approaches. However, the approximation can be problematic since the approximated covariance matrix can be singular without further regularization, which might lead to an early stop in an EM iteration; the optimization is also complicated in that case. In Chen et al.'s paper [25], the covariance matrices were assumed to be diagonal, and only the mean vectors of the normal distributions were used for analysis as activation patterns. Moreover, in [22], [23], [26], the HMM model was employed to discover fast transient networks from magnetoencephalography (MEG) data as well as dynamic brain networks from fMRI data. In their approach, no regularization was added beyond variational inference, which was used to improve the computational efficiency of HMM on large data sets. With a dense connectivity matrix, the result is hard to interpret.
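For intuition, the rank-1 basis decomposition assumed in [24] can be mimicked with a toy numpy construction (all dimensions hypothetical). It also shows why such an approximation can be singular when the number of basis patterns is smaller than the data dimension:

```python
import numpy as np

rng = np.random.default_rng(3)
p, K = 8, 3                        # regions, number of sparse basis patterns
B = rng.normal(size=(p, K))
B[np.abs(B) < 1.0] = 0.0           # sparsify the basis vectors
w = rng.uniform(0.1, 1.0, size=K)  # state-related nonnegative weights

# Covariance approximated as a weighted sum of rank-1 outer products.
Sigma = sum(w[k] * np.outer(B[:, k], B[:, k]) for k in range(K))

rank = np.linalg.matrix_rank(Sigma)   # at most K < p, hence Sigma is singular
```

Because the rank is bounded by K, such a covariance cannot be inverted without extra regularization, which is the issue noted above.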
There are two main drawbacks of the sparse HMM approach: matching states across runs and local minima. As in our postprocessing step, one needs a method to match the states from independent realizations of the algorithm. Yang et al. proposed a greedy algorithm for matching the results of several realizations [48], and the technique was also applied in [25] by matching the correlations between the mean activation maps. Similarly, Vidaurre et al. used the Hungarian algorithm based on the correlation [26]. However, the HMM cannot guarantee that the correlation matrix of the mean vectors and the correlation matrix between the covariance matrices are similar to each other. We used a combination of the Hungarian and greedy methods to avoid mismatching. Both a finer matching strategy and a better measure of similarity are necessary for perfect matching. Recently, research on the Wasserstein distance [58] has shown potential for better matching of complicated distributions or stochastic processes. The second problem is that the EM algorithm converges to local minima and therefore requires multiple initializations. In comparison, the sliding window method only requires repeating the k-means clustering several times, which is more efficient than HMM-based methods. In [27], the authors proposed a backward greedy tuning algorithm to address this problem as well as model selection. However, deleting and combining states should be done cautiously, because one may discard or merge a less frequently occurring but meaningful brain state in the process.
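A correlation-based Hungarian matching of states between two runs, in the spirit of [26], can be sketched with scipy's linear_sum_assignment; the state means here are synthetic:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(4)
K, p = 5, 10                       # states, regions (hypothetical)
means_run1 = rng.normal(size=(K, p))
perm = rng.permutation(K)
# Run 2 recovers the same states in a different order, plus small noise.
means_run2 = means_run1[perm] + 0.01 * rng.normal(size=(K, p))

# Cross-correlation between run-1 and run-2 state means; negate for a cost.
corr = np.corrcoef(means_run1, means_run2)[:K, K:]
row, col = linear_sum_assignment(-corr)
# col[i] is the run-2 state matched to run-1 state i.
```

Here the recovered assignment inverts the permutation, i.e. `perm[col]` is the identity; on real estimates one would match on covariance similarity as well, since mean correlations alone can be ambiguous, as noted above.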
V. Conclusion
In this paper, we applied the proposed sparse HMM to resting-state fMRI data to investigate brain dynamics. The underlying FNCs were discretized into a set of states, and the whole-brain FNCs were modeled with the precision matrices of a set of Gaussian distributions. The model was validated with both simulated data and real rs-fMRI time series (the PNC data). The performance of the sparse HMM was compared with the popular sliding window approach on simulated data. The results indicated that our model can estimate the brain FNC even with limited observations or abrupt changes, which could be overlooked by the sliding window approach. As a real application, we analyzed the PNC data with our model and compared the differences in brain dFNCs between young adults and children. By comparing the FNCs of the two age groups, we found that young adults' brains have well-developed intra-network connectivity while children's brains are characterized by stronger inter-network connectivity. The hierarchical temporal organization of brain states in children and at the whole data set level was also confirmed through the transition matrix and the correlation analysis of the FO of states. In brief, the proposed sparse HMM is an alternative to the sliding window method and other HMM-based models, with better performance: it can efficiently estimate both stationary and abrupt FNCs with sparse precision matrices. As a simple and powerful tool, it can be used for various brain network analysis tasks.
Supplementary Material
Acknowledgment
The authors would like to acknowledge partial support from NIH (P20GM109068, R01MH104680, R01MH107354, R01MH103220, R01EB020407) and NSF (#1539067).
References
- [1].Demirci O, Clark VP, Magnotta VA, Andreasen NC, Lauriello J, Kiehl KA, Pearlson GD, and Calhoun VD, “A review of challenges in the use of fmri for disease classification/characterization and a projection pursuit application from a multi-site fmri schizophrenia study,” Brain imaging and behavior, vol. 2, no. 3, pp. 207–226, 2008.
- [2].Arbabshirani MR, Plis S, Sui J, and Calhoun VD, “Single subject prediction of brain disorders in neuroimaging: promises and pitfalls,” NeuroImage, vol. 145, pp. 137–165, 2017.
- [3].Zhi D, Ma X, Lv L, Ke Q, Yang Y, Yang X, Pan M, Qi S, Jiang R, Du Y et al., “Abnormal dynamic functional network connectivity and graph theoretical analysis in major depressive disorder,” in 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE, 2018, pp. 558–561.
- [4].de Lacy N and Calhoun VD, “Dynamic connectivity and the effects of maturation in youth with attention deficit hyperactivity disorder,” Network Neuroscience, pp. 1–22, 2018.
- [5].Kaiser RH, Whitfield-Gabrieli S, Dillon DG, Goer F, Beltzer M, Minkel J, Smoski M, Dichter G, and Pizzagalli DA, “Dynamic resting-state functional connectivity in major depression,” Neuropsychopharmacology, vol. 41, no. 7, p. 1822, 2016.
- [6].Rashid B, Damaraju E, Pearlson GD, and Calhoun VD, “Dynamic connectivity states estimated from resting fmri identify differences among schizophrenia, bipolar disorder, and healthy control subjects,” Frontiers in human neuroscience, vol. 8, p. 897, 2014.
- [7].Khalili-Mahani N, Zoethout RM, Beckmann CF, Baerends E, de Kam ML, Soeter RP, Dahan A, van Buchem MA, van Gerven JM, and Rombouts SA, “Effects of morphine and alcohol on functional brain connectivity during resting state: A placebo-controlled crossover study in healthy young men,” Human brain mapping, vol. 33, no. 5, pp. 1003–1018, 2012.
- [8].Ma N, Liu Y, Li N, Wang C-X, Zhang H, Jiang X-F, Xu H-S, Fu X-M, Hu X, and Zhang D-R, “Addiction related alteration in resting-state brain connectivity,” Neuroimage, vol. 49, no. 1, pp. 738–744, 2010.
- [9].Meda SA, Stevens MC, Folley BS, Calhoun VD, and Pearlson GD, “Evidence for anomalous network connectivity during working memory encoding in schizophrenia: an ica based analysis,” PloS one, vol. 4, no. 11, p. e7911, 2009.
- [10].Liu X, Gao Y, Di Q, Hu J, Lu C, Nan Y, Booth JR, and Liu L, “Differences between child and adult large-scale functional brain networks for reading tasks,” Human brain mapping, vol. 39, no. 2, pp. 662–679, 2018.
- [11].Hutchison RM and Morton JB, “Tracking the brain’s functional coupling dynamics over development,” Journal of Neuroscience, vol. 35, no. 17, pp. 6849–6859, 2015.
- [12].Franco AR, Pritchard A, Calhoun VD, and Mayer AR, “Interrater and intermethod reliability of default mode network selection,” Human brain mapping, vol. 30, no. 7, pp. 2293–2303, 2009.
- [13].Fox MD, Snyder AZ, Vincent JL, Corbetta M, Van Essen DC, and Raichle ME, “The human brain is intrinsically organized into dynamic, anticorrelated functional networks,” Proceedings of the National Academy of Sciences, vol. 102, no. 27, pp. 9673–9678, 2005.
- [14].Fox MD and Raichle ME, “Spontaneous fluctuations in brain activity observed with functional magnetic resonance imaging,” Nature reviews neuroscience, vol. 8, no. 9, p. 700, 2007.
- [15].Hutchison RM, Womelsdorf T, Allen EA, Bandettini PA, Calhoun VD, Corbetta M, Della Penna S, Duyn JH, Glover GH, Gonzalez-Castillo J et al., “Dynamic functional connectivity: promise, issues, and interpretations,” Neuroimage, vol. 80, pp. 360–378, 2013.
- [16].Hutchison RM, Womelsdorf T, Gati JS, Everling S, and Menon RS, “Resting-state networks show dynamic functional connectivity in awake humans and anesthetized macaques,” Human brain mapping, vol. 34, no. 9, pp. 2154–2177, 2013.
- [17].Allen EA, Damaraju E, Plis SM, Erhardt EB, Eichele T, and Calhoun VD, “Tracking whole-brain connectivity dynamics in the resting state,” Cerebral cortex, vol. 24, no. 3, pp. 663–676, 2014.
- [18].Calhoun VD, Miller R, Pearlson G, and Adalı T, “The chronnectome: time-varying connectivity networks as the next frontier in fmri data discovery,” Neuron, vol. 84, no. 2, pp. 262–274, 2014.
- [19].Sakoğlu Ü, Pearlson GD, Kiehl KA, Wang YM, Michael AM, and Calhoun VD, “A method for evaluating dynamic functional network connectivity and task-modulation: application to schizophrenia,” Magnetic Resonance Materials in Physics, Biology and Medicine, vol. 23, no. 5-6, pp. 351–366, 2010.
- [20].Damaraju E, Allen EA, Belger A, Ford J, McEwen S, Mathalon D, Mueller B, Pearlson G, Potkin S, Preda A et al., “Dynamic functional connectivity analysis reveals transient states of dysconnectivity in schizophrenia,” NeuroImage: Clinical, vol. 5, pp. 298–308, 2014.
- [21].Cai B, Zille P, Stephen JM, Wilson TW, Calhoun VD, and Wang YP, “Estimation of dynamic sparse connectivity patterns from resting state fmri,” IEEE Transactions on Medical Imaging, vol. 37, no. 5, pp. 1224–1234, 2018.
- [22].Vidaurre D, Smith SM, and Woolrich MW, “Brain network dynamics are hierarchically organized in time,” Proceedings of the National Academy of Sciences, vol. 114, no. 48, pp. 12827–12832, 2017.
- [23].Baker AP, Brookes MJ, Rezek IA, Smith SM, Behrens T, Smith PJP, and Woolrich M, “Fast transient networks in spontaneous human brain activity,” Elife, vol. 3, p. e01867, 2014.
- [24].Eavani H, Satterthwaite TD, Gur RE, Gur RC, and Davatzikos C, “Unsupervised learning of functional network dynamics in resting state fmri,” in International Conference on Information Processing in Medical Imaging. Springer, 2013, pp. 426–437.
- [25].Chen S, Langley J, Chen X, and Hu X, “Spatiotemporal modeling of brain dynamics using resting-state functional magnetic resonance imaging with gaussian hidden markov model,” Brain connectivity, vol. 6, no. 4, pp. 326–334, 2016.
- [26].Vidaurre D, Abeysuriya R, Becker R, Quinn AJ, Alfaro-Almagro F, Smith SM, and Woolrich MW, “Discovering dynamic brain networks from big data in rest and task,” Neuroimage, 2017.
- [27].Städler N and Mukherjee S, “Penalized estimation in high-dimensional hidden markov models with state-specific graphical models,” The Annals of Applied Statistics, pp. 2157–2179, 2013.
- [28].McGibbon RT, Ramsundar B, Sultan MM, Kiss G, and Pande VS, “Understanding protein dynamics with l1-regularized reversible hidden markov models,” in Proceedings of the 31st International Conference on International Conference on Machine Learning - Volume 32. JMLR.org, 2014, pp. II–1197.
- [29].Welch LR, “Hidden markov models and the baum-welch algorithm,” IEEE Information Theory Society Newsletter, vol. 53, no. 4, pp. 10–13, 2003.
- [30].Friedman J, Hastie T, and Tibshirani R, “Sparse inverse covariance estimation with the graphical lasso,” Biostatistics, vol. 9, no. 3, pp. 432–441, 2008.
- [31].Nasrabadi NM, “Pattern recognition and machine learning,” Journal of electronic imaging, vol. 16, no. 4, p. 049901, 2007.
- [32].Rabiner LR, “A tutorial on hidden markov models and selected applications in speech recognition,” Proceedings of the IEEE, vol. 77, no. 2, pp. 257–286, 1989.
- [33].Forney GD, “The viterbi algorithm,” Proceedings of the IEEE, vol. 61, no. 3, pp. 268–278, 1973.
- [34].Figueiredo MA, Leitão JM, and Jain AK, “On fitting mixture models,” in International Workshop on Energy Minimization Methods in Computer Vision and Pattern Recognition. Springer, 1999, pp. 54–69.
- [35].Barron A, Rissanen J, and Yu B, “The minimum description length principle in coding and modeling,” IEEE Transactions on Information Theory, vol. 44, no. 6, pp. 2743–2760, 1998.
- [36].Erhardt EB, Allen EA, Wei Y, Eichele T, and Calhoun VD, “Simtb, a simulation toolbox for fmri data under a model of spatiotemporal separability,” Neuroimage, vol. 59, no. 4, pp. 4160–4167, 2012.
- [37].Satterthwaite TD, Connolly JJ, Ruparel K, Calkins ME, Jackson C, Elliott MA, Roalf DR, Hopson R, Prabhakaran K, Behr M et al., “The philadelphia neurodevelopmental cohort: a publicly available resource for the study of normal and abnormal brain development in youth,” Neuroimage, vol. 124, pp. 1115–1119, 2016.
- [38].Calhoun VD, Adali T, Pearlson GD, and Pekar J, “A method for making group inferences from functional mri data using independent component analysis,” Human brain mapping, vol. 14, no. 3, pp. 140–151, 2001.
- [39].Calhoun VD and Adali T, “Multisubject independent component analysis of fmri: a decade of intrinsic networks, default mode, and neurodiagnostic discovery,” IEEE Reviews in Biomedical Engineering, vol. 5, pp. 60–73, 2012.
- [40].Erhardt EB, Rachakonda S, Bedrick EJ, Allen EA, Adali T, and Calhoun VD, “Comparison of multi-subject ica methods for analysis of fmri data,” Human brain mapping, vol. 32, no. 12, pp. 2075–2095, 2011.
- [41].Faghiri A, Stephen JM, Wang Y-P, Wilson TW, and Calhoun VD, “Changing brain connectivity dynamics: From early childhood to adulthood,” Human Brain Mapping, no. August 2017, pp. 1108–1117, 2017. [Online]. Available: http://doi.wiley.com/10.1002/hbm.23896
- [42].Du Y and Fan Y, “Group information guided ica for fmri data analysis,” Neuroimage, vol. 69, pp. 157–197, 2013.
- [43].Du Y, Allen EA, He H, Sui J, Wu L, and Calhoun VD, “Artifact removal in the context of group ica: A comparison of single-subject and group approaches,” Human brain mapping, vol. 37, no. 3, pp. 1005–1025, 2016.
- [44].Cetin MS, Houck JM, Rashid B, Agacoglu O, Stephen JM, Sui J, Canive J, Mayer A, Aine C, Bustillo JR et al., “Multimodal classification of schizophrenia patients with meg and fmri data using static and dynamic connectivity measures,” Frontiers in neuroscience, vol. 10, p. 466, 2016.
- [45].Zille P, Calhoun VD, Stephen JM, Wilson TW, and Wang Y-P, “Fused estimation of sparse connectivity patterns from rest fmri: application to comparison of children and adult brains,” IEEE Transactions on Medical Imaging, vol. 37, no. 10, pp. 2165–2175, 2018.
- [46].Mueller JW and Jaakkola T, “Principal differences analysis: Interpretable characterization of differences between distributions,” in Advances in Neural Information Processing Systems, 2015, pp. 1702–1710.
- [47].Givens CR, Shortt RM et al., “A class of wasserstein metrics for probability distributions,” The Michigan Mathematical Journal, vol. 31, no. 2, pp. 231–240, 1984.
- [48].Yang Z, LaConte S, Weng X, and Hu X, “Ranking and averaging independent component analysis by reproducibility (raicar),” Human brain mapping, vol. 29, no. 6, pp. 711–725, 2008.
- [49].Xia M, Wang J, and He Y, “Brainnet viewer: a network visualization tool for human brain connectomics,” PloS one, vol. 8, no. 7, p. e68910, 2013.
- [50].Maaten L. v. d. and Hinton G, “Visualizing data using t-sne,” Journal of Machine Learning Research, vol. 9, no. Nov, pp. 2579–2605, 2008.
- [51].Faghiri A, Stephen JM, Wang Y-P, Wilson TW, and Calhoun VD, “Changing brain connectivity dynamics: From early childhood to adulthood,” Human brain mapping, vol. 39, no. 3, pp. 1108–1117, 2018.
- [52].Qin J, Chen S-G, Hu D, Zeng L-L, Fan Y-M, Chen X-P, and Shen H, “Predicting individual brain maturity using dynamic functional connectivity,” Frontiers in human neuroscience, vol. 9, p. 418, 2015.
- [53].Fair DA, Cohen AL, Dosenbach NU, Church JA, Miezin FM, Barch DM, Raichle ME, Petersen SE, and Schlaggar BL, “The maturing architecture of the brain’s default network,” Proceedings of the National Academy of Sciences, vol. 105, no. 10, pp. 4028–4032, 2008.
- [54].Supekar K, Musen M, and Menon V, “Development of large-scale functional brain networks in children,” PLoS biology, vol. 7, no. 7, p. e1000157, 2009.
- [55].Stevens MC, Pearlson GD, and Calhoun VD, “Changes in the interaction of resting-state neural networks from adolescence to adulthood,” Human brain mapping, vol. 30, no. 8, pp. 2356–2366, 2009.
- [56].Fair DA, Dosenbach NU, Church JA, Cohen AL, Brahmbhatt S, Miezin FM, Barch DM, Raichle ME, Petersen SE, and Schlaggar BL, “Development of distinct control networks through segregation and integration,” Proceedings of the National Academy of Sciences, vol. 104, no. 33, pp. 13507–13512, 2007.
- [57].Fair DA, Cohen AL, Power JD, Dosenbach NU, Church JA, Miezin FM, Schlaggar BL, and Petersen SE, “Functional brain networks develop from a local to distributed organization,” PLoS computational biology, vol. 5, no. 5, p. e1000381, 2009.
- [58].Chen Y, Ye J, and Li J, “A distance for hmms based on aggregated wasserstein metric and state registration,” in European Conference on Computer Vision. Springer, 2016, pp. 451–466.