Author manuscript; available in PMC: 2021 Jul 21.
Published in final edited form as: Med Image Comput Comput Assist Interv. 2020 Sep 29;12267:613–624. doi: 10.1007/978-3-030-59728-3_60

Deep Representation Learning For Multimodal Brain Networks

Wen Zhang 1, Liang Zhan 2, Paul Thompson 3, Yalin Wang 1
PMCID: PMC8293685  NIHMSID: NIHMS1620849  PMID: 34296225

Abstract

Applying network science approaches to investigate the functions and anatomy of the human brain is prevalent in modern medical imaging analysis. Due to the complex network topology, mining a discriminative network representation from an individual's multimodal brain networks is non-trivial. The recent success of deep learning techniques on graph-structured data suggests a new way to model the non-linear cross-modality relationship. However, current deep brain network methods either ignore the intrinsic graph topology or require a network basis shared within a group. To address these challenges, we propose a novel end-to-end deep graph representation learning framework (Deep Multimodal Brain Networks, DMBN) to fuse multimodal brain networks. Specifically, we decipher the cross-modality relationship through a graph encoding and decoding process. The higher-order network mappings from brain structural networks to functional networks are learned in the node domain. The learned network representation is a set of node features that are informative enough to induce brain saliency maps in a supervised manner. We test our framework on both synthetic and real image data. The experimental results show the superiority of the proposed method over several state-of-the-art deep brain network models.

Keywords: Multimodality, Brain networks, Network representation, Deep learning, Graph topology

1. Introduction

There is growing scientific interest in understanding the functional and structural organization of the human brain from large-scale multimodal brain imaging data. In medical imaging analysis, a popular approach to this task is to explore brain regional connections (i.e., brain networks) measured from brain imaging signals. The topological patterns of brain networks are closely related to brain functional organization [4], and the breakdown of connections between relevant brain regions is intimately associated with the progression of neurodegenerative diseases [12,22] and with normal brain development [36]. However, patterns of focal damage in brain networks differ across modalities, making the mining of multimodal network changes difficult.

Deep learning methods have been successfully applied to extract biological information from neuroimaging data [24,29]. Most prior brain network analyses represent the graph structure as a grid-like image to enable convolutional computation [21,7,34]. More recently, deep graph convolutional networks (GCNs) have been introduced to brain network research [1,14,16]. These studies perform localized convolutional operations at either graph nodes or edges, and can be categorized into graph spectral convolution [1,16] and graph spatial convolution [9]. The former is suitable for node-centric problems defined on fixed-size neighborhood graphs; for graph-centric problems, the spectral method requires a group-wise graph structure before approximating the spectral graph convolution, so its performance depends to a large extent on the predefined network basis. Moreover, the existing framework [14] is designed for a single modality and lacks a well-defined k-hop convolutional operator on each node. This makes multimodal brain network fusion intractable in the node domain and thus makes it difficult to draw brain saliency maps.

In this paper, we propose a novel GCN model for multimodal brain network analysis. Two naturally coherent brain network modalities, i.e., functional and structural brain networks, are considered. The structural network acts as an anatomical skeleton that constrains brain functional activities and, in return, consistent functional activities reshape the structural network in the long term [4]. Hence, we argue for the existence of a high-level dependency, namely network communication [2], across them. It is deciphered by a deep encoding-decoding graph network in our model. Meanwhile, the obtained node features support representation learning of the brain network structure in a supervised manner. The contributions are four-fold. (1) This is the first paper to use deep graph learning to model how brain function evolves from its structural basis. (2) We propose an end-to-end automatic brain network representation framework based on the intrinsic graph topology. (3) We model the cross-modality relationship through a deep graph encoding-decoding process based on the proposed multi-stage graph convolution kernel. (4) We draw graph saliency maps subject to the supervised tasks, enabling phenotypic and disease-related biomarker detection.

2. Methodology

Multimodal Brain Network Data.

A brain network uses a graph structure to describe interconnections between brain regions. It is a weighted graph $G = \{V, E, X\}$, where $V = \{v_i\}_{i=1}^{N}$ is the node set indicating brain regions, $E = \{\epsilon_{i,j}\}$ is the edge set, and $X = \{x_{i,j}\}$ is the set of corresponding edge weights. For a given subject, we have a pair of networks $\{G_f, G_d\}$, where $G_f = \{V, E_f, X_f\}$ represents the functional brain network and $G_d = \{V, E_d, X_d\}$ the structural brain network. The two networks share the same node set, i.e., an identical definition of brain regions, but differ in network topology and edge weights. An edge weight $x_{i,j}^{f}$ in $G_f$ is the correlation of fMRI signals between nodes $v_i$ and $v_j$, while a structural edge weight $x_{i,j}^{d}$ in $G_d$ is the fiber tractography probability between them.
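For concreteness, the following is a minimal sketch of this paired-network data structure; the node count, sparsity level, and random stand-in data are illustrative assumptions, not values from the paper.

```python
import numpy as np

N = 82  # number of brain regions (atlas-dependent; 82 is an assumption)

# Functional network Gf: fMRI signal correlations, symmetric, in [-1, 1].
# A random time-series matrix stands in for real fMRI signals here.
Xf = np.corrcoef(np.random.randn(N, 1200))

# Structural network Gd: tractography-derived connection probabilities in
# [0, 1], typically much sparser than the functional network.
Xd = np.random.rand(N, N) * (np.random.rand(N, N) < 0.2)
Xd = np.triu(Xd, 1)
Xd = Xd + Xd.T  # symmetrize, zero diagonal

subject = {"Gf": Xf, "Gd": Xd}  # the pair {Gf, Gd} for one subject
```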

2.1. Multi-Stage Graph Convolution Kernel

A brain structural network can be interpreted as a freeway network on which biological information, such as brain functional signals, flows from node to node. In a brain network, a node is affected by its neighboring nodes, and this influence is negatively correlated with the shortest network distance [27]. To encode these node-to-node patterns, we adopt a spatial graph convolution kernel, which yields node embedding features that respect the local graph topology. It defines a way to aggregate node features within a given neighborhood size, e.g., 1-hop connections.

Given a target node $v_i$ and its neighborhood graph topology $G_{N(v_i)}$, the graph convolution kernel first collects the node features $h_{v_j}$ of its immediate neighbors:

$$\mathrm{AGG}(h_{v_i}) = \sum_{v_j \in N(v_i)} h_{v_j} \cdot x_{i,j}, \tag{1}$$

and then updates the node feature as:

$$h_{v_i} = \sigma\big(\mathrm{AGG}(h_{v_i}) \cdot w\big). \tag{2}$$

Here, σ is a non-linear activation function and $w \in \mathbb{R}^{F \times F'}$ is the learnable weight matrix of a fully-connected (FC) layer. Previous research shows that a k-hop convolution kernel can be decomposed into k 1-hop convolutions [15]; therefore, we stack several 1-hop convolutions to enlarge the effective receptive field on the graph.
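As a concrete illustration, here is a minimal sketch of this 1-hop convolution (Eqs. 1-2) in PyTorch, assuming dense tensors; the layer sizes and the choice of ReLU for σ are our assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class OneHopConv(nn.Module):
    """Eqs. 1-2: aggregate weighted neighbor features, then transform."""
    def __init__(self, f_in, f_out):
        super().__init__()
        self.fc = nn.Linear(f_in, f_out, bias=False)  # weight w in Eq. 2

    def forward(self, H, X):
        # H: N x F node features; X: N x N weighted adjacency (zero = no edge)
        agg = X @ H                      # Eq. 1: sum_j x_ij * h_vj per node
        return torch.relu(self.fc(agg))  # Eq. 2: sigma(AGG(h) . w)

# Stacking k 1-hop layers yields a k-hop effective receptive field [15].
conv1, conv2 = OneHopConv(16, 32), OneHopConv(32, 32)
H, X = torch.randn(82, 16), torch.rand(82, 82)
H_out = conv2(conv1(H, X), X)            # 2-hop neighborhood information
```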

A potential problem with Eq. 1 is the poor generalization of its local aggregation: the aggregation weight is fixed to $x_{i,j}$. Though these predefined values reflect biological profiles of the brain, they might not be optimal for brain network encoding, especially for the cross-modality learning pursued in our research. For example, brain regions interconnected with large weights in the brain structural network are not guaranteed to be strongly connected in the brain functional network as well [20]. Besides, compared with brain structural networks, brain functional networks are more dynamic, with fluctuating edge connections. Therefore, dynamically adjusting the aggregation weights during graph learning is favored. To this end, we adopt the idea of the graph attention network (GAT) [32]. Given each pair of node features, dynamic edge weights are learned by a single-layer feedforward neural network, i.e., $X^{ATT} = \{x_{i,j}^{ATT}\} = \{f_{att}(h_{v_i}, h_{v_j})\}$. More specifically, we first increase the expressive power of the node features with a shared linear transformation, $\tilde{h}_{v_i} = h_{v_i} \cdot w$, where $w \in \mathbb{R}^{F \times F'}$ is a learned parameter. Then, a single-layer feedforward neural network derives the edge weight:

$$\tilde{x}_{i,j} = \sigma\big(a^{T}[\tilde{h}_{v_i} \oplus \tilde{h}_{v_j}]\big), \tag{3}$$

where ⊕ is the concatenation operator and $a \in \mathbb{R}^{2F'}$ is a parameter of the feedforward network. To ensure that Eq. 3 generalizes across different nodes, a softmax layer is appended to normalize over the neighborhood:

$$x_{i,j}^{ATT} = \frac{\exp\big(\sigma(a^{T}[\tilde{h}_{v_i} \oplus \tilde{h}_{v_j}])\big)}{\sum_{k \in N(v_i)} \exp\big(\sigma(a^{T}[\tilde{h}_{v_i} \oplus \tilde{h}_{v_k}])\big)}. \tag{4}$$
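A sketch of this attention mechanism (Eqs. 3-4) under the same dense-tensor assumption; using LeakyReLU for σ and masking non-edges so the softmax runs over $N(v_i)$ are our assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EdgeAttention(nn.Module):
    """Eqs. 3-4: GAT-style [32] dynamic edge weights, single head."""
    def __init__(self, f_in, f_hid):
        super().__init__()
        self.w = nn.Linear(f_in, f_hid, bias=False)    # shared transform h~ = h . w
        self.a = nn.Parameter(torch.randn(2 * f_hid))  # a in R^{2F'}

    def forward(self, H, X):
        Ht = self.w(H)                                 # N x F'
        N = Ht.size(0)
        # Build all concatenated pairs [h~_i (+) h~_j], shape N x N x 2F'.
        pair = torch.cat([Ht.unsqueeze(1).expand(N, N, -1),
                          Ht.unsqueeze(0).expand(N, N, -1)], dim=-1)
        logits = F.leaky_relu(pair @ self.a)           # Eq. 3, sigma = LeakyReLU
        # Restrict the softmax to the neighborhood N(v_i); assumes every
        # node has at least one neighbor, or the row becomes NaN.
        logits = logits.masked_fill(X == 0, float("-inf"))
        return torch.softmax(logits, dim=1)            # Eq. 4: row-wise softmax
```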

Compared with $x_{i,j}$, $x_{i,j}^{ATT}$ depends on the node order and is thus asymmetric on edge $\epsilon_{i,j}$; it is also independent of the local network topology. In addition to the graph attention based aggregation (Fig. 1, A), we also propose a binary symmetric aggregation defined with a threshold function $\delta(x_{i,j})$ (Fig. 1, C). $\delta(x_{i,j})$ thresholds an edge by a given threshold value γ: the aggregation weight is 1 if $x_{i,j} > \gamma$, and 0 otherwise. We set γ = 0 empirically in this study. This follows the assumption that two brain regions interact strongly in the functional brain network as long as they are structurally connected [27]. To integrate all of these aggregation mechanisms, we design a multi-stage graph convolution kernel (MGCK). Eq. 1 is thus updated as:

$$\mathrm{AGG}(h_{v_i}) = \sum_{v_j \in N(v_i)} h_{v_j} \cdot (x_{i,j} + \alpha)\big(x_{i,j}^{ATT} + \beta\,\delta(x_{i,j})\big) = \sum_{v_j \in N(v_i)} h_{v_j} \cdot \big(x_{i,j}\,x_{i,j}^{ATT} + \beta\,x_{i,j}\,\delta + \alpha\,x_{i,j}^{ATT} + \alpha\beta\,\delta\big), \tag{5}$$

where α and β are learnable parameters balancing the different aggregation mechanisms. The expansion yields four aggregation terms: $x_{i,j}\,x_{i,j}^{ATT}$ and $x_{i,j}$ are the predefined network connections with and without attention weights, $x_{i,j}^{ATT}$ is the attention aggregation alone, and δ contributes the thresholded connections. Finally, we introduce multi-head learning [31] to stabilize the aggregation in MGCK: K independent multi-stage aggregations are performed, and the aggregated features are concatenated before being fed to an FC layer. Accordingly, Eq. 2 is updated as:

$$\hat{h}_{v_i} = \big\Vert_{k=1}^{K}\,\big[\sigma\big(\mathrm{AGG}^{k}(h_{v_i}) \cdot w\big)\big]. \tag{6}$$

Previous research indicates that graph convolutional networks perform poorly with deep architectures due to the high complexity of back-propagation through deep layers. To address this problem, residual blocks for GCNs [17] were proposed, inspired by the success of ResNet [10] on image data. We add a residual connection after MGCK:

$$h_{v_i} = \mathcal{F}(\hat{h}_{v_i}, \hat{w}) + w_m h_{v_i}. \tag{7}$$

$\mathcal{F}$ is an FC layer parameterized by $\hat{w}$, and $w_m$ is a projection matrix that matches the feature dimensions.
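Putting Eqs. 5-7 together, below is a condensed sketch of one MGCK layer, reusing the EdgeAttention sketch above; the head count, dimensions, and parameter initializations are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MGCK(nn.Module):
    """One multi-stage graph convolution layer (Eqs. 5-7), dense tensors."""
    def __init__(self, f_in, f_out, heads=4, gamma=0.0):
        super().__init__()
        self.att = nn.ModuleList(EdgeAttention(f_in, f_out) for _ in range(heads))
        self.fc = nn.ModuleList(nn.Linear(f_in, f_out, bias=False)
                                for _ in range(heads))
        self.alpha = nn.Parameter(torch.zeros(1))  # learnable alpha in Eq. 5
        self.beta = nn.Parameter(torch.zeros(1))   # learnable beta in Eq. 5
        self.out = nn.Linear(heads * f_out, heads * f_out)      # F(., w^) in Eq. 7
        self.res = nn.Linear(f_in, heads * f_out, bias=False)   # w_m in Eq. 7
        self.gamma = gamma                         # threshold, 0 in this study

    def forward(self, H, X):
        delta = (X > self.gamma).float()           # binary threshold aggregation
        outs = []
        for att, fc in zip(self.att, self.fc):     # K-head learning, Eq. 6
            # Eq. 5: combined aggregation weight per edge; off-edge entries
            # vanish because both the attention and delta terms are zero there.
            W = (X + self.alpha) * (att(H, X) + self.beta * delta)
            outs.append(torch.relu(fc(W @ H)))
        Hhat = torch.cat(outs, dim=-1)             # concatenate the K heads
        return self.out(Hhat) + self.res(H)        # residual connection, Eq. 7
```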

Fig. 1: Multi-stage graph convolution kernel (MGCK). Three aggregation mechanisms are dynamically combined: the graph attention weight $x_{i,j}^{ATT}$ (A), the original edge weight $x_{i,j}$ (B), and the binary weight $\delta(x_{i,j})$ (C).

2.2. Deep Multimodal Brain Networks (DMBN)

We show the pipeline of DMBN in Fig. 2. It generates multimodal graph node representations for different learning tasks. DMBN has two parts. The first part performs cross-modality learning via an encoding-decoding network: here, we reconstruct the brain functional network from the brain structural network. The brain functional network contains both positive and negative connections, and these two types of functional connectivity exhibit distinct relationships with the brain structural network [11,26]. Hence, we separate their encoding into two independent encoding networks. Each graph encoder uses several MGCK layers to aggregate node features from diverse neighborhood ranges in the structural network. The generated node features are then fed into the decoding networks to reconstruct the positive and negative connections, respectively. Specifically, for each undirected edge $e_{i,j}$, we define the reconstructed link as:

$$\hat{x}_{i,j} = \frac{1}{1 + \exp\big(h_{v_i}^{T} \cdot \Theta \cdot h_{v_j}\big)}, \tag{8}$$

where $h_{v_i}$ is a node feature vector in the network embedding space and Θ is a learnable layer weight. Eq. 8 maps the deep node embeddings $\{h_{v_i}\}$ to a connection matrix $\{\hat{x}_{i,j}\}$ whose elements range from 0 to 1, consistent with the functional connections.
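A sketch of this decoder (Eq. 8); note that the sigmoid of the negated bilinear score reproduces $1/(1+\exp(\cdot))$. The embedding width is an assumption.

```python
import torch
import torch.nn as nn

class LinkDecoder(nn.Module):
    """Eq. 8: sigmoid over a learnable bilinear form of node embeddings."""
    def __init__(self, f_emb):
        super().__init__()
        self.theta = nn.Parameter(torch.randn(f_emb, f_emb))  # Theta in Eq. 8

    def forward(self, H):
        scores = H @ self.theta @ H.t()  # h_i^T . Theta . h_j for all pairs
        return torch.sigmoid(-scores)    # 1 / (1 + exp(score)), in (0, 1)
```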

Fig. 2: Pipeline of DMBN. The structural network is fed into two independent encoding-decoding networks to generate the cross-modality encoding of the positive and negative functional connections. Meanwhile, the node features from these two networks are combined and serve as the multimodal graph representations for the supervised learning tasks via an MLP network. During this process, a brain saliency map is derived.

The second part of our model is supervised learning. The node embedding features $h_v$ from the positive and negative encoding networks are concatenated node-wise and processed by an MLP. Since our tasks are graph-level learning, global pooling is applied before the last FC layer to remove the effect of node order. Along with the supervised learning tasks, it is important to identify the key brain regions most closely associated with the tasks. Inspired by classic activation maps [1], a graph localization strategy is carried out by learning contribution scores of graph nodes. As shown in Fig. 2, suppose the final node feature matrix consists of F channels for N nodes; a global mean pooling generates a channel-wise vector treated as the network feature. Each channel therefore has a corresponding weight $w_i$ learned by the last FC layer. To obtain node-wise importance scores, we map these weights back via an inner product between the node features and the channel weights, i.e., $h_v \cdot W^T$. Finally, we rank the top-k nodes for each subject and conduct group voting to obtain the group-wise saliency map.
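A sketch of this saliency computation, assuming final node features H (N x F) and the relevant weight row of the last FC layer; k = 10 matches the top-10 regions reported later but is otherwise an assumption.

```python
import torch

def node_saliency(H, fc_weight, k=10):
    """Score nodes by projecting features onto the last FC layer's weights."""
    scores = H @ fc_weight             # h_v . W^T: one score per node
    return torch.topk(scores, k).indices

def group_saliency(subject_features, fc_weight, n_nodes, k=10):
    """Vote the per-subject top-k nodes into a group-wise saliency map."""
    votes = torch.zeros(n_nodes)
    for H in subject_features:         # one N x F matrix per subject
        votes[node_saliency(H, fc_weight, k)] += 1
    return votes
```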

There are three loss terms in DMBN, controlling the brain network reconstruction and the supervised learning task (Eq. 9). The reconstruction loss consists of global and local decoding losses that preserve different levels of graph topology:

$$\mathcal{L}_{all} = \mu_1 \mathcal{L}_{global} + \mu_2 \mathcal{L}_{local} + \mathcal{L}_{pred}. \tag{9}$$

1). Global Decoding Loss.

This term evaluates the average performance of edge reconstruction in the target network:

$$\mathcal{L}_{global} = \frac{1}{|E|} \sum_{e_{i,j}} a_{i,j} \big(\hat{x}_{i,j}^{f+} - \hat{x}_{i,j}^{f-} - x_{i,j}^{f}\big)^2, \tag{10}$$

where $a_{i,j}$ is an additional penalty on the edge reconstruction. Here, we set it to $e^{|x_{i,j}^{f}|}$, which gives higher weights to stronger connections in the brain functional network. $\hat{x}^{f+}$ and $\hat{x}^{f-}$ denote the decoded network connections from the positive and negative encoding flows, respectively.

2). Local Decoding Loss.

The cross-modality reconstruction of brain networks is challenging; hence, we do not expect full recovery of all edges but rather the reconstruction of the local graph structure on important connections, e.g., edges with strong connections in both structural and functional networks. We adopt the first-order proximity [33] to capture the local structure. The loss function is defined as:

$$\mathcal{L}_{local} = \sum_{i=1}^{n} \frac{1}{|N_i^d|} \sum_{j \in N_i^d} e^{\delta(x_{i,j}^{d})} \big\Vert h_{v_i}^{f} - h_{v_j}^{f} \big\Vert_2^2, \tag{11}$$

where $|N_i^d|$ is the number of neighboring nodes of $v_i$ in the brain structural network, and $\delta(x_{i,j}^{d})$ is the threshold function defined above, which favors strong structural connections. Eq. 11 generalizes Laplacian Eigenmaps [3] and drives connected nodes toward similar embedding features.

3). Supervised Loss.

The loss function for prediction is defined as:

$$\mathcal{L}_{pred} = -\frac{1}{K} \sum_{i=1}^{K} y_i \cdot \log\big(f_{pred}(h_{v_i})\big), \tag{12}$$

where K is the number of subjects and $f_{pred}$ is the prediction function learned by the MLP network.
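The three terms can be sketched as follows, under the dense-tensor assumptions used above; combining the positive and negative decodings as $\hat{x}^{f+} - \hat{x}^{f-}$ in Eq. 10, and the per-node normalization in Eq. 11, follow our reading of the equations.

```python
import torch
import torch.nn.functional as F

def global_loss(Xhat_pos, Xhat_neg, Xf):
    """Eq. 10: penalty-weighted edge reconstruction error."""
    a = torch.exp(Xf.abs())                  # a_ij = e^{|x^f_ij|}
    return (a * (Xhat_pos - Xhat_neg - Xf) ** 2).mean()

def local_loss(Hf, Xd, gamma=0.0):
    """Eq. 11: first-order proximity over structural neighborhoods."""
    mask = (Xd > 0).float()                  # structural neighborhoods N_i^d
    delta = (Xd > gamma).float()             # threshold function delta(x^d_ij)
    diff = torch.cdist(Hf, Hf) ** 2          # ||h^f_i - h^f_j||_2^2, all pairs
    w = torch.exp(delta) * mask              # e^{delta} on existing edges only
    deg = mask.sum(dim=1).clamp(min=1)       # |N_i^d| per node
    return ((w * diff).sum(dim=1) / deg).sum()

def total_loss(Xhat_pos, Xhat_neg, Xf, Hf, Xd, logits, y, mu1=1.0, mu2=0.5):
    """Eq. 9: weighted sum of the two decoding losses and Eq. 12."""
    pred = F.cross_entropy(logits, y)        # Eq. 12, supervised term
    return (mu1 * global_loss(Xhat_pos, Xhat_neg, Xf)
            + mu2 * local_loss(Hf, Xd) + pred)
```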

3. Experiment

3.1. Gender Prediction

Dataset.

The data are from the WU-Minn HCP 1200 Subjects Data Release [30]. We include 746 healthy subjects (339 males, 407 females), each with high-quality resting-state fMRI and dMRI data. The functional networks are processed using the CONN toolbox [35], and the structural connectivity is measured using the FSL toolbox [13]. Here, we aim to predict gender from the multimodal brain network topology; previous research has shown a strong relationship between gender and brain connectivity patterns [25].

Experiment Setup.

We select 5 state-of-the-art baseline models for comparison: 3 of them, i.e., tBNE [6], MK-SVM [8], and mCCA + ICA [28], are traditional machine learning algorithms, while the other two, BrainNetCNN [14] and Brain-Cheby [16], are deep models. In addition, 5 variants of DMBN are tested in the experiments as an ablation study. We apply 5-fold cross-validation for all methods. In our model setting, the positive connection encoder has 5 cascaded MGCK layers and the negative connection encoder has 4 MGCK layers; each MGCK has a feature dimension of 128 and 4-head learning. We report statistical results with three evaluation metrics: accuracy, precision, and F1 score. Besides, we use a grid search to decide the hyperparameters μ1 and μ2. Based on empirical knowledge, we set the search range for μ1 to [10, 1, 0.1, 0.01] and for μ2 to [5, 1, 0.5, 0.1]. The best result appears at μ1 = 1 and μ2 = 0.5. Details can be found in Supplementary Fig. 1.
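A sketch of this evaluation protocol, i.e., 5-fold cross-validation nested in a grid search over μ1 and μ2; `train_and_eval` is a hypothetical stand-in for fitting DMBN on one fold and returning its test accuracy.

```python
from itertools import product

import numpy as np
from sklearn.model_selection import StratifiedKFold

def grid_search(networks, labels, train_and_eval):
    """networks: array of shape (n_subjects, ...); labels: (n_subjects,)."""
    best = (None, -np.inf)
    for mu1, mu2 in product([10, 1, 0.1, 0.01], [5, 1, 0.5, 0.1]):
        skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
        accs = [train_and_eval(networks, labels, tr, te, mu1, mu2)
                for tr, te in skf.split(networks, labels)]
        if np.mean(accs) > best[1]:
            best = ((mu1, mu2), np.mean(accs))
    return best  # the reported optimum: mu1 = 1, mu2 = 0.5
```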

Results.

As shown in Tab. 1 (HCP), our model achieves the highest accuracy (ACC = 81.9%) in gender prediction among all methods, significantly outperforming the others with at least 8% and 10% gains in accuracy and F1 score, respectively. Generally, the deep models are superior to the traditional node embedding method (tBNE). We notice that when we remove the cross-modality learning, i.e., the variants denoted w/o Recon, performance drops significantly; although these variants remain comparable to the other baselines, their training is unstable, with high variance. The cross-modality learning makes node-level learning effective and consequently benefits the subsequent graph-level learning. In addition, the 10 most important brain regions for gender prediction are shown in Supplementary Fig. 2. These regions spread across cortical areas, including the frontal and orbital gyri, precentral gyrus, and insular gyrus, as well as subcortical areas such as the basal ganglia. All of these regions play vital roles in regulating cognitive functioning, motor control, and emotion, which, with high probability, contribute to gender differences [23,25].

Table 1: Performance of gender prediction (HCP) and disease classification (PPMI).

| Method            | HCP Acc | HCP Prec | HCP F1 | PPMI Acc | PPMI Prec | PPMI F1 |
| ----------------- | ------- | -------- | ------ | -------- | --------- | ------- |
| tBNE [6]          | 0.543   | 0.497    | 0.503  | 0.580    | 0.597     | 0.530   |
| MK-SVM [8]        | 0.481   | 0.438    | 0.524  | 0.587    | 0.487     | 0.568   |
| mCCA + ICA [28]   | 0.680   | 0.703    | 0.691  | 0.640    | 0.660     | 0.622   |
| Brain-Cheby [16]  | 0.739   | 0.740    | 0.739  | 0.635    | 0.622     | 0.628   |
| BrainNetCNN [14]  | 0.734   | 0.775    | 0.684  | 0.673    | 0.695     | 0.778   |
| w/o Recon+        | 0.738   | 0.692    | 0.767  | 0.688    | 0.727     | 0.786   |
| w/o TAGG&Recon+   | 0.699   | 0.696    | 0.738  | -        | -         | -       |
| w/o AAGG&Recon+   | 0.681   | 0.689    | 0.735  | -        | -         | -       |
| DMBN w/o Global   | 0.784   | 0.798    | 0.799  | -        | -         | -       |
| DMBN w/o Local    | 0.793   | 0.814    | 0.824  | -        | -         | -       |
| DMBN              | 0.819*  | 0.836*   | 0.845* | 0.728*   | 0.859*    | 0.735   |

* indicates statistical significance. + indicates a variant model using a single modality.

Ablation Analysis.

We explore the influence of each component of our model (Tab. 1). We first remove the decoding network, which reduces our model to single-modality learning (w/o Recon). Under this configuration, our model is still comparable to the baselines; however, the decreased performance suggests that cross-modality learning is indispensable for an informative network representation. Based on this setting, we further evaluate the role of each aggregation mechanism in MGCK by removing the threshold aggregation (w/o TAGG&Recon) and the graph attention aggregation (w/o AAGG&Recon), respectively. Both removals cause a significant decrease in performance. In addition to the single-modality analysis, we also validate the importance of the different reconstruction losses in multimodal learning. Removing the local (DMBN w/o Local) or global (DMBN w/o Global) loss results in a roughly 3% drop in prediction accuracy. Meanwhile, the global reconstruction loss receives a larger weight than the local reconstruction loss (μ1 = 1 vs. μ2 = 0.5). Since the global loss considers all edges in the functional network, it carries relatively richer information than the local loss, which focuses on the direct edges in the structural network; nevertheless, the two are complementary.

Cross-Modality Learning.

To validate the efficacy of cross-modality learning, we turn off the prediction task, i.e., keep only the reconstruction losses during training. The results are shown in Fig. 3. We present the predicted functional networks of a randomly selected sample and the group average over the whole testing set. From the sparse structural networks, the corresponding functional connections are correctly predicted, and the major patterns of the local network connections are captured. To further assess accuracy, we conduct a statistical analysis on edges. Both direct and indirect edges in the target functional network are highly correlated with the predicted edges (Spearman correlation, overall $r_S = 0.83$ with $p < 10^{-4}$), where the correlation on direct edges, $r_S = 0.84$, is slightly higher than on indirect edges, $r_S = 0.82$. We also demonstrate the robustness of our model to different sparsity levels of the brain structural networks; results are shown in Supplementary Fig. 4.
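This edge-level analysis can be reproduced with a few lines of SciPy; the function below is a sketch assuming dense predicted and ground-truth matrices, where "direct" edges are those with a structural connection.

```python
import numpy as np
from scipy.stats import spearmanr

def edge_correlations(Xf_pred, Xf_true, Xd):
    """Spearman r_S between predicted and true functional edge weights."""
    iu = np.triu_indices_from(Xf_true, k=1)       # unique undirected edges
    pred, true = Xf_pred[iu], Xf_true[iu]
    direct = Xd[iu] > 0                           # structurally connected pairs
    r_all, p_all = spearmanr(pred, true)          # overall correlation
    r_dir, _ = spearmanr(pred[direct], true[direct])    # direct edges
    r_ind, _ = spearmanr(pred[~direct], true[~direct])  # indirect edges
    return r_all, p_all, r_dir, r_ind
```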

Fig. 3: The cross-modality learning results. The functional network (middle) is predicted from its structural network counterpart (left). We present the group-averaged result and an individual sample. The statistical evaluation (Spearman correlation, $r_S$) of the reconstructed functional networks is shown on the right; the predicted edge weights are significantly correlated with the ground truth, $r_S = 0.83$.

3.2. Disease Classification

In addition to gender prediction in healthy subjects, we test our model on disease classification. In this experiment, we include 323 subjects from the Parkinson's Progression Markers Initiative (PPMI) [18], of whom 224 are patients with Parkinson's disease (PD). We follow the experimental settings of the gender prediction task; μ1 = 0.5 and μ2 = 0.5 are used according to the grid search.

Classification Results.

We compare against the same state-of-the-art baseline methods. The results are shown in Tab. 1 (PPMI). Our model achieves the best prediction performance among all models, improving accuracy by 5% over BrainNetCNN and by 9% over Brain-Cheby and the other baselines. Moreover, adding the cross-modality reconstruction again improves the performance. We locate the 10 key regions associated with PD classification via the saliency map; see Supplementary Fig. 3. Most of the salient regions are located in subcortical structures, such as the bilateral hippocampus and basal ganglia. These structures are conventionally regarded as biomarkers of PD in medical imaging analysis [19,5].

4. Conclusion

We propose a novel multimodal brain network fusion framework based on a deep graph model. The cross-modality network embedding is generated by an encoding-decoding network and is simultaneously supervised by the prediction tasks. Eventually, the learned node features contribute to a brain saliency map for detecting disease-related biomarkers. In the future, we plan to extend our model to other learning tasks, such as brain cortical parcellation and cognitive activity prediction.

Supplementary Material

DeepBrainNetworkSup

Acknowledgments

This work was supported in part by NIH (R21AG065942, RF1AG051710 and R01EB025032). We also gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan Xp GPU used for this research.

References

1. Arslan S, Ktena SI, Glocker B, Rueckert D: Graph saliency maps through spectral convolutional networks: Application to sex classification with brain connectivity. arXiv preprint arXiv:1806.01764 (2018)
2. Avena-Koenigsberger A, Misic B, Sporns O: Communication dynamics in complex brain networks. Nature Reviews Neuroscience (2018)
3. Belkin M, Niyogi P: Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation 15(6), 1373–1396 (2003)
4. Bullmore E, Sporns O: The economy of brain network organization. Nature Reviews Neuroscience 13(5), 336 (2012)
5. Camicioli R, Moore MM, Kinney A, Corbridge E, Glassberg K, Kaye JA: Parkinson's disease is associated with hippocampal atrophy. Movement Disorders 18(7), 784–790 (2003)
6. Cao B, He L, Wei X, Xing M, Yu PS, Klumpp H, Leow AD: t-BNE: Tensor-based brain network embedding. In: Proceedings of the SIAM International Conference on Data Mining (SDM) (2017)
7. Deshpande G, Wang P, Rangaprakash D, Wilamowski B: Fully connected cascade artificial neural network architecture for attention deficit hyperactivity disorder classification from functional magnetic resonance imaging data. IEEE Transactions on Cybernetics (2015)
8. Dyrba M, Grothe M, Kirste T, Teipel SJ: Multimodal analysis of functional and structural disconnection in Alzheimer's disease using multiple kernel SVM. Human Brain Mapping 36(6), 2118–2131 (2015)
9. Hamilton W, Ying Z, Leskovec J: Inductive representation learning on large graphs. In: NIPS (2017)
10. He K, Zhang X, Ren S, Sun J: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016)
11. Honey C, Sporns O, Cammoun L, Gigandet X, Thiran JP, Meuli R, Hagmann P: Predicting human resting-state functional connectivity from structural connectivity. Proceedings of the National Academy of Sciences 106(6), 2035–2040 (2009)
12. Jao T, Schröter M, Chen CL, Cheng YF, Lo CYZ, Chou KH, Patel AX, Lin WC, Lin CP, Bullmore ET: Functional brain network changes associated with clinical and biochemical measures of the severity of hepatic encephalopathy. NeuroImage (2015)
13. Jenkinson M, Beckmann CF, Behrens TE, Woolrich MW, Smith SM: FSL. NeuroImage (2012)
14. Kawahara J, Brown CJ, Miller SP, Booth BG, Chau V, Grunau RE, Zwicker JG, Hamarneh G: BrainNetCNN: convolutional neural networks for brain networks; towards predicting neurodevelopment. NeuroImage (2017)
15. Kipf TN, Welling M: Semi-supervised classification with graph convolutional networks. In: International Conference on Learning Representations (ICLR) (2017)
16. Ktena SI, Parisot S, Ferrante E, Rajchl M, Lee M, Glocker B, Rueckert D: Metric learning with spectral graph convolutions on brain connectivity networks. NeuroImage (2018)
17. Li G, Muller M, Thabet A, Ghanem B: DeepGCNs: Can GCNs go as deep as CNNs? In: Proceedings of the IEEE International Conference on Computer Vision, pp. 9267–9276 (2019)
18. Marek K, Jennings D, Lasch S, Siderowf A, Tanner C, Simuni T, Coffey C, Kieburtz K, Flagg E, Chowdhury S, et al.: The Parkinson Progression Marker Initiative (PPMI). Progress in Neurobiology 95(4), 629–635 (2011)
19. Obeso JA, Rodriguez-Oroz MC, Rodriguez M, Lanciego JL, Artieda J, Gonzalo N, Olanow CW: Pathophysiology of the basal ganglia in Parkinson's disease. Trends in Neurosciences 23, S8–S19 (2000)
20. Osmanlioğlu Y, Tunç B, Parker D, Elliott MA, Baum GL, Ciric R, Satterthwaite TD, Gur RE, Gur RC, Verma R: System-level matching of structural and functional connectomes in the human brain. NeuroImage 199, 93–104 (2019)
21. Plis SM, Amin MF, Chekroud A, Hjelm D, Damaraju E, Lee HJ, Bustillo JR, Cho K, Pearlson GD, Calhoun VD: Reading the (functional) writing on the (structural) wall: Multimodal fusion of brain structure and function via a deep neural network based translation approach reveals novel impairments in schizophrenia. NeuroImage (2018)
22. Repovs G, Csernansky JG, Barch DM: Brain network connectivity in individuals with schizophrenia and their siblings. Biological Psychiatry (2011)
23. Rijpkema M, Everaerd D, van der Pol C, Franke B, Tendolkar I, Fernández G: Normal sexual dimorphism in the human basal ganglia. Human Brain Mapping (2012)
24. Ronneberger O, Fischer P, Brox T: U-Net: Convolutional networks for biomedical image segmentation. In: MICCAI. Springer (2015)
25. Ruigrok AN, Salimi-Khorshidi G, Lai MC, Baron-Cohen S, Lombardo MV, Tait RJ, Suckling J: A meta-analysis of sex differences in human brain structure. Neuroscience & Biobehavioral Reviews (2014)
26. Schwarz AJ, McGonigle J: Negative edges and soft thresholding in complex network analysis of resting state functional connectivity data. NeuroImage 55(3), 1132–1146 (2011)
27. Stam C, Van Straaten E, Van Dellen E, Tewarie P, Gong G, Hillebrand A, Meier J, Van Mieghem P: The relation between structural and functional connectivity patterns in complex brain networks. International Journal of Psychophysiology (2016)
28. Sui J, Pearlson G, Caprihan A, Adali T, Kiehl KA, Liu J, Yamamoto J, Calhoun VD: Discriminating schizophrenia and bipolar disorder by fusing fMRI and DTI in a multimodal CCA + joint ICA model. NeuroImage 57(3), 839–855 (2011)
29. Suk HI, Shen D: Deep learning-based feature representation for AD/MCI classification. In: MICCAI. Springer (2013)
30. Van Essen DC, Smith SM, Barch DM, Behrens TE, Yacoub E, Ugurbil K: The WU-Minn Human Connectome Project: an overview. NeuroImage 80, 62–79 (2013)
31. Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez AN, Kaiser Ł, Polosukhin I: Attention is all you need. In: Advances in Neural Information Processing Systems, pp. 5998–6008 (2017)
32. Velickovic P, Cucurull G, Casanova A, Romero A, Lio P, Bengio Y: Graph attention networks. arXiv preprint arXiv:1710.10903 (2017)
33. Wang D, Cui P, Zhu W: Structural deep network embedding. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1225–1234. ACM (2016)
34. Wang S, He L, Cao B, Lu CT, Yu PS, Ragin AB: Structural deep brain network mining. In: ACM SIGKDD. ACM (2017)
35. Whitfield-Gabrieli S, Nieto-Castanon A: Conn: a functional connectivity toolbox for correlated and anticorrelated brain networks. Brain Connectivity 2(3), 125–141 (2012)
36. Zhang W, Shu K, Wang S, Liu H, Wang Y: Multimodal fusion of brain networks with longitudinal couplings. In: MICCAI. Springer (2018)
