Published in final edited form as: Mach Learn Clin Neuroimaging (2024). 2024 Dec 6;15266:24–34. doi: 10.1007/978-3-031-78761-4_3

Brain-Cognition Fingerprinting via Graph-GCCA with Contrastive Learning

Yixin Wang 1, Wei Peng 2, Yu Zhang 3, Ehsan Adeli 2, Qingyu Zhao 4, Kilian M Pohl 2

Abstract

Many longitudinal neuroimaging studies aim to improve the understanding of brain aging and diseases by studying the dynamic interactions between brain function and cognition. Doing so requires accurate encoding of their multidimensional relationship while accounting for individual variability over time. For this purpose, we propose an unsupervised learning model, called Contrastive Learning-based Graph Generalized Canonical Correlation Analysis (CoGraCa), that encodes their relationship via Graph Attention Networks and generalized Canonical Correlation Analysis. To create brain-cognition fingerprints reflecting the unique neural and cognitive phenotype of each person, the model also relies on individualized and multimodal contrastive learning. We apply CoGraCa to a longitudinal dataset of healthy individuals consisting of resting-state functional MRI and cognitive measures acquired at multiple visits for each participant. The generated fingerprints effectively capture significant individual differences and outperform current single-modal and CCA-based multimodal models in identifying sex and age. More importantly, our encoding provides interpretable interactions between the two modalities.

1. Introduction

Longitudinal neuroimaging studies often repeatedly acquire functional MRI and neurocognitive performance measures of study participants to explore the connection between brain function and cognition and their development over time [11,13,16]. These investigations, however, are often hindered by the lack of computational tools linking such multi-modal, repeated measures. Despite the advances of machine learning in neuroimaging studies, existing models [6,32,33] are often designed to predict univariate outcome variables, which cannot characterize the shared and dissociated neural bases underlying multiple cognitive domains (e.g., working memory, motor functions). Moreover, cross-sectional methods often fail to disentangle the consistent brain functional connectivity of a single subject across multiple visits from the variations that exist between individuals [3,5].

One potential solution to relating multi-dimensional functional and cognitive measures is Canonical Correlation Analysis (CCA) [26], which has been successfully applied in a number of cross-sectional studies to identify reproducible brain-cognition mappings. Traditional CCA methods [26] primarily capture linear associations between modalities, which are not suitable for modeling the complex spatial characteristics inherent in brain connectivity. Graph Neural Networks (GNNs) [10,14,20,21], which have been highly successful in inferring neural activity patterns from functional MRI, have been coupled with the CCA framework to relate two augmented views derived from fMRI signals and thereby mitigate spurious factors [21]. Thus, GNN-based CCA might provide a strong foundation for learning brain-cognition mappings, but it remains unclear how such mappings preserve inter-subject variability and intra-subject consistency.

In this work, we propose Contrastive Learning-based Graph Generalized Canonical Correlation Analysis (CoGraCa), which encodes the correlation between brain functional connectivity and cognitive measurements at the individual level while characterizing brain functional differences, creating personalized brain-cognition fingerprints that reflect the unique neural and cognitive landscapes of each person. We utilize a Graph Attention Network (GAT) to encode brain functional connectivity derived from resting-state functional MRI (rs-fMRI). The GAT is coupled with a generalized CCA (GCCA) [2], which jointly encodes brain function and cognitive scores so that the resulting brain functional networks are aligned with the cognitive data. To explicitly account for inter-subject variability and intra-subject consistency, we further design two contrastive learning strategies: i) an individualized contrastive learning approach that regulates the graph embeddings both within and between subjects in the latent space, ensuring that the unique connectivity patterns of each subject are preserved while differentiating between subjects; and ii) a longitudinal multimodal contrastive learning approach that encourages the cross-modal alignment of brain connectivity and cognitive measures across different visits within each subject, maintaining the dynamic evolution of the individualized brain-cognition correlation.

Our proposed CoGraCa is cross-validated on a dataset comprising 57 participants, totaling 93 visits containing both fMRI and cognitive measures. The generated “brain-cognition” fingerprints demonstrate significant individual differentiation. Validated through downstream sex and age classification tasks using these fingerprints, CoGraCa achieves higher accuracy scores than other state-of-the-art single-modal and CCA-based multimodal methods, underscoring its effectiveness in integrating brain connectivity with cognitive data for precise individual characterization. Importantly, CoGraCa enables interpretable correlations between modalities, identifying sex- and age-related functional connectivity and cognitive measures that align with established neuroscience research.

2. Method

Let $S$ be the number of subjects and $N$ the number of visits across all subjects, with $N_s$ denoting the number of visits of subject $s$. Each visit $i$ contains a set of cognitive measures $\mathcal{C}_i$ and a connectivity matrix encoding the Pearson correlation between the fMRI signals of brain Regions of Interest (ROIs). The connectivity matrix is represented as a graph $\mathcal{G}_i = (A_{\mathcal{G}_i}, D_{\mathcal{G}_i})$ consisting of $V$ nodes (the ROIs). $A_{\mathcal{G}_i} \in \mathbb{R}^{V \times V}$ is the adjacency matrix consisting only of positive correlations (anticorrelations are set to 0) [15,30]. $D_{\mathcal{G}_i} \in \mathbb{R}^{V \times D}$ is the attribute matrix, represented by each ROI’s “connection profile” of length $D$, as defined in [4]. Based on the set of pairs $\{G, C\} = \{(\mathcal{G}_1, \mathcal{C}_1), (\mathcal{G}_2, \mathcal{C}_2), \ldots, (\mathcal{G}_N, \mathcal{C}_N)\}$, the goal of our approach is to learn a brain-cognition representation (or “brain-cognition fingerprints”) $R = \{R_1, R_2, \ldots, R_N\}$. We determine the optimal $R$ by regularizing Graph GCCA (Fig. 1A) with individualized and multimodal contrastive learning (Fig. 1B). We now describe these components in further detail.
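To make the graph construction concrete, the following is a minimal sketch (not the released code) of how a Pearson connectivity matrix could be converted into the graph input described above, here using PyTorch Geometric; the function name and thresholding details are illustrative assumptions.

```python
import numpy as np
import torch
from torch_geometric.data import Data

def connectivity_to_graph(corr: np.ndarray) -> Data:
    """Turn a V x V Pearson correlation matrix into a graph.

    Keeps only positive correlations as edges (anticorrelations set to 0)
    and uses each ROI's row of positive correlations -- its "connection
    profile" -- as the node attribute vector.
    """
    adj = np.clip(corr, a_min=0.0, a_max=None)        # A: positive correlations only
    np.fill_diagonal(adj, 0.0)                        # drop self-loops
    node_attr = torch.tensor(adj, dtype=torch.float)  # D: connection profile per ROI

    src, dst = np.nonzero(adj)                        # edge list of the thresholded graph
    edge_index = torch.tensor(np.stack([src, dst]), dtype=torch.long)
    edge_weight = torch.tensor(adj[src, dst], dtype=torch.float)
    return Data(x=node_attr, edge_index=edge_index, edge_attr=edge_weight)
```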

Fig. 1. Overview of our model. (A) The brain functional connectivity of each subject is encoded into a graph embedding using graph attention networks (GAT). The learned embedding (graph variables) and the cognitive measures (cognitive variables) are mapped to a shared “brain-cognition” space via generalized canonical correlation analysis. The GAT weights are updated by maximizing the correlation between the two modalities. (B) Individualized contrastive learning differentiates brain connectivity between subjects, and multimodal contrastive learning aligns brain connectivity and cognitive measures across multiple visits within each subject, capturing intra-subject and cross-modal dynamics.

Graph Generalized Canonical Correlation Analysis.

For each input connectivity graph $\mathcal{G}_i$, we adopt Graph Attention (GAT) layers [28] to encode it into a node embedding $h_{\mathcal{G}_i} \in \mathbb{R}^{V \times r}$, where the embedding $h_p \in \mathbb{R}^{1 \times r}$ of node $p$ is updated by aggregating the features of its 1-hop neighborhood $\mathcal{N}_p$ through a self-attention mechanism. Specifically, the attention between $p$ and its neighboring node $q$ at layer $k$ is calculated as
$$a_{pq}^{k}(h_p^k, h_q^k) = \mathrm{Softmax}\big(\sigma\big(m^{T} [\, W h_p^k \,\|\, W h_q^k \,]\big)\big),$$
where $\|$ denotes concatenation, $\sigma$ is the ReLU activation, $m$ is a trainable single-layer feed-forward neural network, and $W$ is a trainable weight matrix. The node representation at layer $k+1$ is then obtained by
$$h_p^{k+1} = \sigma\Big(\sum_{q \in \mathcal{N}_p} a_{pq}^{k}\, W h_q^k\Big).$$
The node embeddings $\{h_p\}_{p \in V}$ of each $\mathcal{G}_i$ from the last layer are passed through a global mean pooling operation to obtain the graph variables $h_{\mathcal{G}_i}$. We aim to learn this encoding function $f_{graph}(G)$ by maximizing its correlation with the cognitive measures. Specifically, the representations $\{h_{\mathcal{G}_i}\}_{\mathcal{G}_i \in G}$ learned from the complex brain connectivity, along with the cognitive measures, are treated as two sets of canonical variables that are projected into a shared “brain-cognition” space to obtain $R$ through GCCA [2,8]. Note that the cognitive measures are not passed through any encoder so that they directly guide the fMRI encoding and integration.
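A minimal PyTorch Geometric sketch of such a graph encoder is given below, assuming two GAT layers with 32 hidden units and an embedding dimension of 16 as in our implementation; the class name and exact layer configuration are illustrative, not the released implementation.

```python
import torch
import torch.nn as nn
from torch_geometric.nn import GATConv, global_mean_pool

class GraphEncoder(nn.Module):
    """Two GAT layers followed by global mean pooling (a sketch of f_graph)."""

    def __init__(self, in_dim: int, hidden_dim: int = 32, embed_dim: int = 16):
        super().__init__()
        self.gat1 = GATConv(in_dim, hidden_dim)   # attention over 1-hop neighbors
        self.gat2 = GATConv(hidden_dim, embed_dim)
        self.act = nn.ReLU()

    def forward(self, x, edge_index, batch):
        h = self.act(self.gat1(x, edge_index))    # node embeddings, layer 1
        h = self.act(self.gat2(h, edge_index))    # node embeddings, layer 2
        return global_mean_pool(h, batch)         # one graph variable per graph
```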

The unsupervised CCA optimization maximizes the sum of correlations between $R$ and each modality, defined by the loss function
$$\mathcal{L}_{corr} = \big\| R - U_{brain}^{T} f_{graph}(G) \big\|_F^2 + \big\| R - U_{cog}^{T} C \big\|_F^2, \quad \text{s.t. } R R^{T} = I,$$
where $U_{brain}$ and $U_{cog}$ are the linear transformation matrices (canonical loadings) of the two sets of variables from brain connectivity and cognitive measures, and $R$ is the shared representation in the “brain-cognition” space, which is obtained by solving an eigenvalue problem. Following [2], for the set of brain graphs $G$, we define the covariance matrix $Cov = f_{graph}(G) f_{graph}(G)^{T}$ and obtain the positive semi-definite matrix $P_{brain} = f_{graph}(G)^{T} Cov^{-1} f_{graph}(G)$. We obtain $P_{cog}$ from $C$ analogously and sum them as $M = P_{brain} + P_{cog}$. The top eigenvectors of $M$ then form $R$, which maximally and linearly correlates the non-linear transformations of brain functional connectivity with the cognitive measures. This optimization problem is solved by estimating the gradient of the objective on samples mapped through $f_{graph}$ and using back-propagation to update the weights within $f_{graph}$.
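The sketch below illustrates this GCCA step under the stated formulation: each view is projected, the projection matrices are summed, and the top eigenvectors of the sum form $R$. The feature-by-sample layout and the small regularizer are assumptions for numerical stability, not details from the paper.

```python
import torch

def gcca_shared_representation(h_brain: torch.Tensor,
                               h_cog: torch.Tensor,
                               k: int = 16,
                               eps: float = 1e-4) -> torch.Tensor:
    """Sketch of the GCCA step following Benton et al. [2].

    h_brain, h_cog: (feature x sample) views, i.e. f_graph(G) and C.
    Returns R (k x N): the top-k eigenvectors of M = P_brain + P_cog.
    """
    def projection(Y: torch.Tensor) -> torch.Tensor:
        d = Y.shape[0]
        cov = Y @ Y.T + eps * torch.eye(d, dtype=Y.dtype)  # regularized Cov
        return Y.T @ torch.linalg.inv(cov) @ Y             # P = Y^T Cov^{-1} Y

    M = projection(h_brain) + projection(h_cog)
    _, eigvecs = torch.linalg.eigh(M)       # eigenvalues in ascending order
    return eigvecs[:, -k:].T                # top-k eigenvectors as rows of R
```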

Individualized Contrastive Learning.

To capture the inherent individual variability in brain functional connectivity, we design an individualized contrastive learning strategy in which pairs of brain connectivity graphs are constructed from all $N$ visits across the $S$ subjects. Pairs from the same subject $s$ are considered similar and are labeled as positive pairs (see Fig. 1B), whereas pairs from different individuals are likely to be dissimilar and are labeled as negative. Specifically, given the set of graph embeddings $\{h_{\mathcal{G}_i}\}_{\mathcal{G}_i \in G}$, we define a subject indicator $y_{ij}$, where $y_{ij} = 1$ denotes that the pair of graph embeddings $h_{\mathcal{G}_i}$ and $h_{\mathcal{G}_j}$ stems from the same subject and $y_{ij} = 0$ otherwise. The individualized contrastive loss is then
$$\mathcal{L}_{ind} = -\frac{1}{N} \sum_{i}^{N} \sum_{j \le N,\, y_{ij}=1} \log \frac{\exp\big(\mathrm{sim}(h_{\mathcal{G}_i}, h_{\mathcal{G}_j})/\tau\big)}{\sum_{k \le N,\, k \ne i} \exp\big(\mathrm{sim}(h_{\mathcal{G}_i}, h_{\mathcal{G}_k})/\tau\big)},$$
where $\mathrm{sim}(\cdot)$ denotes cosine similarity and $\tau > 0$ is a temperature parameter that controls the separation of subjects. By doing so, graph embeddings of the same subject $s$ are pulled closer together than embeddings from different subjects.
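A possible PyTorch implementation of this loss, treating all visits in a batch as candidates and using subject labels to define positives, is sketched below; the function signature is hypothetical.

```python
import torch
import torch.nn.functional as F

def individualized_contrastive_loss(embeddings: torch.Tensor,
                                    subject_ids: torch.Tensor,
                                    tau: float = 0.9) -> torch.Tensor:
    """Sketch of L_ind: pull together visits of the same subject.

    embeddings: (N, r) graph embeddings of all visits in the batch.
    subject_ids: (N,) integer subject label per visit.
    """
    n = embeddings.shape[0]
    sim = F.cosine_similarity(embeddings.unsqueeze(1),
                              embeddings.unsqueeze(0), dim=-1) / tau  # (N, N)
    logits_mask = ~torch.eye(n, dtype=torch.bool)       # exclude self-comparisons
    exp_sim = torch.exp(sim) * logits_mask
    log_prob = sim - torch.log(exp_sim.sum(dim=1, keepdim=True))

    # positives: other visits from the same subject
    pos_mask = (subject_ids.unsqueeze(0) == subject_ids.unsqueeze(1)) & logits_mask
    return -(log_prob * pos_mask).sum() / n
```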

Multimodal Contrastive Learning.

Meanwhile, we have to ensure that the above individual-level separation preserves the longitudinal differences within each subject. To this end, we apply a multimodal contrastive learning inspired by CLIP [22], but tailored to within-subject pairs of our multimodal features, i.e., brain connectivity and cognitive measures, to capture the intra-subject, cross-modal dynamics. For each subject $s \in S$ with $N_s > 1$ visits, given the paired brain graphs $\mathcal{G}_i^s$ and cognitive measures $\mathcal{C}_i^s$ across the $N_s$ visits, we maximize their similarity when they stem from the same visit and minimize it across different visits within the subject. This multimodal contrastive loss is
$$\mathcal{L}_{mul} = -\frac{1}{S} \sum_{s}^{S} \sum_{i}^{N_s} \log \frac{\exp\big(\mathrm{sim}(h_{\mathcal{G}_i^s}, \mathcal{C}_i^s)/\tau\big)}{\sum_{k \le N_s,\, k \ne i} \exp\big(\mathrm{sim}(h_{\mathcal{G}_i^s}, \mathcal{C}_k^s)/\tau\big)}.$$
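The per-subject term could be implemented along the following lines, assuming the graph embeddings and cognitive vectors of one subject share the same dimension and visit order; the caller would sum the returned values over subjects and apply the $1/S$ factor.

```python
import torch
import torch.nn.functional as F

def multimodal_contrastive_loss(graph_embs: torch.Tensor,
                                cog_feats: torch.Tensor,
                                tau: float = 0.9) -> torch.Tensor:
    """Sketch of the per-subject term of L_mul (assumes N_s > 1).

    graph_embs: (N_s, d) graph embeddings of one subject's visits.
    cog_feats:  (N_s, d) cognitive measures of the same visits, same order.
    Same-visit pairs are positives; cross-visit pairs act as negatives.
    """
    sim = F.cosine_similarity(graph_embs.unsqueeze(1),
                              cog_feats.unsqueeze(0), dim=-1) / tau  # (N_s, N_s)
    exp_sim = torch.exp(sim)
    pos = exp_sim.diagonal()            # sim(h_i, C_i), same visit
    neg = exp_sim.sum(dim=1) - pos      # sum over k != i
    return -torch.log(pos / neg).sum()
```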

By accounting for both the between-subject variability in brain connectivity and the within-subject variability of its correlation with cognitive measures, the model derives a more individualized representation that integrates the longitudinal variations in brain connectivity specific to cognitive measures. The final objective function is defined as
$$\mathcal{L}_{total} = \mathcal{L}_{corr} + \lambda_1 \mathcal{L}_{ind} + \lambda_2 \mathcal{L}_{mul},$$
where $\lambda_1$ and $\lambda_2$ are trade-off parameters balancing the two contrastive learning terms.

3. Experimental Results

Dataset.

Our study utilizes the SRI dataset (PIs: Pfefferbaum and Sullivan) consisting of rs-fMRI (3T GE Discovery MR750 scanner, 8-channel head coil, echo time=30ms, dwell-time=0.388ms, TR=2200ms, 2.5mm isotropic after upsampling) of 417 subjects (822 visits). Of them, 195 participants (275 visits) completed cognitive testing at the same visit at which the rs-fMRI was acquired. The cognitive measurements are summarized in 16 domain-specific scores. Of the 195 subjects, 57 subjects (89 visits, age: 58.53±10.57 years) are normal controls, consisting of 21 females (33 visits) and 36 males (56 visits). For each of the 89 rs-fMRI scans, the pipeline of [7] extracts a connectivity matrix across 111 ROIs, where each entry is the Pearson correlation between the rs-fMRI signals of two ROIs.

Implementation Details.

We implement the proposed model CoGraCa in PyTorch with the Adam optimizer and a learning rate of 0.001. Our graph encoder is composed of two GAT layers with 32 hidden units. The dimension of the node embedding $r$ is set to 16, and the number of canonical variates (i.e., the dimension of $R$) is set to 16. $\tau$ is set to 0.9, and $\lambda_1$ and $\lambda_2$ are set to 1.5 and 0.5, respectively. The model is trained for 1000 epochs using five-fold cross-validation with folds defined by subject so that all visits of a single subject are assigned to the same fold. For each test fold, the model is trained on the remaining data to optimize the model’s parameters and obtain the canonical loadings $U_{brain}$ and $U_{cog}$. After training is completed, the “brain-cognition” representation $R$ is generated for each sample in the test fold. Code will be made available at https://github.com/Wangyixinxin/BrainCog.
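The subject-wise fold assignment can be reproduced with scikit-learn's GroupKFold, as in the brief sketch below; the subject_ids array is placeholder data, not the study labels.

```python
import numpy as np
from sklearn.model_selection import GroupKFold

# Subject-wise five-fold splitting: all visits of a subject fall into the
# same fold. One entry of subject_ids per visit (hypothetical data).
subject_ids = np.array([0, 0, 1, 2, 2, 3, 4])   # visit -> subject
visit_indices = np.arange(len(subject_ids))

for train_idx, test_idx in GroupKFold(n_splits=5).split(visit_indices,
                                                        groups=subject_ids):
    # train CoGraCa on train_idx, then generate fingerprints R for test_idx
    assert set(subject_ids[train_idx]).isdisjoint(subject_ids[test_idx])
```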

3.1. Individual Variability of “Brain-Cognition” Fingerprints

We assess whether our derived representations could capture individual-specific features more effectively than other CCA-based methods.

Experimental Setup.

For comparison, we repeat the five-fold cross-validation by applying Principal Component Analysis (PCA) and Independent Component Analysis (ICA) to the connectivity matrices before performing the CCA analysis. PCA reduces the connectivity matrices to 544 components (accounting for 95% of the data variance) and ICA to 20 independent components (chosen for yielding the best performance among 10, 15, 20, and 25 components). Separately for PCA and ICA, the components are fed into CCA in conjunction with the cognitive measures to obtain a representation, labelled as PCA-CCA and ICA-CCA in our comparison. Finally, we run CoGraCa without contrastive learning (referred to as GraCa). For each model, we compute the similarity between each pair of representations, both within subjects across different visits and between subjects across their visits, using Pearson correlation.
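A rough scikit-learn sketch of how such PCA-CCA and ICA-CCA baselines can be assembled is given below; the random arrays stand in for the actual connectivity and cognitive data, and the exact preprocessing of our experiments may differ.

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA
from sklearn.cross_decomposition import CCA

# Placeholder data: flattened connectivity (visits x edges) and 16 cognitive scores.
rng = np.random.default_rng(0)
conn = rng.standard_normal((89, 111 * 110 // 2))
cog = rng.standard_normal((89, 16))

pca_feats = PCA(n_components=0.95).fit_transform(conn)      # keep 95% of variance
ica_feats = FastICA(n_components=20, max_iter=1000).fit_transform(conn)

# CCA couples each reduced representation with the cognitive measures.
pca_cca = CCA(n_components=16).fit(pca_feats, cog)
R_pca, _ = pca_cca.transform(pca_feats, cog)                # PCA-CCA representation
```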

Results.

The histograms in Fig. 2 for ICA-CCA and PCA-CCA show that the distributions of within-subject similarity (red) and between-subject similarity (blue) largely overlap, indicating a lack of individual distinctiveness in the representations generated by these models. The two distributions are much better separated by the representations generated by GraCa and CoGraCa, with individuals being significantly distinct from each other (Mann-Whitney U test; CoGraCa: p-value < 0.0001, GraCa: p-value < 0.0001). CoGraCa is also associated with a larger Wasserstein distance (0.45), i.e., a larger distribution separation, than GraCa (0.39). We also measure the similarity between visits from the same and from different subjects for the 18 subjects with two visits (see GraCa and CoGraCa in Fig. 2). The correlation matrices reveal that CoGraCa produces highly differentiable individualized representations, where each participant exhibits a high correlation with their own visits (as reflected in the diagonal of the correlation matrix) and low correlations with visits from other participants, leading to individualized “brain-cognition” fingerprints. GraCa yields highly similar representations within individual participants but also displays a higher degree of similarity to other subjects than CoGraCa.
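For reference, the intra- vs. inter-subject similarity analysis and the reported statistics (Mann-Whitney U test, Wasserstein distance) can be computed as in the following sketch; variable names are illustrative.

```python
import numpy as np
from scipy.stats import mannwhitneyu, wasserstein_distance

def fingerprint_separation(reps: np.ndarray, subject_ids: np.ndarray):
    """Sketch of the intra- vs inter-subject similarity analysis.

    reps: (N, d) fingerprints of all visits; subject_ids: (N,) labels.
    Returns the two similarity samples plus the statistics reported in Fig. 2.
    """
    corr = np.corrcoef(reps)                    # Pearson similarity, (N, N)
    same = subject_ids[:, None] == subject_ids[None, :]
    iu = np.triu_indices_from(corr, k=1)        # unique visit pairs
    intra = corr[iu][same[iu]]                  # same-subject pairs
    inter = corr[iu][~same[iu]]                 # different-subject pairs
    _, p = mannwhitneyu(intra, inter, alternative="greater")
    return intra, inter, p, wasserstein_distance(intra, inter)
```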

Fig. 2. Histograms show the similarity of the representations between visits within a subject (intra-subject, red) vs. across subjects (inter-subject, blue). Of all methods, the CoGraCa model produces the most individualized representations, i.e., the intra-subject similarity is high relative to the inter-subject similarity.

3.2. Validating Fingerprints with Downstream Tasks

We conduct a quantitative analysis to determine if the integrated “brain-cognition” representations from CoGraCa enhance the accuracy of identifying specific individual characteristics compared to GraCa and other CCA-based multimodal methods that aim to correlate brain function and cognition. Based on the trained model for each of the 5 test folds, the representations for both the training and testing sets are generated for downstream tasks without any additional fine-tuning of the model.

The downstream tasks focus on predicting age (older vs. younger) and sex, as brain function and cognition often differ between these cohorts [1,27]. Given the relatively small sample size of our dataset, the task of age prediction is confined to younger (≤60 years, 47 visits, 38.3% female) versus older (>60 years, 42 visits, 35.7% female) as in [24]. The sex ratio is similar between the two cohorts according to a Chi-square test (p=0.97). For identifying sex, males (age: 59.21±10.01 years) and females (age: 57.38±11.68 years) have similar ages (t-test, p=0.45). The classification model is a multi-layer perceptron (MLP) containing two fully-connected layers of dimension 64 and 32 with ELU activations and a dropout rate of 0.5. Given the small data size, we repeat the cross-validation of the MLP 10 times using different random seeds for initialization. For each cross-validation, we record the balanced accuracy (BACC).
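A sketch of the downstream classifier and evaluation metric is shown below; the class name is hypothetical and the training loop is omitted.

```python
import torch
import torch.nn as nn
from sklearn.metrics import balanced_accuracy_score

class FingerprintMLP(nn.Module):
    """Downstream classifier sketch: two hidden layers (64, 32), ELU, dropout 0.5."""

    def __init__(self, in_dim: int = 16, n_classes: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ELU(), nn.Dropout(0.5),
            nn.Linear(64, 32), nn.ELU(), nn.Dropout(0.5),
            nn.Linear(32, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# Evaluation uses balanced accuracy (BACC) to account for class imbalance, e.g.:
# bacc = balanced_accuracy_score(y_true, logits.argmax(dim=1).numpy())
```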

Baseline.

We compare our method against single-modality (fMRI-only and cognition-only) and multimodal CCA-based approaches. For fMRI-only, we derive representations via ICA (20 components) and perform classification with an MLP; supervised fMRI-only methods include vanilla GCN [12], BrainGNN [14], and GAT-LI [10]. Multimodal methods include PCA-CCA and ICA-CCA, as well as the state-of-the-art supervised SDGCCA approach [18]. Each method is trained and tested using the same experimental setup as GraCa and CoGraCa.

Results.

Table 1 lists the average and standard deviation of the accuracy scores across the 10 runs. Relying solely on correlation matrices (i.e., fMRI) results in relatively low scores regardless of the representation, due to the low signal-to-noise ratio of that modality. The accuracy scores of the multimodal baseline methods (PCA-CCA, ICA-CCA, and SDGCCA) are higher than the fMRI-only ones but still lower than relying on the cognitive measures alone, which suggests that they do not properly integrate the multimodal data. Our method achieves that goal and has the highest BACC for both sex and age. All its accuracy scores are higher than those of GraCa, which aligns with our expectation that multimodal contrastive learning effectively maintains the longitudinal intra-individual distinctions.

Table 1.

Balanced accuracies on the sex and age prediction tasks. Results are averaged across the 5 folds over 10 runs with different random seeds. The best results are shown in bold.

Method                  Sex Classification    Age Classification
Functional MRI-only
  ICA                   56.05±2.3             69.38±2.24
  Vanilla GCN [12]      54.73±2.43            58.75±2.57
  BrainGNN [14]         61.25±2.12            63.57±1.09
  GAT-LI [10]           58.22±1.59            62.73±1.99
Cognition-only          73.45±1.24            71.81±2.01
Multimodal
  PCA-CCA               72.93±1.56            71.31±2.01
  ICA-CCA               65.42±2.27            63.31±2.14
  SDGCCA [18]           69.11±1.94            65.7±2.57
  GraCa                 74.14±1.46            70.30±1.52
  CoGraCa               74.90±1.78            73.26±1.21

3.3. Functional Connectivity and Cognition Interpretation

A significant achievement of our approach is the extraction of cognition-related functional connectivities and the identification of the most relevant features from both modalities linked to sex or age distinctions. Using SHapley Additive exPlanations (SHAP) [25], we identify the key features in the “brain-cognition” representations that drive the predictions. These features correspond to specific CCA components, leading us to extract the canonical loadings $U_{brain}$ and $U_{cog}$ that highlight the significance of the graph and cognitive variables.
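The sketch below outlines how SHAP values could be obtained for the fingerprint components with a model-agnostic explainer; the scikit-learn classifier and random stand-in data are assumptions used only for illustration, not our trained MLP.

```python
import numpy as np
import shap
from sklearn.neural_network import MLPClassifier

# Placeholder fingerprints and labels standing in for the real data.
rng = np.random.default_rng(0)
R_train, y_train = rng.standard_normal((70, 16)), rng.integers(0, 2, 70)
R_test = rng.standard_normal((19, 16))

clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500).fit(R_train, y_train)

# Model-agnostic SHAP: attribute the prediction to each CCA component.
explainer = shap.KernelExplainer(clf.predict_proba, R_train[:20])
shap_values = explainer.shap_values(R_test)    # per-component attributions
# Components with the largest mean |SHAP| point to the canonical loadings
# (U_brain, U_cog) that are inspected in Fig. 3.
```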

By combining the graph variables with the attention matrices obtained from the GAT, which indicate the connectivity learned by CoGraCa, we derive the functional connectivity patterns identifying sex and age (Fig. 3). The color-coding of brain functional regions is defined according to [7] and their contribution is encoded by node size. With respect to sex, our method identifies dense connectivity within the Orbito-frontal Cortex (OFC), Frontotemporal (FT), Posterior Cingulum (PCC), and Hippocampal (HPC) regions, which is in line with the literature [29,31]. For age distinctions, significant connections involve Temporo-occipital (TO), HPC, OFC, and Superior Frontal (SF) regions, resonating with neuroscience findings on age-related neural alterations in parahippocampal, occipital, and prefrontal areas [9,23]. See Supplemental Fig. S1 and Fig. S2 for additional insights on the robustness of the connectivities across folds and Fig. S3 for connectivity patterns of other CCA components. The brain functional patterns interacting with cognitive measures are also in line with the literature: the Alternate Finger Tapping Test (AFT) is important for identifying sex [19] and, for age, the Wechsler Memory Scale-Revised Test (WMSR) [17], which assesses visual/logical memory. Interestingly, WMSR correctly correlates with functional regions related to memory (e.g., TO and HPC), revealing that CoGraCa provides a meaningful integration of brain function and cognition.

Fig. 3. Identified functional connectivity and rank-ordered positive loadings of the cognitive variables of the task-related CCA component, in line with [9,17,19,23,29,31]. Functional regions and cognitive variables are detailed in Supplemental Fig. S4.

4. Conclusion

In this work, we introduced a novel unsupervised approach, CoGraCa, to accurately encode brain function coupled with cognition as captured by longitudinal rs-fMRI and cognitive testing. CoGraCa generates “brain-cognition” fingerprints capturing the unique neural and cognitive landscapes of individuals across time by coupling Graph GCCA with individualized and multimodal contrastive learning. We measure the accuracy of CoGraCa by using its encodings to identify the sex and age of individuals. Our multimodal encoding has a higher balanced accuracy than several state-of-the-art representations. More importantly, CoGraCa allows us to identify the brain-cognition relationships important for these tasks.

Supplementary Material


Acknowledgments.

The work was partly funded by the National Institutes of Health (DA057567, AA05965, AA017347, AA010723, MH129694, MH130956, AG080425, AA028840), the DGIST Joint Research Project, the 2024 Stanford HAI Hoffman-Yee Grant, the Stanford HAI-Google Cloud Credits Award, a BBRF Young Investigator Grant, and the Lehigh University FIG (FIGAWD35) and CORE (001250) grants.

Footnotes

Supplementary Information The online version contains supplementary material available at https://doi.org/10.1007/978-3-031-78761-4_3.

References

1. Bachmann D., et al.: Age-, sex-, and pathology-related variability in brain structure and cognition. Transl. Psychiatry 13(1) (2023)
2. Benton A, Khayrallah H, Gujral B, Reisinger DA, Zhang S, Arora R: Deep generalized canonical correlation analysis. In: Proceedings of the 4th Workshop on Representation Learning for NLP, pp. 1–6. Association for Computational Linguistics, Florence, Italy (2019)
3. Bijsterbosch J, Harrison S, Duff E, Alfaro-Almagro F, Woolrich M, Smith S: Investigations into within- and between-subject resting-state amplitude variations. Neuroimage 159, 57–69 (2017)
4. Cui H., et al.: BrainGB: a benchmark for brain network analysis with graph neural networks. IEEE Trans. Med. Imaging 42(2), 493–506 (2022)
5. Finn ES, Todd Constable R: Individual variation in functional brain connectivity: implications for personalized approaches to psychiatric disease. Dialogues Clin. Neurosci. 18(3), 277–287 (2016)
6. Gao M., et al.: Multimodal brain connectome-based prediction of suicide risk in people with late-life depression. Nature Mental Health 1(2), 100–113 (2023)
7. Honnorat N., et al.: Alcohol use disorder and its comorbidity with HIV infection disrupts anterior cingulate cortex functional connectivity. Biol. Psychiatry: Cogn. Neurosci. Neuroimaging 7(11), 1127–1136 (2022)
8. Horst P.: Generalized canonical correlations and their applications to experimental data. J. Clin. Psychol. 17, 331–347 (1961)
9. Hsieh S, Yang MH, Yao ZF: Age differences in the functional organization of the prefrontal cortex: analyses of competing hypotheses. Cereb. Cortex 33(7), 4040–4055 (2023)
10. Hu J, Cao L, Li T, Dong S, Li P: GAT-LI: a graph attention network based learning and interpreting method for functional brain network classification. BMC Bioinformatics 22(1), 1–20 (2021)
11. Ji J., et al.: Mapping brain-behavior space relationships along the psychosis spectrum. Elife 10 (2021)
12. Kipf TN, Welling M: Semi-supervised classification with graph convolutional networks. In: 5th International Conference on Learning Representations (ICLR 2017), Toulon, France (2017)
13. Lee K., et al.: Human brain state dynamics reflect individual neuro-phenotypes. bioRxiv (2023)
14. Li X., et al.: BrainGNN: interpretable brain graph neural network for fMRI analysis. Med. Image Anal. 74, 102233 (2021)
15. Li Y, Wei Q, Adeli E, Pohl KM, Zhao Q: Joint graph convolution for analyzing brain structural and functional connectome. In: Medical Image Computing and Computer Assisted Intervention (MICCAI 2022), Part I. LNCS, vol. 13431, pp. 231–240. Springer (2022)
16. Luo L., et al.: Patterns of brain dynamic functional connectivity are linked with attention-deficit/hyperactivity disorder-related behavioral and cognitive dimensions. Psychol. Med., 1–12 (2023)
17. Margolis RB, Scialfa CT: Age differences in Wechsler memory scale performance. J. Clin. Psychol. 40(6), 1442–1449 (1984)
18. Moon S, Hwang J, Lee H: SDGCCA: supervised deep generalized canonical correlation analysis for multi-omics integration. J. Comput. Biol. 29(8), 892–907 (2022)
19. Morrison MW, Gregory RJ, Paul JJ: Reliability of the finger tapping test and a note on sex differences. Percept. Mot. Skills 48(1), 139–142 (1979)
20. Nerrise F, Zhao Q, Poston KL, Pohl KM, Adeli E: An explainable geometric-weighted graph attention network for identifying functional networks associated with gait impairment. In: Medical Image Computing and Computer Assisted Intervention (MICCAI 2023), Part II. LNCS, vol. 14221, pp. 723–733. Springer (2023)
21. Peng L, Wang N, Xu J, Zhu X, Li X: GATE: graph CCA for temporal self-supervised learning for label-efficient fMRI analysis. IEEE Trans. Med. Imaging 42(2), 391–402 (2022)
22. Radford A., et al.: Learning transferable visual models from natural language supervision. In: Proceedings of the 38th International Conference on Machine Learning (ICML 2021). Proceedings of Machine Learning Research, vol. 139, pp. 8748–8763 (2021)
23. Ramanoël S, York E, Le Petit M, Lagrené K, Habas C, Arleo A: Age-related differences in functional and structural connectivity in the spatial navigation brain network. Front. Neural Circuits 13, 69 (2019)
24. Statsenko Y., et al.: Predicting age from behavioral test performance for screening early onset of cognitive decline. Front. Aging Neurosci. 13, 661514 (2021)
25. Sundararajan M, Najmi A: The many Shapley values for model explanation. In: Proceedings of the 37th International Conference on Machine Learning (ICML 2020). Proceedings of Machine Learning Research, vol. 119, pp. 9269–9278 (2020)
26. Thompson B.: Canonical correlation analysis. In: Reading and Understanding MORE Multivariate Statistics, pp. 285–316 (2000)
27. Tomasi D, Volkow ND: Measures of brain connectivity and cognition by sex in US children. JAMA Netw. Open 6(2), e230157 (2023)
28. Veličković P., et al.: Graph attention networks. arXiv preprint arXiv:1710.10903 (2017)
29. Weis S, Hodgetts S, Hausmann M: Sex differences and menstrual cycle effects in cognitive and sensory resting state networks. Brain Cogn. 131, 66–73 (2019)
30. Weissenbacher A, Kasess C, Gerstl F, Lanzenberger R, Moser E, Windischberger C: Correlations and anticorrelations in resting-state functional connectivity MRI: a quantitative comparison of preprocessing strategies. Neuroimage 47(4), 1408–1416 (2009)
31. Zhang C, Dougherty CC, Baum SA, White T, Michael AM: Functional connectivity predicts gender: evidence for gender differences in resting brain connectivity. Hum. Brain Mapp. 39(4), 1765–1776 (2018)
32. Zhang Y., et al.: Identification of psychiatric disorder subtypes from functional connectivity patterns in resting-state electroencephalography. Nat. Biomed. Eng. 5(4), 309–323 (2021)
33. Zhu X, Du X, Kerich M, Lohoff FW, Momenan R: Random forest based classification of alcohol dependence patients and healthy controls using resting state MRI. Neurosci. Lett. 676, 27–33 (2018)
