Published in final edited form as: Graph Learn Med Imaging (2019) 11849:88–95. doi: 10.1007/978-3-030-35817-4_11

DeepBundle: Fiber Bundle Parcellation with Graph Convolution Neural Networks

Feihong Liu 1,2, Jun Feng 1,*, Geng Chen 2, Ye Wu 2, Yoonmi Hong 2, Pew-Thian Yap 2,*, Dinggang Shen 2,*

Abstract

Parcellation of whole-brain tractography streamlines is an important step for tract-based analysis of brain white matter microstructure. Existing fiber parcellation approaches rely on accurate registration between an atlas and the tractograms of an individual; however, due to large inter-individual differences, accurate registration is hard to guarantee in practice. To resolve this issue, we propose a novel deep learning method, called DeepBundle, for registration-free fiber parcellation. Our method utilizes graph convolution neural networks (GCNNs) to predict the parcellation label of each fiber tract. GCNNs are capable of extracting the geometric features of each fiber tract and harnessing these features for accurate fiber parcellation, thereby avoiding the use of atlases and any registration method. We evaluate DeepBundle using data from the Human Connectome Project. Experimental results demonstrate the advantages of DeepBundle and suggest that the geometric features extracted from each fiber tract can be used to effectively parcellate the fiber tracts.

1. Introduction

Diffusion MRI provides valuable insights into the 3D geometric structure of brain neural fiber tracts, allowing tract-based analysis (TBA) of brain white matter microstructure in vivo [1]. TBA usually focuses on specific fiber bundles that are extracted from the whole-brain tractograms. Therefore, effective methods for segmenting whole-brain tractograms into fiber bundles of interest are desirable.

Existing fiber parcellation approaches can be divided into two categories: ROI-based and streamline-based. ROI-based approaches [2,3] first parcellate the brain surface based on an atlas and then use the parcellation ROIs to extract different fiber bundles. Streamline-based approaches [4,5] employ streamline registration methods to directly transfer the fiber parcellation information from an atlas to tractograms of an individual. These methods are directly affected by registration accuracy, which is, however, hard to guarantee in practice due to factors such as inter-subject variability, noise, and artifacts. Moreover, the registration procedure is usually time-consuming, making it unsuitable for large-scale studies and real-time applications. Recently, Wasserthal et al. [6] proposed to predict the binary mask of a fiber bundle from the whole brain fiber peaks using convolutional neural networks. This method avoids the registration procedure but is limited to generating a binary tract segmentation mask rather than the parcellation labels of the fiber tracts. In addition, this method utilizes the fiber orientation information at each voxel rather than the actual geometries of the fiber tracts.

In this work, we propose a deep-learning approach, called DeepBundle, to parcellate whole-brain tractograms without time-consuming registration. Specifically, we view the coordinates of the points on each fiber tract as functions defined on a graph. The point coordinates are then fed to a graph convolutional neural network (GCNN) to extract the latent geometric features of fiber tracts for bundle recognition. Our network is trained end-to-end with the point coordinates as inputs and parcellation labels as outputs, thus avoiding manually crafted features for bundle parcellation. During the testing stage, the point coordinates extracted from the fiber tracts of the whole brain are fed directly to the trained networks, thus avoiding the use of atlases and any registration method. Extensive experiments performed on data from the Human Connectome Project (HCP) indicate that DeepBundle yields accurate fiber parcellation labels, confirming that the geometric features extracted by GCNNs are effective for fiber bundle parcellation.

2. Methods

In this section, we will first show how fiber streamlines can be represented using graphs, and then introduce the theory of spectral graph convolution and graph pooling. Finally, we will describe our network in detail.

2.1. Graph Representation of Fiber Tracts

Consider an undirected graph $G = (V, E, W)$, where $V = \{v_1, \ldots, v_n\}$ is a set of $n$ vertices, $E \subseteq V \times V$ is the edge set, and $W = (w(i,j))$ is the $n \times n$ adjacency matrix, which is symmetric, i.e., $w(i,j) = w(j,i)$. We uniformly sample $n$ points on each fiber tract, so that the discretized tract can be represented by a line-type graph whose edge weights are given by

$$w(i,j) = \begin{cases} 1, & \text{if } i \text{ and } j \text{ are connected},\\ 0, & \text{otherwise}. \end{cases} \qquad (1)$$

The graph G now encodes the geometric relationships between sampling points on the fiber tract [7]. We then view the point coordinates extracted from a fiber tract as graph-structured data.
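As a concrete illustration, the following minimal NumPy sketch (our illustration, not the authors' released code; all names are ours) builds the line-type graph of Eq. (1) for a streamline resampled to n points and treats the point coordinates as the signal defined on its vertices.

```python
import numpy as np

def line_graph_adjacency(n):
    """Adjacency matrix W of the line-type graph in Eq. (1):
    w(i, j) = 1 if points i and j are consecutive on the streamline."""
    W = np.zeros((n, n))
    idx = np.arange(n - 1)
    W[idx, idx + 1] = 1.0
    W[idx + 1, idx] = 1.0          # symmetric: the graph is undirected
    return W

n = 100                            # points sampled uniformly on each fiber tract
W = line_graph_adjacency(n)
f = np.random.rand(n, 3)           # placeholder (x, y, z) coordinates: the graph signal
```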

2.2. Spectral Graph Convolution

To extract the underlying geometry-invariant features of each fiber tract, we employ spectral graph convolution [8], which generalizes conventional convolution in Euclidean space via the graph Fourier transform. The transform relies on the eigendecomposition of the graph Laplacian $\Delta$, which is defined as

$$\Delta = \Phi \Lambda \Phi^{T} = I_n - D^{-1/2} W D^{-1/2}, \qquad (2)$$

where $\Phi = (\varphi_1, \ldots, \varphi_n)$ is a matrix of orthonormal eigenvectors ($\Phi^{T}\Phi = I_n$), $\Lambda = \mathrm{diag}(\lambda_1, \ldots, \lambda_n)$ is the diagonal matrix of corresponding eigenvalues, $I_n$ is the identity matrix, and $D = \mathrm{diag}\bigl(\sum_{j \neq i} w(i,j)\bigr)$ is the degree matrix. The eigenvectors $(\varphi_1, \ldots, \varphi_n)$ can be interpreted as the Fourier bases, and $\Phi^{T}$ transforms features from the spatial domain to the spectral domain. Spectral convolution on the graph is then formulated as

$$f * g = \Phi\bigl((\Phi^{T} g) \odot (\Phi^{T} f)\bigr) = \Phi\, \mathrm{diag}(\hat{g}_1, \ldots, \hat{g}_n)\, \hat{f}, \qquad (3)$$

where $f = (f_1, \ldots, f_n)^{T}$ is the input signal, i.e., the geometric features of one streamline defined on the vertices of graph $G$, with graph Fourier transform $\hat{f} = \Phi^{T} f$, and $g$ is a convolutional filter in the spatial domain. The spectral convolution is the element-wise product $(\Phi^{T} g) \odot (\Phi^{T} f)$, which can equivalently be written as $\mathrm{diag}(\hat{g}_1, \ldots, \hat{g}_n)\, \hat{f}$, where $\mathrm{diag}(\hat{g}_1, \ldots, \hat{g}_n)$ is the corresponding convolutional filter in the spectral domain.
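Equations (2) and (3) amount to a few lines of linear algebra. The following NumPy sketch (our illustration, not the authors' implementation) computes the normalized graph Laplacian, its eigenbasis, and one spectral filtering step for a single streamline; the spectral coefficients g_hat are random here but are learned by the network in practice.

```python
import numpy as np

n = 100
W = np.zeros((n, n))                          # line-graph adjacency from Eq. (1)
W[np.arange(n - 1), np.arange(1, n)] = 1.0
W += W.T
f = np.random.rand(n, 3)                      # (x, y, z) coordinates on the vertices

# Normalized graph Laplacian, Eq. (2): Delta = I_n - D^{-1/2} W D^{-1/2}
d = W.sum(axis=1)
d_inv_sqrt = np.where(d > 0, d ** -0.5, 0.0)
L = np.eye(n) - d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]

# Graph Fourier basis: orthonormal eigenvectors Phi of the Laplacian
eigvals, Phi = np.linalg.eigh(L)

# Spectral filtering, Eq. (3): f * g = Phi diag(g_hat) Phi^T f
g_hat = np.random.rand(n)                     # spectral filter coefficients (learned in practice)
f_hat = Phi.T @ f                             # forward graph Fourier transform
out = Phi @ (g_hat[:, None] * f_hat)          # filter in the spectral domain, transform back
```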

Utilizing this spectral definition of convolution, GCNNs generalize CNNs to graphs. The $\ell$-th spectral convolution layer can be written as

$$f_k^{(\ell+1)} = \xi\!\left( \Phi \sum_{k'=1}^{p} \mathrm{diag}\bigl(\hat{g}^{(\ell)}_{(k,k',1)}, \ldots, \hat{g}^{(\ell)}_{(k,k',n)}\bigr)\, \Phi^{T} f_{k'}^{(\ell)} \right), \qquad (4)$$

where $F^{(\ell)} = (f_1^{(\ell)}, \ldots, f_p^{(\ell)})$ and $F^{(\ell+1)} = (f_1^{(\ell+1)}, \ldots, f_q^{(\ell+1)})$ are the $n \times p$ and $n \times q$ matrices of input and output features of the $\ell$-th spectral convolution layer; $f_{k'}^{(\ell)}$ and $f_k^{(\ell+1)}$ denote the $k'$-th column of the input matrix and the $k$-th column of the output matrix, respectively. $\Phi = (\varphi_1, \ldots, \varphi_n)$ is the $n \times n$ matrix of Fourier bases, $\mathrm{diag}\bigl(\hat{g}^{(\ell)}_{(k,k',1)}, \ldots, \hat{g}^{(\ell)}_{(k,k',n)}\bigr)$ is an $n \times n$ diagonal matrix of learnable filters in the spectral domain, and $\xi$ denotes the nonlinearity, e.g., ReLU.
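A minimal PyTorch sketch of the layer in Eq. (4) is given below. It is our illustration under the assumption that the Fourier basis Phi of the shared line graph has been precomputed; in practice, fast localized filters [10] are often used instead of learning all n spectral coefficients per filter directly.

```python
import torch
import torch.nn as nn

class SpectralGraphConv(nn.Module):
    """One spectral graph convolution layer following Eq. (4)."""

    def __init__(self, Phi, in_channels, out_channels):
        super().__init__()
        self.register_buffer("Phi", Phi)              # n x n graph Fourier basis
        n = Phi.shape[0]
        # g_hat[k, k', :] holds the n spectral coefficients of the filter
        # connecting input channel k' to output channel k.
        self.g_hat = nn.Parameter(0.01 * torch.randn(out_channels, in_channels, n))

    def forward(self, f):
        # f: (batch, n, in_channels) graph signal, e.g., point coordinates
        f_hat = torch.einsum("ij,bjp->bip", self.Phi.t(), f)        # Phi^T f
        filtered = torch.einsum("qpi,bip->biq", self.g_hat, f_hat)  # sum over input channels
        out = torch.einsum("ij,bjq->biq", self.Phi, filtered)       # back to spatial domain
        return torch.relu(out)                                      # xi = ReLU
```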

2.3. Fast Graph Pooling

We utilize Graclus [9,10] to coarsen the input graph at multiple scales, which is analogous to the pooling operation in conventional CNNs. In practice, the multi-scale graphs are organized as a binary tree, which becomes coarser from the leaf level to the root. When constructing this tree, Graclus introduces fake nodes and rearranges the nodes of the streamline, so that graph pooling can be performed using fast 1D spatial pooling. Sibling nodes at one level are aggregated into their parent node at the next level of the binary tree. The pooling process is illustrated in Fig. 1.

Fig. 1:

The architecture of the employed spectral GCNN. Red arrows denote spectral convolution layers, blue arrows denote fast max-pooling layers, the green arrow denotes a fully connected layer, and the yellow arrow denotes the softmax layer.
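The fast pooling itself is straightforward once Graclus has fixed the node ordering; a minimal sketch (coarsening and fake-node insertion omitted, names ours) is:

```python
import torch.nn.functional as F

def fast_graph_pool(f, pool_size=2):
    """1D max pooling along the node axis of a graph signal.

    Assumes Graclus has already rearranged the nodes (and padded fake nodes,
    which should carry very small values so they never win the max).
    f: (batch, n_nodes, channels) -> (batch, n_nodes // pool_size, channels)
    """
    f = f.transpose(1, 2)                         # (batch, channels, n_nodes)
    f = F.max_pool1d(f, kernel_size=pool_size)
    return f.transpose(1, 2)
```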

2.4. Network Architecture of DeepBundle

Fig. 1 illustrates the network architecture of DeepBundle. We employ a spectral GCNN with the architecture GC32-P2-GC64-P2-FC512, where GCc denotes a graph convolution layer with c filters, P2 denotes a graph pooling layer with a factor of 2, and FC512 denotes a fully connected layer with 512 hidden nodes. Each graph convolution layer is activated by the ReLU function. The last layer is a softmax regression layer, and we employ the cross-entropy loss with an l2 regularization term. We train a separate network for segmenting each fiber bundle of interest from the whole-brain tracts.

In Fig. 1, the length of the input is increased from n to k due to the introduction of fake nodes (black dots) by Graclus. We employ 32 and 64 filters in the first and second convolution layers, respectively. In each pooling layer, two neighboring nodes are aggregated into one, so an input of length k is reduced to k/4 after the two max-pooling layers.
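Putting the pieces together, a hypothetical sketch of the GC32-P2-GC64-P2-FC512 network is shown below. It reuses the SpectralGraphConv and fast_graph_pool sketches above (our names, not the authors'), and assumes the coarse-graph Fourier basis Phi_coarse comes from the Graclus-coarsened graph.

```python
import torch
import torch.nn as nn

class DeepBundleNet(nn.Module):
    """Sketch of the GC32-P2-GC64-P2-FC512 architecture (illustrative only)."""

    def __init__(self, Phi_fine, Phi_coarse, k, num_classes=2):
        super().__init__()
        # k: number of input nodes after Graclus has padded fake nodes
        self.gc1 = SpectralGraphConv(Phi_fine, in_channels=3, out_channels=32)    # GC32
        self.gc2 = SpectralGraphConv(Phi_coarse, in_channels=32, out_channels=64) # GC64
        self.fc = nn.Linear((k // 4) * 64, 512)                                   # FC512
        self.out = nn.Linear(512, num_classes)     # softmax is applied inside the loss

    def forward(self, f):                          # f: (batch, k, 3) point coordinates
        f = fast_graph_pool(self.gc1(f))           # GC32 -> P2: (batch, k/2, 32)
        f = fast_graph_pool(self.gc2(f))           # GC64 -> P2: (batch, k/4, 64)
        f = torch.relu(self.fc(f.flatten(1)))      # fully connected layer
        return self.out(f)                         # class logits

# Training would minimize cross-entropy with an l2 penalty, e.g., via the
# weight_decay argument of torch.optim.Adam.
```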

3. Results

3.1. Implementation Details

For evaluation, we used the publicly available HCP fiber tract dataset [11], which contains 105 subjects, each with 72 fiber bundles consisting of streamlines of different lengths. We uniformly resampled all streamlines to the same number of points (100), so that every streamline can be represented on a common graph. In our experiments, we randomly selected 25 subjects for training, 2 for validation, and 11 for testing. In preparing the training data, a positive label is assigned to all streamlines from the bundle of interest. Negative samples are collected by 1) selecting streamlines from spatially neighboring bundles, and 2) randomly selecting an equal number of streamlines from all other bundles. For testing, all streamlines of each testing subject were fed separately into the trained network.
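For concreteness, a minimal sketch of the uniform resampling step is given below (our illustration; it assumes each streamline is available as an m x 3 array of points).

```python
import numpy as np

def resample_streamline(points, n=100):
    """Uniformly resample a streamline (m x 3 array of points) to n points
    by linear interpolation along its cumulative arc length."""
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(seg)])           # arc length at each point
    t_new = np.linspace(0.0, t[-1], n)                     # n equally spaced positions
    return np.stack([np.interp(t_new, t, points[:, d]) for d in range(3)], axis=1)
```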

3.2. Experimental Results

Fig. 2 shows the fiber parcellation results of one randomly selected subject (#623844). Twelve bundles of interest are parcellated from more than one million whole-brain fiber tracts. Although fiber bundles in the left and right hemispheres may share similar shapes, DeepBundle is able to distinguish between them. For small bundles, e.g., the FX, which contains only hundreds of streamlines, DeepBundle consistently gives promising parcellation results. In addition, we report the quantitative parcellation results of the 12 fiber bundles in Table 1, further confirming the conclusions drawn from Fig. 2.

Fig. 2:

Qualitative results of 12 fiber bundles: Corticospinal tract (CST), Commissure anterior (CA), Corpus callosum (Rostrum (CC 1), Genu (CC 2), Isthmus (CC 6)), Uncinate fascicle (UF), Arcuate fascicle (AF), Fornix (FX). Colored streamlines indicate the true positive (TP) streamlines, while black and white denote the false positive (FP) and false negative (FN) streamlines, respectively.

Table 1:

Classification results for 12 bundles of subject #623844.

Counts CST left CST right CC 1 CC 2 CC 6 CA
TP 4755 7726 4419 51967 30082 1330
FP 244 395 143 1100 602 0
FN 54 382 29 153 320 0
Counts UF left UF right AF left AF right FX left FX right
TP 3535 4916 54448 42635 94 123
FP 0 0 199 495 6 7
FN 1 3 190 220 1 1

Table 2 shows the quantitative results computed across the 11 testing subjects. Larger precision/recall values indicate more accurate fiber parcellation. It can be observed that DeepBundle achieves high precision and recall for all bundles of interest, indicating promising fiber parcellation accuracy. It is interesting to note that the recall values of the left and right CST are smaller than those of the other fiber bundles. This is because their neighboring bundles, such as the POPT and FPT, are very similar to the CST in shape; such similarity induces ambiguities in fiber parcellation, even when performed manually.

Table 2:

Classification performance (mean ± standard deviation in %) on 12 fiber bundles from 11 subjects.

CST left CST right CC 1 CC 2 CC 6 CA
Precision 90.5 ± 7.1 91.1 ± 7.9 96.5 ± 2.1 97.6 ± 2.2 95.6 ± 2.3 99.9 ± 0.1
Recall 88.4 ± 11.6 86.6 ± 11.6 95.5 ± 4.3 98.4 ± 1.3 98.1 ± 1.8 100 ± 0.0
UF left UF right AF left AF right FX left FX right
Precision 99.7 ± 0.6 99.8 ± 0.2 99.2 ± 1.0 99.4 ± 0.6 87.3 ± 12.7 90.7 ± 14.1
Recall 99.2 ± 2.1 99.6 ± 0.4 99.4 ± 0.8 96.5 ± 2.8 97.5 ± 3.9 96.6 ± 3.8

We also compared DeepBundle with a popular method called RecoBundles [12]. We mapped all parcellated fiber streamlines to volumetric visitation maps and computed their Dice scores with respect to the gold standard. The results, shown in Table 3, indicate that DeepBundle significantly improves the Dice score and outperforms RecoBundles for all bundles of interest. RecoBundles gives significantly lower Dice scores for the CA, left FX, and right FX. DeepBundle yields relatively lower Dice scores for the left and right CST, but still significantly outperforms RecoBundles.

Table 3:

Dice scores (mean ± standard deviation in %) of 12 fiber bundles from 11 testing subjects.

CST left CST right CC 1 CC 2 CC 6 CA
RecoBundles 45.3 ± 9.7 40.6 ± 6.1 51.3 ± 13.2 63.6 ± 8.6 60.7 ± 6.31 25.2 ± 16.1
DeepBundle 80.7 ± 10.1 86.9 ± 5.1 93.0 ± 3.3 96.8 ± 1.2 97.1 ± 1.4 99.1 ± 1.6
UF left UF right AF left AF right FX left FX right
RecoBundles 46.9 ± 6.3 52.1 ± 7.7 61.3 ± 9.1 63.1 ± 12.7 8.1 ± 2.8 9.5 ± 3.6
DeepBundle 96.4 ± 5.1 98.1 ± 1.3 95.8 ± 3.8 96.2 ± 4.0 82.4 ± 9.3 86.5 ± 8.8
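The evaluation metric in Table 3 can be computed with a short sketch like the following (our illustration; it assumes streamline points have already been transformed into voxel coordinates of a common reference volume).

```python
import numpy as np

def visitation_map(streamlines_vox, shape):
    """Binary volumetric visitation map: a voxel is marked if any point of
    any streamline (given in voxel coordinates) falls inside it."""
    vol = np.zeros(shape, dtype=bool)
    for s in streamlines_vox:
        idx = np.clip(np.rint(s).astype(int), 0, np.array(shape) - 1)
        vol[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return vol

def dice(a, b):
    """Dice overlap between two binary volumes."""
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum())

# Dice between a parcellated bundle and the gold standard for one subject:
# dice(visitation_map(predicted_streamlines, shape), visitation_map(gold_streamlines, shape))
```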

4. Conclusion

In this paper, we have proposed a framework for fiber bundle parcellation using a GCNN. Our method directly predicts a tract parcellation label from the point coordinates extracted from a fiber tract. GCNNs are capable of extracting robust geometric features for tract parcellation in an end-to-end manner. Experiments on HCP data demonstrate that our method, DeepBundle, yields promising tract bundle parcellation results with high precision and recall rates. DeepBundle also achieves much higher Dice scores compared with RecoBundles. Our results also suggest that DeepBundle is even capable of effectively parcellating small tract bundles.

Acknowledgment

This work was supported in part by NIH grant NS093842 and by the Xi’an Science and Technology Project funded by the Xi’an Science and Technology Bureau under Grant 201805060ZD11CG44.

References

1. Ciccarelli O, Catani M, Johansen-Berg H, Clark C, Thompson A: Diffusion-based tractography in neurological disorders: concepts, applications, and future developments. The Lancet Neurology 7(8) (2008) 715–727
2. Yendiki A, Panneck P, Srinivasan P, Stevens A, Zöllei L, Augustinack J, Wang R, Salat D, Ehrlich S, Behrens T, et al.: Automated probabilistic reconstruction of white-matter pathways in health and disease using an atlas of the underlying anatomy. Frontiers in Neuroinformatics 5 (2011) 23
3. Wassermann D, Makris N, Rathi Y, Shenton M, Kikinis R, Kubicki M, Westin CF: The white matter query language: a novel approach for describing human white matter anatomy. Brain Structure and Function 221(9) (2016) 4705–4721
4. Garyfallidis E, Ocegueda O, Wassermann D, Descoteaux M: Robust and efficient linear registration of white-matter fascicles in the space of streamlines. NeuroImage 117 (2015) 124–140
5. Zhang F, Wu Y, Norton I, Rigolo L, Rathi Y, Makris N, O’Donnell LJ: An anatomically curated fiber clustering white matter atlas for consistent white matter tract parcellation across the lifespan. NeuroImage 179 (2018) 429–447
6. Wasserthal J, Neher P, Maier-Hein KH: TractSeg - fast and accurate white matter tract segmentation. NeuroImage 183 (2018) 239–253
7. Bronstein MM, Bruna J, LeCun Y, Szlam A, Vandergheynst P: Geometric deep learning: going beyond Euclidean data. IEEE Signal Processing Magazine 34(4) (2017) 18–42
8. Bruna J, Zaremba W, Szlam A, LeCun Y: Spectral networks and locally connected networks on graphs. In: International Conference on Learning Representations (ICLR) (2014)
9. Dhillon IS, Guan Y, Kulis B: Weighted graph cuts without eigenvectors: a multilevel approach. IEEE Transactions on Pattern Analysis and Machine Intelligence 29(11) (2007) 1944–1957
10. Defferrard M, Bresson X, Vandergheynst P: Convolutional neural networks on graphs with fast localized spectral filtering. In: Advances in Neural Information Processing Systems (NeurIPS) (2016) 3844–3852
11. Wasserthal J, Neher PF, Maier-Hein KH: Tract orientation mapping for bundle-specific tractography. In: International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), Springer (2018) 36–44
12. Garyfallidis E, Côté MA, Rheault F, Sidhu J, Hau J, Petit L, Fortin D, Cunanne S, Descoteaux M: Recognition of white matter bundles using local and global streamline-based registration and clustering. NeuroImage 170 (2018) 283–295
