Author manuscript; available in PMC: 2024 Mar 18.
Published in final edited form as: Med Image Comput Comput Assist Interv. 2023 Oct 1;14223:354–363. doi: 10.1007/978-3-031-43901-8_34

Shape-Aware 3D Small Vessel Segmentation with Local Contrast Guided Attention

Zhiwei Deng 1,2, Songnan Xu 1,2, Jianwei Zhang 1,2, Jiong Zhang 3, Danny J Wang 1, Lirong Yan 4, Yonggang Shi 1,2
PMCID: PMC10948105  NIHMSID: NIHMS1930492  PMID: 38500803

Abstract

The automated segmentation and analysis of small vessels from in vivo imaging data is an important task for many clinical applications. While current filtering and learning methods have achieved good performance on the segmentation of large vessels, they are sub-optimal for small vessel detection due to the apparent geometric irregularity and weak contrast of small vessels given the relatively limited resolution of existing imaging techniques. In addition, for supervised learning approaches, the acquisition of accurate pixel-wise annotations in these small vascular regions heavily relies on skilled experts. In this work, we propose a novel self-supervised network to tackle these challenges and improve the detection of small vessels from 3D imaging data. First, our network maximizes a novel shape-aware flux-based measure to enhance the estimation of small vasculature with non-circular and irregular appearances. Then, we develop novel local contrast guided attention (LCA) and enhancement (LCE) modules to boost the vesselness responses of low-contrast vascular regions. In our experiments, we compare with four filtering-based methods and a state-of-the-art self-supervised deep learning method on multiple 3D datasets to demonstrate that our method achieves significant improvement on all datasets. Further analysis and ablation studies have also been performed to assess the contributions of various modules to the improved performance in 3D small vessel segmentation. Our code is available at https://github.com/dengchihwei/LCNetVesselSeg.

Keywords: Small vessel, Shape-aware flux, Local contrast

1. Introduction

The automated detection and analysis of small vessels from non-invasive imaging data is critical for many clinical studies such as research on cerebral small vessel disease (CSVD) [3], which is the most common vascular cause of dementia in Alzheimer's disease and related dementias (ADRD) [5]. According to [12], a brain vessel with a diameter less than 0.5 mm is considered a small vessel by most pathologists. Fortunately, with the recent advances in MRA at 7-Tesla [7] and black-blood MRI at 3- and 7-Tesla [10], it is now possible to detect small cerebral vessels directly. Although the segmentation of large vascular structures has been well studied for many years [4, 6, 8, 11, 15], accurate and reliable small vessel segmentation remains a challenging task. In contrast to large vessels, small vessels usually exhibit the following two main characteristics (Fig. 1). (1) Their cross-sections only occupy a few image voxels due to the limited resolution of imaging techniques such as magnetic resonance angiography (MRA), so the regular assumption of a tube-like vessel shape often does not hold. (2) Small vessels often have weak intensities and low contrast, making them easily affected by noise or the surrounding background. These characteristics are generally not well modeled by existing methods for vessel detection [4, 6, 8, 11, 15].

Fig. 1. Challenges in small vessel detection. (a) A high-resolution MRA image patch (0.2 × 0.2 × 0.4 mm) acquired on a 7T Siemens Terra scanner. (b) A black-blood MRI image patch (0.5 × 0.5 × 0.5 mm) acquired on a Siemens 3T Prisma scanner. In each case, a maximal (minimal) intensity projection (left) and two cross-sections of small vessels (right) are plotted. In (a) and (b), each cross-section corresponds to the line of the same color overlaid on the left panel (green: irregular appearance of small vessel cross-sections; red: low contrast of small vessel cross-sections).

Traditional vesselness filters typically characterize blood vessels based on hand-crafted features [4, 8, 15]. Their inherent assumption of regular, tube-like vessel geometry, however, makes them sub-optimal for small vessel segmentation. In addition, most Hessian-based filters rely on complicated pre-processing, including smoothing for the calculation of second derivatives, which can further weaken or even eliminate the contrast of small vascular structures. While deep learning methods have been successfully applied to various segmentation tasks, large-scale annotated labels are hard to obtain for training supervised networks to segment highly variable small vessels. To overcome this challenge, a self-supervised deep learning approach was recently proposed that combines geometric models and deep neural networks to learn vessel flow directions [6], but it also assumes a circular tube-like vessel shape and is thus limited in segmenting arbitrarily shaped small vessel structures.

In this work, we propose a novel self-supervised network that focuses on the challenges of small vessel segmentation, with the following contributions: (1) Instead of assuming ideal tube-shaped vessels as in [6, 8], we propose an adaptive scheme for shape-aware estimation of oriented fluxes to model the irregular (non-circular) cross-section profiles of small vessels. (2) We propose the Local Contrast Attention (LCA) module, based on a novel local contrast measure, to enhance small vessel patterns and suppress background clutter simultaneously. (3) We propose a novel unsupervised learning framework that, for the first time, considers the characteristics of small vessel structures at relatively limited resolution. Comprehensive experiments show promising improvements on 3D datasets of multiple modalities compared with previous unsupervised approaches.

2. Method

Our proposed framework for small vessel detection is shown in Fig. 2: a U-shaped network augmented with multiple novel LCA and LCE modules that learns a general representation of irregular-shaped small vessels by optimizing a self-supervised shape-aware flux loss. Formally, let $\Omega \subset \mathbb{R}^3$ denote the image domain. To characterize the irregular shape of a small vessel at each point $x \in \Omega$, we estimate a principal vessel direction $\rho(x)$ and a set of radius values $R(x) = \{r_i(x)\}_{i=1,2,\ldots,m}$ that represent the vessel radius along $m$ sampling directions on the unit sphere $S^2$. Based on the estimated $R(x)$ and $\rho(x)$, we can compute the vesselness score $f_{vs}$ at $x$, which represents the likelihood of $x$ being a vascular structure. The proposed network is trained to maximize the vesselness score across $\Omega$. Next we describe in detail the shape-aware flux and the local contrast guided attention modules that enhance the detection of small vessels.
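To make this interface concrete, the following PyTorch sketch (our own illustration under stated assumptions, not the authors' released code; see the linked repository for the reference implementation) shows how a generic 3D encoder-decoder backbone could feed the three outputs described above. The channel counts, head designs, and class name are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VesselHeads(nn.Module):
    """Sketch of the three-headed output interface from Fig. 2:
    a shared U-shaped backbone (abstracted here) predicting the direction
    field P, the per-direction radius fields R, and a reconstruction of I."""
    def __init__(self, backbone, feat_ch=32, n_dirs=128):
        super().__init__()
        self.backbone = backbone                                       # any 3D encoder-decoder
        self.dir_head = nn.Conv3d(feat_ch, 3, kernel_size=1)           # principal direction P(x)
        self.radius_head = nn.Conv3d(feat_ch, n_dirs, kernel_size=1)   # radii r_1..r_m
        self.recon_head = nn.Conv3d(feat_ch, 1, kernel_size=1)         # reconstructed image I_hat

    def forward(self, image):
        feats = self.backbone(image)                     # (B, feat_ch, D, H, W)
        p = F.normalize(self.dir_head(feats), dim=1)     # unit-length direction vectors
        r = F.relu(self.radius_head(feats))              # non-negative radius estimates
        recon = self.recon_head(feats)
        return p, r, recon
```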

Fig. 2. Network overview. Our model jointly estimates the vessel directions and shapes using a vector field $P$ representing the principal direction of tubular structures and a collection of scalar fields $R$ denoting the vessel radius along different directions.

Shape-Aware Flux.

Given an input image $I$, our proposed network generates three outputs: $P$, $R = \{r_1, r_2, \ldots, r_m\}$, and the reconstructed image $\hat{I}$. We denote by $P$ a vector field that represents the principal direction at every point $x \in \Omega$, and by $R$ a collection of scalar fields, where each $r_i \in R$ is a scalar field over $\Omega$ that estimates the vessel radius along $d_i \in S^2$ ($i = 1, 2, \ldots, m$). A projected flux response along a direction $\rho$, which generalizes conventional flux measures for circular tubes [8], can be defined at $x$ as follows:

$$f(x, R(x), \rho) = -\frac{1}{m}\sum_{i}\left\langle v(x + h_i), \rho\right\rangle \left\langle \rho, d_i\right\rangle \qquad (1)$$

after discretization and normalization, where $v(\cdot)$ is the image gradient, $h_i = r_i(x)\,d_i$, and $r_i(x) \in R(x)$. The vesselness score at $x$ can then be computed as

$$f_{vs}(x, R(x), \rho_1) = -f(x, R(x), \rho_2) - f(x, R(x), \rho_3) \qquad (2)$$

where $\rho_1 = P(x)$ is the estimated principal direction at $x$ and $\rho_2, \rho_3$ are two orthogonal vectors in the cross-sectional plane of the vascular structure. In contrast to conventional flux measures for vessel segmentation, where only an isotropic radius value is estimated, the radii along different sampling directions $d_i$ are estimated adaptively in this work to fit irregular-shaped vascular boundaries, since $f_{vs}$ is maximized only when the sampling vectors $h_i$ reach the vessel edges. Because the radius values can differ from each other, our proposed network can handle small vessels with irregular, non-circular cross-sections.
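As a concrete illustration of Eqs. (1)-(2), the NumPy sketch below evaluates the shape-aware flux and vesselness at a single voxel. It is our own simplified rendering, not the released implementation: `grad_interp` is a hypothetical callable that returns interpolated image gradients at the given 3D points (e.g., built with SciPy's RegularGridInterpolator), and `dirs`/`radii` hold the $m$ sampling directions and their estimated radii.

```python
import numpy as np

def projected_flux(grad_interp, x, radii, dirs, rho):
    """Discretized Eq. (1): gradients sampled on the irregular boundary
    x + r_i(x) * d_i, projected onto the direction rho."""
    pts = x[None, :] + radii[:, None] * dirs          # (m, 3) boundary points
    grads = grad_interp(pts)                          # (m, 3) gradients v(x + h_i)
    return -np.mean((grads @ rho) * (dirs @ rho))

def vesselness(grad_interp, x, radii, dirs, rho1):
    """Eq. (2): negative flux along the two cross-sectional directions
    rho2, rho3 orthogonal to the principal direction rho1."""
    helper = np.array([1.0, 0.0, 0.0])
    if abs(helper @ rho1) > 0.9:                      # avoid a near-parallel helper axis
        helper = np.array([0.0, 1.0, 0.0])
    rho2 = np.cross(rho1, helper)
    rho2 /= np.linalg.norm(rho2)
    rho3 = np.cross(rho1, rho2)
    return (-projected_flux(grad_interp, x, radii, dirs, rho2)
            - projected_flux(grad_interp, x, radii, dirs, rho3))
```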

Local Contrast Guided Attention.

As shown in Fig. 1, the geometric features of large vessels are well preserved due to their high signal-to-noise ratio (SNR). In contrast, the low contrast of small vessel regions makes it hard to distinguish the vascular structures from background clutter. To address this issue, we propose the spatial attention module illustrated in Fig. 3(a), which enhances small vessels based on a regional contrast measure. In a vascular image, we assume the pixels inside a vessel have similar intensities, even for small structures.

Fig. 3. Local contrast guided attention. (a) The proposed LCA and LCE module framework; (b) an axial slice of a black-blood MRI image that contains multiple low-contrast small vessels (pink box); (c) the learned attention map for (b); (d-e) the vesselness maps generated by our model without and with the proposed modules for small vascular regions, respectively.

Given the estimated radius $R(x)$ for $x \in \Omega$, we define a local contrast measure $D$ as

$$D(I, x, R(x), s) = \frac{1}{2}\sum_{i=1}^{m}\left(I(x) - I(x + s \cdot h_i)\right)\left(I(x) - I(x + s \cdot h_j)\right) \qquad (3)$$

where $h_j = r_j(x)\,d_j$ and $d_j = -d_i$. With $s = 1$, $D$ measures the intensity differences between $x$ and the points $x + h_i$, which lie on the vascular boundary for every sampling direction $d_i$. By varying $s$, we can contract or expand the vascular region along its edges. To measure the contrast inside and outside the vascular region, we design two measures $D_{in}$ and $D_{out}$ as follows:

$$D_{in}(I, x) = \int_{0}^{1} D(I, x, R(x), s)\,ds, \qquad D_{out}(I, x) = \int_{1}^{2} D(I, x, R(x), s)\,ds \qquad (4)$$

With $s \in [0, 1]$, $D_{in}$ computes the intensity differences inside vessels; $D_{out}$ computes the intensity differences outside vessels with $s \in [1, 2]$. Regarding the relation between $D_{in}$ and $D_{out}$, we consider three situations for different $x \in \Omega$. (1) For $x$ inside a vascular structure, $D_{in} \ll D_{out}$ should hold, since $I(x) - I(x + s \cdot h_i) \approx 0$ when $s < 1$. (2) $D_{in}$ and $D_{out}$ give similar values if $x$ is in the background, where we consider the local regional intensities to be similar. (3) Since Eq. (3) is evaluated along two opposite directions, $D_{in}$ and $D_{out}$ will both be low if $x$ lies on a one-sided edge. Based on these three scenarios, we design a novel local contrast measure as

$$D_{LC}(I, x) = \mathrm{Sigmoid}\left(\frac{D_{out}(I, x)}{D_{in}(I, x) + \epsilon} - 1\right) \qquad (5)$$

where $\epsilon$ is a small constant to prevent numerical explosion. Since the ratio of $D_{out}$ to $D_{in}$ is large for $x$ inside a vascular structure, $D_{LC} \approx 1$ there, which means the $D_{LC}$ measure of small vessels is on a similar scale to that of large vessels. For the latter two situations, $D_{LC} \approx \sigma(0) = 0.5$, since $D_{in} \approx D_{out}$. Therefore, the proposed $D_{LC}$ measure enhances small vascular structures and suppresses backgrounds and edges simultaneously.
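For illustration, the sketch below approximates Eqs. (3)-(5) at one voxel with NumPy, replacing the integrals in Eq. (4) by finite averages over $s$. It is a sketch under assumptions rather than the authors' code: `intensity_interp` is a hypothetical image interpolator, `radii`/`radii_opp` are the estimated radii along each sampling direction $d_i$ and its opposite $d_j = -d_i$, and the number of $s$ samples is arbitrary.

```python
import numpy as np

def local_contrast(intensity_interp, x, radii, radii_opp, dirs, s):
    """Eq. (3) at scale s: products of intensity differences along each
    sampling direction d_i and its opposite d_j = -d_i."""
    i0 = intensity_interp(x[None, :])[0]                                 # I(x)
    fwd = intensity_interp(x[None, :] + s * radii[:, None] * dirs)       # I(x + s*h_i)
    bwd = intensity_interp(x[None, :] - s * radii_opp[:, None] * dirs)   # I(x + s*h_j)
    return 0.5 * np.sum((i0 - fwd) * (i0 - bwd))

def d_lc(intensity_interp, x, radii, radii_opp, dirs, n_steps=8, eps=1e-6):
    """Eqs. (4)-(5): inside/outside contrasts and the sigmoid-normalized ratio."""
    d_in = np.mean([local_contrast(intensity_interp, x, radii, radii_opp, dirs, s)
                    for s in np.linspace(0.0, 1.0, n_steps)])
    d_out = np.mean([local_contrast(intensity_interp, x, radii, radii_opp, dirs, s)
                     for s in np.linspace(1.0, 2.0, n_steps)])
    ratio = d_out / (d_in + eps)
    return 1.0 / (1.0 + np.exp(-(ratio - 1.0)))    # Sigmoid(D_out / (D_in + eps) - 1)
```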

Guided by the local contrast map of the image, we incorporate a novel spatial attention module in our network. Following CBAM [16], we use two pooling layers to abstract the preceding features into two feature maps. A local contrast map is computed based on a coarsely estimated $\hat{R}$, which is generated from the preceding features. All three maps are then sent to a convolutional block to compute the final spatial attention map. The LCE module extends the LCA module by scaling the preceding features with the attention map to refine them. To include low-level contrast information, we insert the LCA and LCE modules in the skip connections between encoders and decoders, as shown in Fig. 2. As shown in Fig. 3(b-d), with the local contrast attention modules, our model enhances the vesselness measure of small vessels with low contrast.
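The snippet below sketches how such a CBAM-style spatial attention block could be wired up in PyTorch. The layer sizes, the kernel size, and the abstraction of the local contrast map as a callable (`contrast_fn`, assumed to implement Eq. (5) from the coarse radius estimate) are our assumptions, not the released implementation.

```python
import torch
import torch.nn as nn

class LocalContrastAttention(nn.Module):
    """Sketch of an LCA block: CBAM-style spatial attention guided by D_LC."""
    def __init__(self, in_channels, n_dirs=128):
        super().__init__()
        self.radius_head = nn.Conv3d(in_channels, n_dirs, kernel_size=1)  # coarse R_hat
        self.fuse = nn.Sequential(                       # fuse the three spatial maps
            nn.Conv3d(3, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, feats, image, contrast_fn):
        max_map, _ = feats.max(dim=1, keepdim=True)      # channel-wise max pooling (CBAM)
        mean_map = feats.mean(dim=1, keepdim=True)       # channel-wise mean pooling (CBAM)
        coarse_r = torch.relu(self.radius_head(feats))   # coarse radius estimate R_hat
        contrast = contrast_fn(image, coarse_r)          # local contrast map, (B, 1, D, H, W)
        attn = self.fuse(torch.cat([max_map, mean_map, contrast], dim=1))
        return feats * attn                              # LCE: rescale the incoming features
```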

Self-supervised Losses.

Based on Eq. (2), we propose a flux-based loss $L_{flux}$ defined as the negative average vesselness score over the whole image domain. This cost function takes advantage of the clear edge separation of flux-based filters and is robust to irregular-shaped vessels thanks to our adaptive shape-aware flux computation. To ensure the continuity of vessel structures, we adopt the path continuity loss $L_{path}$ from [6]. Let $P(x)$ denote the vascular principal direction at $x$, and $P(x + t)$ the principal direction at the location $x + t$, obtained by walking a distance $t$ from $x$ along $P(x)$. $L_{path}$ is designed to maximize the inner product of these two vectors, encouraging $P$ to have a consistent and smooth direction along the vessels. Formally, $L_{flux}$ and $L_{path}$ are computed as

$$L_{flux}(P, R) = -\frac{1}{N}\sum_{x \in \Omega} f_{vs}(x, R(x), \rho(x)), \qquad L_{path}(P, R) = -\int_{\Omega}\int_{0}^{2\bar{R}(x)} \left(P(x) \cdot P(x + t)\right)\,dt\,dx \qquad (6)$$

where $N$ is the total number of voxels and $\bar{R}(x)$ is the average radius in $R(x)$, so that the length of the walking path is relative to the vessel size at $x$. A mean squared error loss is applied between the reconstructed image $\hat{I}$ and the original image $I$ to ensure the network learns semantically meaningful features, building on the previous success of reconstruction-based self-supervised learning [9]. The overall objective of our network is thus expressed as

$$L = \lambda_1 L_{flux} + \lambda_2 L_{path} + \lambda_3\,\mathrm{MSE}(\hat{I}, I) \qquad (7)$$

where $\lambda_1$, $\lambda_2$, and $\lambda_3$ are the weighting coefficients of the respective terms.
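A minimal sketch of the overall objective in Eq. (7) is given below, assuming the flux and path terms have already been reduced to scalars as in Eq. (6); the coefficient defaults match the values reported in Sec. 3, while the function names are our own.

```python
import torch.nn.functional as F

def flux_loss(vesselness):
    """L_flux: negative mean vesselness score over the image domain (Eq. 6, left term)."""
    return -vesselness.mean()

def total_loss(l_flux, l_path, recon, image, lam1=5.0, lam2=1.0, lam3=1.0):
    """Eq. (7): weighted sum of the flux, path-continuity and reconstruction terms.
    l_path is assumed to be precomputed from the direction field P (Eq. 6, right term)."""
    return lam1 * l_flux + lam2 * l_path + lam3 * F.mse_loss(recon, image)
```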

3. Experimental Results

Datasets.

We used two public datasets and two in-house datasets in our experiments on 3D vessel segmentation. The VESSEL12 dataset [14] contains 23 lung CT images, 3 of which are provided with sparsely annotated vessel and non-vessel locations along 3 axial slices; we used these as a test set. The TubeTK dataset [2] consists of 109 3D brain MRA images. We used the 42 images with ground truth as the test set and the rest of the dataset for training. To better demonstrate the small vessel detection ability of our method, we collected a new brain MRA dataset on a 7T Siemens Terra scanner from 31 subjects with a voxel size of 0.2 × 0.2 × 0.4 mm. For this dataset, we sparsely annotated 2200 small vessel locations (within 3 × 3 grids) on axial slices of 7 images according to the definition of a small vessel in [12] (Fig. 4(i-j)). The remaining 24 images constitute the training set. In addition, a black-blood MRI dataset was collected on a Siemens 3T Prisma scanner with a voxel size of 0.5 × 0.5 × 0.5 mm and separated into left and right hemispheres for a total of 56 image volumes. Dense manual segmentation of the lenticulostriate arteries (LSAs) was carefully performed by two experts. The dataset was divided into a training set of 21 subjects (42 volumes) and a test set of 7 subjects (14 volumes).

Fig. 4. Qualitative comparison. (a-h) Comparison on the TubeTK dataset. Green: true positives; red: false negatives. (i-j) An example of our manual labels for a 7T MRA image patch; (k-l) small vessel detection comparison for the patch in (i-j).

Implementation Details.

We performed vessel segmentation on each dataset to evaluate our model's performance. For comparison, we selected four conventional filters, namely the Frangi [4], Sato [15], Meijering [11], and OOF [8] filters, and a flow-based DL method [6]. For a fair comparison, the same pre-processing procedures were applied to all methods using FreeSurfer 6.0 [13] and the SimpleITK toolbox [1]. In addition, for the black-blood LSA dataset, the image intensities were inverted to make sure the vascular voxels are brighter than the background.

To tune the network's hyper-parameters, we used 15% of the training data as a validation set. We set $m = 128$, $\lambda_1 = 5$, and $\lambda_2 = \lambda_3 = 1$ for our model. For both DL models, the patch size was set to 64 × 64 × 64 and the networks were trained for 100 epochs. All experiments were conducted with PyTorch on one NVIDIA A5000 GPU using the Adam optimizer with a learning rate of 0.001.

For all methods, the output is a vessel-enhanced image, which is binarized with a hard threshold. We found the best threshold by optimizing the metrics on the validation sets and then applied it to the test sets. We report five metrics in Table 1: the area under the curve (AUC), accuracy, sensitivity, specificity, and Dice score. For the VESSEL12 and 7T MRA datasets, we treated the segmentation tasks as classification problems and dropped the Dice score since these datasets do not have dense labels.
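As an illustration of this thresholding and evaluation protocol, the sketch below picks a hard threshold on validation data and applies it to test data; the use of scikit-learn for AUC and the candidate threshold grid are our own choices, not necessarily those used in the paper.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def dice(pred, gt):
    """Dice overlap between a binary prediction and the ground truth."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + 1e-8)

def pick_threshold(vesselness, labels, candidates=np.linspace(0.05, 0.95, 19)):
    """Choose the hard threshold that maximizes Dice on the validation set."""
    scores = [dice(vesselness > t, labels) for t in candidates]
    return candidates[int(np.argmax(scores))]

# usage sketch: tune on validation data, then evaluate on the test set
# t = pick_threshold(val_vesselness, val_labels)
# test_auc = roc_auc_score(test_labels.ravel(), test_vesselness.ravel())
# test_dice = dice(test_vesselness > t, test_labels)
```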

Table 1.

Vessel Segmentation Performance Comparison on Different Datasets.

             VESSEL12                            7T MRA
Method       Acc     Sens    Spec    AUC         Acc     Sens    Spec    AUC
Sato         0.8299  0.7580  0.8636  0.9102      0.8716  0.9705  0.7347  0.8933
Meijering    0.9184  0.8932  0.9301  0.9720      0.7060  0.9377  0.3854  0.6547
Frangi       0.9672  0.9584  0.9669  0.9738      0.8130  0.7008  0.9683  0.8104
OOF          0.9331  0.9324  0.9334  0.9659      0.8416  0.8074  0.8889  0.8623
Flow-Based   0.9553  0.9856  0.9213  0.9871      0.9247  0.9432  0.9261  0.9485
Ours         0.9761  0.9809  0.9793  0.9937      0.9824  0.9787  0.9875  0.9972

             TubeTK                              Black-blood LSA
Method       Dice    Sens    Spec    AUC         Dice    Sens    Spec    AUC
Sato         0.3166  0.3933  0.9964  0.9262      0.0596  0.7111  0.92251 0.9432
Meijering    0.1573  0.1604  0.9970  0.9023      0.1669  0.6512  0.9797  0.9459
Frangi       0.3569  0.3641  0.9977  0.9319      0.2667  0.6221  0.9905  0.9281
OOF          0.3877  0.3874  0.9980  0.9426      0.3368  0.6454  0.9886  0.9584
Flow-Based   0.4003  0.4211  0.9978  0.9693      0.4608  0.6673  0.9927  0.9732
Ours         0.5487  0.5061  0.9987  0.9878      0.5121  0.6979  0.9983  0.9624

Results and Discussion.

We can clearly observe from Table 1 that our model outperforms all other methods by a significant margin on all datasets. Since the flow-based DL method outperformed the conventional filters, we use it for visual comparison with our method. From Fig. 4(a-d), we observe that our model generates more accurate segmentations with more true positives and fewer false negatives than the flow-based DL method [6]. These results show that our model can better detect general vessel structures, including large vessels.

Small Vessel Segmentation.

From Fig. 4(e-h), we can see that the main improvement of our model over the flow-based DL method occurs at areas of small vessels, where our results have far fewer false negatives on small vessels missed by the other method. The results on the 7T MRA data in Table 1 further demonstrate that our model achieves better performance on the small vessel detection task. Fig. 4(k-l) provides a visual comparison of small vessel detection by our method and the flow-based DL, showing that our model more successfully segments vessels with very small radii and low contrast. In addition, Fig. 5 shows that our estimated principal directions are better aligned with the vessel directions than those of the flow-based DL. Furthermore, our estimated radii better capture the vascular shapes by fitting the asymmetric vessel boundaries. Thus, compared with the flow-based DL, our model produces sharper and clearer vesselness maps (Fig. 5).

Fig. 5. Vessel estimation comparison. Comparison between our model and the flow-based model on vessel direction and shape estimation.

Ablation Study.

To investigate the contribution of each proposed module, we performed ablation studies by training the network with each module on the 7T MRA dataset. The results are shown in Table 2, where we used the flow-based DL method as the baseline. Note that CS stands for circular sampling and AS for the proposed shape-aware adaptive sampling. With all the proposed modules, we increase the AUC of the baseline by 4.87% on the test set, a significant improvement with only minimal extra computational cost. Furthermore, we observed that all the proposed modules improve both sensitivity and specificity, since complicated small vessels are better modeled by these components.

Table 2.

Ablation Analysis Results with the 7T high-resolution MRA Dataset.

Method            Acc     Sens    Spec    AUC     # of Params
Flow-based [6]    0.9247  0.9432  0.9261  0.9485  32.63M
Ours (CS only)    0.9344  0.9478  0.9329  0.9554  32.63M
Ours (AS only)    0.9578  0.9669  0.9333  0.9662  32.64M
Ours (LCA only)   0.9747  0.9699  0.9732  0.9865  32.65M
Ours (AS + LCA)   0.9824  0.9787  0.9875  0.9972  32.66M

4. Conclusions

In this paper, we proposed a self-supervised network for the detection of small vessels from 3D imaging data. Our method is designed to address the existing challenges in small vessel detection arising from irregular vessel appearance at relatively limited resolution and under low-contrast conditions. In comparison to previous methods, we demonstrated that our method achieves superior performance on small vessel detection. For future work, we will apply it to various clinical datasets to examine its potential for CSVD detection in brain images.

Acknowledgments

This work is supported by the National Institute of Health (NIH) under grants R01EB022744, RF1AG077578, RF1AG056573, RF1AG064584, RF1AG072490, R21AG064776, P41EB015922, U19AG078109.

References

  • 1. Beare R, Lowekamp B, Yaniv Z: Image segmentation, registration and characterization in R with SimpleITK. Journal of Statistical Software 86 (2018)
  • 2. Bullitt E, Zeng D, Gerig G, Aylward S, Joshi S, Smith JK, Lin W, Ewend MG: Vessel tortuosity and brain tumor malignancy: a blinded study. Academic Radiology 12(10), 1232–1240 (2005)
  • 3. Cuadrado-Godia E, Dwivedi P, Sharma S, Santiago AO, Gonzalez JR, Balcells M, Laird J, Turk M, Suri HS, Nicolaides A, et al.: Cerebral small vessel disease: a review focusing on pathophysiology, biomarkers, and machine learning strategies. Journal of Stroke 20(3), 302 (2018)
  • 4. Frangi AF, Niessen WJ, Vincken KL, Viergever MA: Multiscale vessel enhancement filtering. In: Wells WM, Colchester A, Delp S (eds.) Medical Image Computing and Computer-Assisted Intervention — MICCAI'98, pp. 130–137. Springer Berlin Heidelberg, Berlin, Heidelberg (1998)
  • 5. Gorelick PB, Scuteri A, Black SE, DeCarli C, Greenberg SM, Iadecola C, Launer LJ, Laurent S, Lopez OL, Nyenhuis D, Petersen RC, Schneider JA, Tzourio C, Arnett DK, Bennett DA, Chui HC, Higashida RT, Lindquist R, Nilsson PM, Roman GC, Sellke FW, Seshadri S: Vascular contributions to cognitive impairment and dementia. Stroke 42(9), 2672–2713 (2011)
  • 6. Jena R, Singla S, Batmanghelich K: Self-supervised vessel enhancement using flow-based consistencies. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 242–251. Springer (2021)
  • 7. Kraff O, Quick HH: 7T: physics, safety, and potential clinical applications. Journal of Magnetic Resonance Imaging 46(6), 1573–1589 (2017)
  • 8. Law MW, Chung A: Three dimensional curvilinear structure detection using optimally oriented flux. In: European Conference on Computer Vision, pp. 368–382. Springer (2008)
  • 9. Li X, Liu S, Kim K, De Mello S, Jampani V, Yang MH, Kautz J: Self-supervised single-view 3D reconstruction via semantic consistency. In: European Conference on Computer Vision, pp. 677–693. Springer (2020)
  • 10. Ma SJ, Sarabi MS, Yan L, Shao X, Chen Y, Yang Q, Jann K, Toga AW, Shi Y, Wang DJ: Characterization of lenticulostriate arteries with high resolution black-blood T1-weighted turbo spin echo with variable flip angles at 3 and 7 Tesla. NeuroImage 199, 184–193 (2019)
  • 11. Meijering E, Jacob M, Sarria JC, Steiner P, Hirling H, Unser M: Design and validation of a tool for neurite tracing and analysis in fluorescence microscopy images. Cytometry Part A 58(2), 167–176 (2004)
  • 12. Pantoni L, Sarti C, Alafuzoff I, Jellinger K, Munoz DG, Ogata J, Palumbo V: Postmortem examination of vascular lesions in cognitive impairment: a survey among neuropathological services. Stroke 37(4), 1005–1009 (2006)
  • 13. Reuter M, Rosas HD, Fischl B: Highly accurate inverse consistent registration: a robust approach. NeuroImage 53(4), 1181–1196 (2010)
  • 14. Rudyanto RD, Kerkstra S, van Rikxoort EM, Fetita C, Brillet PY, Lefevre C, Xue W, Zhu X, Liang J, Öksüz I, et al.: Comparing algorithms for automated vessel segmentation in computed tomography scans of the lung: the VESSEL12 study. Medical Image Analysis 18(7), 1217–1232 (2014)
  • 15. Sato Y, Nakajima S, Shiraga N, Atsumi H, Yoshida S, Koller T, Gerig G, Kikinis R: Three-dimensional multi-scale line filter for segmentation and visualization of curvilinear structures in medical images. Medical Image Analysis 2(2), 143–168 (1998)
  • 16. Woo S, Park J, Lee JY, Kweon IS: CBAM: convolutional block attention module. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 3–19 (2018)
