Author manuscript; available in PMC: 2021 Aug 16.
Published in final edited form as: Proc IEEE Int Symp Biomed Imaging. 2021 May 25;2021:1416–1419. doi: 10.1109/ISBI48211.2021.9434018

INTRACRANIAL VESSEL WALL SEGMENTATION FOR ATHEROSCLEROTIC PLAQUE QUANTIFICATION

Hanyue Zhou 1, Jiayu Xiao 2, Zhaoyang Fan 1,2,4, Dan Ruan 1,3
PMCID: PMC8366273  NIHMSID: NIHMS1732664  PMID: 34405036

Abstract

Intracranial vessel wall segmentation is critical for the quantitative assessment of intracranial atherosclerosis based on magnetic resonance vessel wall imaging. This work improves on a previous 2D deep learning segmentation network by 1) adopting a 2.5D structure to balance network complexity and the regularization of geometric continuity; 2) using a UNET++ model to achieve structure adaptation; 3) incorporating an approximated Hausdorff distance (HD) loss into the objective to enhance geometric conformality; and 4) evaluating against a commonly used morphological measure of plaque burden, the normalized wall index (NWI), to match the clinical endpoint. The modified network achieved Dice similarity coefficients of 0.9172 ± 0.0598 and 0.7833 ± 0.0867, HD of 0.3252 ± 0.5071 mm and 0.4914 ± 0.5743 mm, and mean surface distance of 0.0940 ± 0.0781 mm and 0.1408 ± 0.0917 mm for the lumen and vessel wall, respectively. These results compare favorably to those obtained by the original 2D UNET on all segmentation metrics. Additionally, the proposed segmentation network reduced the mean absolute error in NWI from 0.0732 ± 0.0294 to 0.0725 ± 0.0333.

Index Terms— Vessel wall segmentation, deep neural networks, Hausdorff distance, UNET++

1. INTRODUCTION

Magnetic resonance (MR) vessel wall imaging (VWI) is an emerging non-invasive technology highly useful for evaluating intracranial atherosclerotic disease (ICAD), thanks to its high spatial resolution and dark-blood contrast [1], [2]. On VWI, once the lumen and vessel wall (VW) are contoured, several morphological quantitative metrics can be derived, such as normalized wall index (NWI), which is a commonly used measure of atherosclerotic plaque burden and has demonstrated important clinical value [3]–[5].

Manual segmentation of the lumen and VW is very time-consuming and subject to high intra- and inter-observer variation. This has motivated studies on automatic vessel wall segmentation. Chen et al. proposed a workflow of automatic vessel detection and segmentation by two separate convolutional neural networks (CNNs) using MR images of the extracranial carotid and popliteal arteries [6]. Wu et al. proposed a combined segmentation and diagnosis workflow for the extracranial carotid arteries [7]. However, literature focusing on the intracranial arteries, which are smaller in size and may present considerable challenges, has been scarce.

A 2D UNET method has been proposed for intracranial vessel segmentation [8]. In the present study, we further enhanced the segmentation model to 2.5D to strengthen geometry continuity across adjacent MR slices. We used a UNET++ model structure and adopted a loss function composed of both a soft Dice coefficient loss and a distance-transform approximated Hausdorff distance (HD) loss [9], [10]. The UNET++ has dense connections among different semantic scales in the network, which offers structure adaptation. The addition of HD loss encourages the geometric conformality of the segmentation to the manual labels. The modified segmentation network yielded better performances across metrics compared to the benchmark 2D UNET model.

2. MATERIALS AND METHODS

2.1. Dataset

The data used in this study were from T1-weighted MR VWI of 30 patients with diagnosed ICAD. Images were acquired with a whole-brain MR VWI protocol, using a 3-Tesla whole-body system (MAGNETOM Prisma; Siemens Healthcare, Erlangen, Germany) equipped with a 64-channel head/neck coil (Siemens Healthcare) [11], [12]. The acquired spatial resolution was isotropic 0.55 mm. In each patient, four vessel segments (i.e., the intracranial internal carotid artery, the middle cerebral artery, the intracranial vertebral artery, and the basilar artery) were included. From each of them, 30 contiguous 2D cross-sectional slices with 0.55 mm slice thickness and 0.1 mm in-plane resolution were generated using 3D Slicer (version 4.11.0) [13]. The ground truth lumen and VW were labeled by an experienced radiologist using ITK-SNAP (version 3.8.0) [14].

2.2. UNET++

UNET++ was proposed for 1) efficiently training an ensemble of UNETs with multiple depths, and 2) establishing dense and more effective skip connections among varying semantic scales of the network [9]. The UNET++ model structure is illustrated in Fig. 1. We utilized the deep supervision option in our design, where each of the D sub-decoders X^{0,i}, i ∈ [1, D], outputs a prediction and contributes to the total loss. We minimized the soft Dice coefficient (DC) loss* for each sub-decoder:

$$L_{dc}(p, y^i) = 1 - \frac{1}{N}\sum_{c=1}^{C}\sum_{n=1}^{N}\frac{2\,p_{n,c}\,y^i_{n,c}}{p_{n,c}^2 + \left(y^i_{n,c}\right)^2}, \tag{1}$$

where $p_{n,c}$ and $y^i_{n,c}$ are the ground truth and the prediction by sub-decoder $i$ of pixel $n$ for class $c$, respectively; $N$ is the total number of pixels in a batch, and $C$ is the number of classes.
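As a concrete sketch, Eq. (1) can be written in a few lines of NumPy. This is a hypothetical minimal implementation for illustration, not the authors' code; the `eps` guard is an added assumption that keeps the per-pixel ratio defined when both prediction and ground truth are zero:

```python
import numpy as np

def soft_dice_loss(p, y, eps=1e-7):
    """Per-pixel soft Dice coefficient loss, mirroring Eq. (1).

    p : ground-truth one-hot labels, shape (N, C)
    y : predicted class probabilities, shape (N, C)
    """
    n_pixels = p.shape[0]
    # 2*p*y / (p^2 + y^2), then summed over classes c and pixels n
    ratio = 2.0 * p * y / (p ** 2 + y ** 2 + eps)
    return 1.0 - ratio.sum() / n_pixels
```

For a perfect one-hot prediction, each pixel contributes exactly one unit to the double sum, so the loss approaches 0; a completely wrong prediction yields a loss of 1.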

Fig. 1.

UNET++ structure: each node is a convolution block; downward arrows denote down-sampling, upward arrows up-sampling, and dotted arrows skip connections.

Specifically, we assigned equal weights to the loss from each sub-decoder leading to the overall soft DC loss function:

$$L_{DC} = \sum_{i=1}^{D} L_{dc}(p, y^i). \tag{2}$$

2.3. Hausdorff distance loss

HD is computed between the predicted segmentation boundary and the ground truth boundary, and indicates the largest point-wise matching discrepancy. The bidirectional HD between the ground truth set P and the predicted set Y is:

$$HD(P, Y) = \max\big(hd(P, Y),\, hd(Y, P)\big), \tag{3}$$

where

$$hd(P, Y) = \max_{p \in P}\,\min_{y \in Y}\,\|p - y\|_2, \tag{4}$$
$$hd(Y, P) = \max_{y \in Y}\,\min_{p \in P}\,\|p - y\|_2. \tag{5}$$
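For small boundary point sets, Eqs. (3)–(5) can be evaluated exactly with a pairwise distance matrix. The NumPy sketch below is illustrative only (the function name is hypothetical, not the paper's evaluation code):

```python
import numpy as np

def hausdorff_distance(P, Y):
    """Exact bidirectional Hausdorff distance, Eqs. (3)-(5).

    P, Y : boundary point coordinates, shapes (n_p, d) and (n_y, d).
    """
    # Pairwise Euclidean distances: D[i, j] = ||P[i] - Y[j]||_2
    D = np.linalg.norm(P[:, None, :] - Y[None, :, :], axis=-1)
    hd_PY = D.min(axis=1).max()  # max over p in P of min over y in Y
    hd_YP = D.min(axis=0).max()  # max over y in Y of min over p in P
    return max(hd_PY, hd_YP)
```

Note that the two directed terms generally differ, which is why the bidirectional maximum in Eq. (3) is taken.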

Karimi et al. proposed three methods for approximating the HD and incorporated the estimated HD loss in the overall loss function, which enabled a direct minimization of the HD [10]. We adopted the HD loss approximated by the distance transform (DT), due to its efficient implementation. For a 2D binary image X[i, j], with 0 representing the background and 1 the foreground, the DT of X is:

$$DT_X[i, j] = \min_{k,\,l}\, d\big([i, j],\, [k, l]\big), \tag{6}$$

where [k, l] are the indices of the foreground pixels, and d denotes the Euclidean distance in our case. The HD loss is:

$$L_{HD}(p, y) = \frac{1}{N}\sum_{i,\,c,\,n}\big(p_{n,c} - y^i_{n,c}\big)^2\Big(dt_{p_{n,c}}^2 + dt_{y^i_{n,c}}^2\Big). \tag{7}$$

Here c ranges over the foreground classes, and dt_p and dt_{y'} denote the DT of the ground truth p and of the predicted binary segmentation mask y', respectively.
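To make Eqs. (6) and (7) concrete, the sketch below computes a brute-force distance transform on a small binary mask; in practice one would use `scipy.ndimage.distance_transform_edt`, which measures the distance to the nearest zero pixel and is therefore applied to the complement of the mask. Names and details here are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def distance_transform(mask):
    """Euclidean DT of a 2D binary mask, Eq. (6): each pixel's distance
    to the nearest foreground pixel (brute force, for illustration)."""
    ii, jj = np.indices(mask.shape)
    fg = np.argwhere(mask > 0)  # foreground indices [k, l]
    pts = np.stack([ii.ravel(), jj.ravel()], axis=1)
    d = np.linalg.norm(pts[:, None, :] - fg[None, :, :], axis=-1)
    return d.min(axis=1).reshape(mask.shape)

def hd_loss(p, y, dt_p, dt_y):
    """DT-approximated HD loss for one foreground class, Eq. (7)."""
    return ((p - y) ** 2 * (dt_p ** 2 + dt_y ** 2)).sum() / p.size
```

The loss vanishes when prediction and ground truth coincide, and each mismatched pixel is penalized in proportion to its squared distance from the other mask's foreground, which is what drives the boundaries toward conformality.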

2.4. Overall loss function

Combining both the soft DC loss and the HD loss with a hyperparameter λ, we formulated our overall loss function as:

$$L_{all} = \lambda L_{DC} + L_{HD}. \tag{8}$$

3. EXPERIMENT

For network development, we split the 30-patient cohort into 24:3:3 patients for training, validation, and testing, respectively. The training data were augmented by random vertical and horizontal flipping and by isotropic and anisotropic zoom-in. The 2.5D network took three consecutive 128 × 128 slices as a triple input and estimated the label of the middle slice. The classification contained three classes: background, lumen, and VW. The network specifically classified background, lumen, and the outer vessel boundary with a sigmoid activation, and subtracted the lumen from the outer boundary to obtain the VW. The soft DC loss of the VW was also added to the total soft DC loss [7]. A voting scheme was adopted for the UNET++ models, which weighted-averaged the predictions among all sub-decoders. We assigned weights of [0, 0.1, 0.8, 0.1] to nodes X^{0,i}, i ∈ [1, 4], respectively, based on validation performance.
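The sub-decoder voting scheme described above amounts to a weighted average of the D = 4 output probability maps, using the [0, 0.1, 0.8, 0.1] weighting chosen on the validation set. A minimal sketch (function and argument names are hypothetical):

```python
import numpy as np

def vote(sub_decoder_probs, weights=(0.0, 0.1, 0.8, 0.1)):
    """Weighted-average voting over the outputs of nodes X^{0,i}, i in [1, 4].

    sub_decoder_probs : list of D arrays of class probabilities, each (C, H, W).
    Returns the per-pixel argmax label map of the weighted average.
    """
    w = np.asarray(weights, dtype=float).reshape(-1, 1, 1, 1)
    fused = (w * np.stack(sub_decoder_probs)).sum(axis=0) / w.sum()
    return fused.argmax(axis=0)
```

Setting the first weight to zero simply discards the shallowest sub-decoder, whose predictions were presumably least reliable on validation.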

Similar to the model structure in our previous study, the basic convolution block consists of a 2D convolution layer, a batch-normalization layer, a PReLU layer, and another 2D convolution layer [8]. All UNET and UNET++ models have a depth of four. The base number of channels is 32, and it is doubled or halved at each down- or up-sampling step in both UNET and UNET++. The hyperparameter λ in (8) was tuned to 1 based on the validation DC loss. We trained the models for 100 epochs with an Adam optimizer and a learning rate of 1e-5. A batch size of 16 was used for all models.

The performance of our model was evaluated on the testing set with four criteria: 1) Dice similarity coefficient (DSC), 2) 95th-percentile HD (HD_95), 3) mean surface distance (MSD) from the prediction to the ground truth, and 4) mean absolute error of the NWI (MAE_NWI). While the DSC measures the integrative area discrepancy, the HD and MSD are good indicators of discrepancies at the segmentation boundaries. The NWI is a clinical morphological feature defined as $\mathrm{NWI} = |VW|\,/\,|VW \cup Lumen|$, where |·| indicates area, i.e., the wall area over the total vessel cross-sectional area. The NWI ranges from 0 to 1, with a higher value indicating a heavier plaque burden.
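The NWI itself is a simple area ratio on each cross-sectional label map; a sketch follows, where the label values (1 for wall, 2 for lumen) are assumptions for illustration rather than the paper's encoding:

```python
import numpy as np

def nwi(label_map, wall=1, lumen=2):
    """Normalized wall index on a single 2D label map:
    NWI = |VW| / |VW ∪ Lumen|, i.e. wall area over total vessel area."""
    wall_area = float((label_map == wall).sum())
    lumen_area = float((label_map == lumen).sum())
    total = wall_area + lumen_area
    return wall_area / total if total > 0 else 0.0
```

Because wall and lumen are disjoint regions, the union's area is just the sum of the two areas.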

4. RESULTS

Table I reports the segment-wise mean and standard deviation on the testing data. Fig. 2 shows typical segmentation performance with two examples. Fig. 3 presents the comparison of NWI curves from each model, along with a 3D illustration of an example vessel segment from the ground truth and from the proposed model. The proposed 2.5D UNET++ model with DC + HD loss generally achieves the best performance across the quantitative measures and also visually best resembles the ground truth.

Table I.

Quantitative comparison of models

| Model | Loss Function | Class | DSC | HD_95 (mm) | MSD (mm) | MAE_NWI |
| --- | --- | --- | --- | --- | --- | --- |
| 2D UNET | DC | Lumen | 0.9163 ± 0.0522 | 0.3467 ± 0.5173 | 0.1034 ± 0.0787 | 0.0732 ± 0.0294 |
| | | Vessel Wall | 0.7452 ± 0.1046 | 0.6146 ± 0.7147 | 0.1764 ± 0.1270 | |
| 2.5D UNET | DC | Lumen | 0.9080 ± 0.0641 | 0.3641 ± 0.5674 | 0.1047 ± 0.0765 | 0.0975 ± 0.0425 |
| | | Vessel Wall | 0.7521 ± 0.1006 | 0.5360 ± 0.5686 | 0.1657 ± 0.0971 | |
| 2.5D UNET | DC + HD | Lumen | 0.9039 ± 0.0665 | 0.3339 ± 0.4772 | 0.0998 ± 0.0444 | 0.0836 ± 0.0422 |
| | | Vessel Wall | 0.7615 ± 0.0969 | 0.4784 ± 0.4994 | 0.1423 ± 0.0640 | |
| 2.5D UNET++ | DC | Lumen | 0.9116 ± 0.0723 | 0.3771 ± 0.6205 | 0.1061 ± 0.1034 | 0.0811 ± 0.0404 |
| | | Vessel Wall | 0.7758 ± 0.0957 | 0.5281 ± 0.5886 | 0.1494 ± 0.0882 | |
| 2.5D UNET++ | DC + HD | Lumen | 0.9172 ± 0.0598 | 0.3252 ± 0.5071 | 0.0940 ± 0.0781 | 0.0725 ± 0.0333 |
| | | Vessel Wall | 0.7833 ± 0.0867 | 0.4914 ± 0.5743 | 0.1408 ± 0.0917 | |

Fig. 2.

Visualization of model performance comparison: dashed blocks (a) and (b) are two 3-slice examples from two vessel segments. The first column shows the original consecutive MRI slices (s1, s2, and s3); the second through last columns show the ground truth and the estimated segmentation from each model for the corresponding MRI slice, respectively. Black is background, grey is VW, and white is lumen.

Fig. 3.

Comparison of NWI curves from each model (a); the ground truth segmentation (b); and the predicted segmentation by the proposed model (c), shown as 3D illustrations of an example vessel segment (30 consecutive slices). The inner yellow is the lumen, and the outer grey is the VW.

5. DISCUSSION

5.1. UNET++

The UNET++ structure achieves better performance than the UNET, with its dense skip connections offering more flexibility in scale adaptation. Moreover, a weighted sum of the outputs from the deep supervision heads further enhances performance.

5.2. HD loss

The addition of the HD loss term helps reshape the class boundaries to better conform to the ground truth segmentation. It also appears to be a good surrogate objective for improving the NWI estimate. The trade-off hyperparameter λ needs to be handled with care, as the DC loss and the HD loss differ in scale. For simplicity, we assigned a fixed λ instead of a ratio between the HD and DC losses as suggested by Karimi et al., and still maintained the desired advantage [10].

5.3. NWI oscillations

The NWI oscillations shown in Fig. 3(a), and the zigzags across cross-sectional slices in the manually labeled vessel treated as ground truth in Fig. 3(b), suggest room for improvement in the accuracy and consistency of the ground truth labels. These artificial oscillations may explain the moderate improvement of the 2.5D model over our previous 2D model, as the benefit of longitudinal smoothness in the former may not be properly represented in the current manual labels. This observation suggests that an additional independent labeling pass or review is warranted, and that a spatial filter may be applied to improve the quality of the true labels.
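One hypothetical form of such a spatial filter is a simple moving average along the vessel axis, applied here to the per-slice NWI curve (it could analogously be applied to contour coordinates). This is an illustrative sketch under that assumption, not a procedure from the paper:

```python
import numpy as np

def smooth_nwi(nwi_per_slice, window=3):
    """Moving-average filter along the vessel axis to suppress
    slice-to-slice oscillations; edge slices are padded by replication
    so the output has the same length as the input (odd window assumed)."""
    kernel = np.ones(window) / window
    padded = np.pad(np.asarray(nwi_per_slice, float), window // 2, mode="edge")
    return np.convolve(padded, kernel, mode="valid")
```

A small odd window (e.g., 3 of the 30 slices) would damp the oscillations without erasing genuine longitudinal variation in plaque burden.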

5.4. NWI estimation discrepancy

The signed error between the predicted and the ground truth NWI corresponds to a 95% confidence interval of [−0.03, 0.18], as shown in Fig. 4 (left). A one-sided paired t-test at the 0.05 significance level rejects the null hypothesis for 11 of the 12 testing vessel segments. The systematic over-estimation of the NWI is more prominent for relatively normal vessel segments, as shown in Fig. 4 (right). This was caused by a bias in the NWI distribution of the training set, as cases with extremely large or small NWI values are rarely found and incorporated in the training set. This may be alleviated by incorporating more "rare cases" into the training set or by performing data augmentation. If needed, one may consider a bias-correction scheme for the NWI during post-processing.

Fig. 4.

Histogram of signed error (left) and signed error distribution (right)

6. CONCLUSION

A 2.5D UNET++ structure with a loss function composed of both a soft Dice coefficient loss and a Hausdorff distance loss yields consistent improvements across various metrics for intracranial vessel segmentation. Further assessment with the normalized wall index indicates that quantitative clinical endpoints may misalign with common segmentation metrics despite their close association. Future work includes manual contour quality assurance and potential calibration schemes to use the NWI quantitatively.

8. CONFLICTS OF INTEREST

This work is supported by NIH/NHLBI R01 HL147355.

Footnotes

7. COMPLIANCE WITH ETHICAL STANDARDS

Under IRB approval, this research was performed retrospectively using scans from human subjects, for whom informed consent was waived.

*

We use the term "soft DC loss" to differentiate it from the hard DSC measure.

9. REFERENCES

  • [1] Mandell DM et al., "Intracranial vessel wall MRI: Principles and expert consensus recommendations of the American society of neuroradiology," American Journal of Neuroradiology, vol. 38, no. 2, pp. 218–229, 2017.
  • [2] Song JW, Pavlou A, Xiao J, Kasner SE, Fan Z, and Messé SR, "Vessel Wall Magnetic Resonance Imaging Biomarkers of Symptomatic Intracranial Atherosclerosis: A Meta-Analysis," Stroke, vol. 52, no. 1, pp. 193–202, 2020.
  • [3] Xiao J et al., "Acute ischemic stroke versus transient ischemic attack: Differential plaque morphological features in symptomatic intracranial atherosclerotic lesions," Atherosclerosis, vol. 319, Jan. 2021.
  • [4] Qiao Y et al., "MR imaging measures of intracranial atherosclerosis in a population-based study," Radiology, vol. 280, no. 3, pp. 860–868, 2016.
  • [5] Wu F et al., "Differential features of culprit intracranial atherosclerotic lesions: A whole-brain vessel wall imaging study in patients with acute ischemic stroke," J. Am. Heart Assoc., vol. 7, no. 15, Aug. 2018.
  • [6] Chen L et al., "Automated Artery Localization and Vessel Wall Segmentation of Magnetic Resonance Vessel Wall Images using Tracklet Refinement and Polar Conversion," 2019.
  • [7] Wu J et al., "Deep morphology aided diagnosis network for segmentation of carotid artery vessel wall and diagnosis of carotid atherosclerosis on black-blood vessel wall MRI," Med. Phys., vol. 46, no. 12, pp. 5544–5561, 2019.
  • [8] Shi F et al., "Intracranial Vessel Wall Segmentation Using Convolutional Neural Networks," IEEE Trans. Biomed. Eng., vol. 66, no. 10, pp. 2840–2847, 2019.
  • [9] Zhou Z, Siddiquee MMR, Tajbakhsh N, and Liang J, "UNet++: Redesigning Skip Connections to Exploit Multiscale Features in Image Segmentation," IEEE Trans. Med. Imaging, 2019.
  • [10] Karimi D and Salcudean SE, "Reducing the Hausdorff Distance in Medical Image Segmentation with Convolutional Neural Networks," IEEE Trans. Med. Imaging, vol. 39, no. 2, pp. 499–513, 2020.
  • [11] Yang Q et al., "Whole-brain vessel wall MRI: A parameter tune-up solution to improve the scan efficiency of three-dimensional variable flip-angle turbo spin-echo," J. Magn. Reson. Imaging, vol. 46, no. 3, pp. 751–757, Sep. 2017.
  • [12] Fan Z et al., "Whole-brain intracranial vessel wall imaging at 3 Tesla using cerebrospinal fluid-attenuated T1-weighted 3D turbo spin echo," Magn. Reson. Med., vol. 77, no. 3, pp. 1142–1150, Mar. 2017.
  • [13] Kikinis R, Pieper SD, and Vosburgh KG, "3D Slicer: A Platform for Subject-Specific Image Analysis, Visualization, and Clinical Support," in Intraoperative Imaging and Image-Guided Therapy, Springer, New York, 2014, pp. 277–289.
  • [14] Yushkevich PA et al., "User-guided 3D active contour segmentation of anatomical structures: Significantly improved efficiency and reliability," Neuroimage, vol. 31, no. 3, pp. 1116–1128, Jul. 2006.
