Author manuscript; available in PMC: 2017 Feb 24.
Published in final edited form as: Annu Int Conf IEEE Eng Med Biol Soc. 2016 Aug;2016:5913–5916. doi: 10.1109/EMBC.2016.7592074

Multiquadric Spline-Based Interactive Segmentation of Vascular Networks

Sachin Meena 1, V B Surya Prasath 1, Yasmin M Kassim 1, Richard J Maude 6,7,8, Olga V Glinskii 2,3, Vladislav V Glinsky 2,4, Virginia H Huxley 3,5, Kannappan Palaniappan 1
PMCID: PMC5324779  NIHMSID: NIHMS833674  PMID: 28261011

Abstract

Commonly used drawing tools for interactive image segmentation and labeling include active contours or boundaries, scribbles, rectangles and other shapes. Thin vessel shapes in images of vascular networks are difficult to segment using automatic or interactive methods. This paper introduces the novel use of a sparse set of user-defined seed points (supervised labels) for precisely, quickly and robustly segmenting complex biomedical images. A multiquadric spline-based binary classifier is proposed as a unique approach for interactive segmentation, using the color values and locations of the seed points as features. Epifluorescence imagery of the dura mater microvasculature is difficult to segment for quantitative applications due to challenging tissue preparation, imaging conditions, and thin, faint structures. Experimental results on twenty epifluorescence images are used to illustrate the benefits of using a set of seed points to obtain fast and accurate interactive segmentation compared to four interactive and automatic segmentation approaches.

I. Introduction

Interactive methods for clinical and biomedical image segmentation have been investigated for over two decades, since the pioneering work of Live-Wire, Live-Lane [1] and Intelligent Scissors [2]. Fully automatic image segmentation is essential for quantitative analysis but remains an unsolved problem, so user-driven interactive methods continue to be a powerful alternative when extremely precise segmentation is required. However, manual methods, although routinely used, are tedious, time-consuming, expensive, inconsistent between experts, and error-prone. In semi-supervised interactive segmentation the goal is for the user to provide a small amount of partial information, or hints, that an automatic algorithm can use to produce accurate boundaries suitable for the user. The coupled interaction between the user-provided input and the semi-supervised segmentation algorithm should be minimal and robust.

Commonly used drawing tools for interactive segmentation interfaces include active contour or boundary drawing, scribbles to identify foreground and background regions, and rectangles to outline the object of interest. In the case of complex anatomy, tissue structures and imaging conditions, even such interactive segmentation tools still require a significant amount of user intervention to achieve accurate segmentation results. An example of a difficult case is the vessel network segmentation for the epifluorescence image of dura mater microvasculature shown in Figure 1. Since the vessel structures are thin, manually drawing contours, scribbles or rectangles using current interactive segmentation tools is tedious, slow and often inaccurate, creating a need for better semi-automated delineation tools [3], [4]; the latter includes a survey of software tools for reconstructing curvilinear structures. Our motivation is to develop quantitative tools to study the effects of hormone therapy on angiogenesis and vascular remodeling of capillaries [3], [4].

Fig. 1.


The proposed interactive MQ segmentation result using 29 seeds produces accurate results. (a) Dura mater vessel image 012706-ERBKO-05 (contrast enhanced for visualization), (b) ground-truth (GT) vessel segmentations provided by an expert physiologist. (c) result of MQ segmentation without pre- or post-processing (Dice 0.8872).

In this paper we propose a new paradigm of using only labeled seed points, which do not need to carefully follow centerlines or boundaries, for user-assisted segmentation of difficult biomedical imagery with curvilinear structures. Seed points are a much easier and more intuitive way for the user to provide labeled input, and they require minimal interaction compared to contours, lines, rectangles or other shapes. We introduced point-based segmentation using elastic body splines and Gaussian elastic body splines in [5]–[7] and showed their advantages for segmenting natural imagery using an average of fewer than ten seed points. In this paper, we focus on characterizing the utility of interactive seed points for segmenting thin vessels in epifluorescence imagery and extend the basis set to multiquadric splines. Figure 1 shows a case where 9 foreground and 20 background seed points produce a highly accurate segmentation result, with a Dice value of 88.7 percent based on scoring the overlap with the manual ground-truth.

Many techniques exist for automatic segmentation of vessels, especially in angiogram imaging for characterizing neurological, retinal, and heart regions [8]. The majority of these are fully automatic segmentation methods which do not require any user intervention. Interactive segmentation methods specialized for thin curvilinear structures provide flexibility to the user in defining and refining object boundaries. Interactive segmentation techniques such as Graph-Cut [9] and Random Walker [10] are popular approaches in which the user provides scribbles on the object of interest and the background. In Live-Wire [1] and Intelligent Scissors [2] the user carefully draws along the object boundary edges in a semi-automated manner. Being edge-based methods, they perform poorly in the presence of weak edges and noise. Recently in [5] and [6] we proposed using a sparse set of seed points to perform user-driven image segmentation. Sparse input leads to a very fast image segmentation process.

In this work we utilize multiquadric (MQ) splines for point-based interactive segmentation of thin curvilinear structures in epifluorescence imagery. Unlike other semi-automated tools [4], our interactive segmentation framework requires only a few labeled seed points and can obtain vessel segmentations despite inhomogeneous backgrounds, partially diffused vessels and uneven contrast. We frame the interactive image segmentation task as a semi-supervised interpolation problem which uses the MQ as the basis function in the governing interpolation equation. This work is part of a system that builds upon our previous work on multi-focus fusion and edge preserving smoothing [11]–[13].

The rest of the paper is organized as follows. Section II introduces our multiquadric spline-based interactive image segmentation framework. Section III provides detailed experimental results on epifluorescence imagery of mice dura mater capillaries and a quantitative performance comparison with other interactive and global thresholding methods, followed by conclusions in Section IV.

II. Multiquadric Splines for Interactive Image Segmentation

A. Interactive segmentation with splines

Epifluorescence imagery of tissues has high variability in both arteriole and venule vessel boundaries. We consider a semi-automated segmentation framework based on user-defined seed points to obtain reliable segmentations. For this purpose we learn the parameters of a binary classifier function df (x⃗) of the following form from the user-supplied inputs,

$$d_f(\vec{x}) = \sum_{i=1}^{m} a_i x_i + b + \sum_{j=1}^{N} c_j\, g(\vec{x} - \vec{x}_j), \tag{1}$$

such that,

$$\sum_{j=1}^{N} c_j\, x_{j,k} = 0, \qquad k = 1, \ldots, 5, \tag{2}$$

where x⃗ = [x1, x2, . . . , x5]T and xi is the ith feature, m is the number of features (m = 5 in our case), cj is the spline coefficient of the jth seed point, ai and b are the coefficients of the linear term, N is the total number of foreground and background seed points, and g is the multiquadric spline function given by,

$$g(\vec{x} - \vec{x}_j) = \sqrt{d^2(\vec{x}, \vec{x}_j) + R^2}, \tag{3}$$

where d is the distance between the data points x⃗ and x⃗j, that is d²(x⃗, x⃗j) = ||x⃗ − x⃗j||², and R² is a user-defined positive constant. This function is similar to the extensor function for interpolation [14]. Equation (1), evaluated at the seed points, constitutes a linear system of equations, and the constraint in (2) ensures that the system has a unique solution. In equation (1) the first term is the linear part of the interpolation function while the second term represents the non-linear part. For the task of interactive segmentation, df (x⃗) is learned from the user-labeled seed points such that df (x⃗) = +1 for the foreground pixels and df (x⃗) = −1 for the background pixels.
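As a concrete illustration, the basis function (3) and the classifier (1) can be evaluated with a few lines of NumPy; this is a sketch with our own variable names, not code from the authors:

```python
import numpy as np

def mq_basis(x, seeds, R2=1.0):
    """Multiquadric basis g(x - x_j) = sqrt(d^2(x, x_j) + R^2), Eq. (3)."""
    d2 = np.sum((seeds - x) ** 2, axis=1)  # squared distances to all N seeds
    return np.sqrt(d2 + R2)

def d_f(x, seeds, c, a, b, R2=1.0):
    """Classifier d_f(x) = sum_i a_i x_i + b + sum_j c_j g(x - x_j), Eq. (1)."""
    return a @ x + b + c @ mq_basis(x, seeds, R2)
```

Here `seeds` is an N × 5 array of labeled seed feature vectors, and `c`, `a`, `b` are the coefficients estimated in Section II-B.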

Once the MQ interpolation spline function (1) is learned, we can threshold the interpolation function df (x⃗) at zero and assign the label ℓ(x⃗),

$$\ell(\vec{x}) = \begin{cases} \text{foreground}, & \text{if } d_f(\vec{x}) \ge 0, \\ \text{background}, & \text{if } d_f(\vec{x}) < 0. \end{cases} \tag{4}$$

For the interactive image segmentation task we use five image features at each pixel, namely the three color values (Red, Green, Blue) converted to the LAB color space and the two coordinate locations of each seed pixel. For the interpolation task the use of coordinate location as a feature is critical, as it affects the influence of seed points and provides geometric spatial regularization.
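The five-dimensional feature vectors can be assembled as sketched below; this assumes the RGB-to-LAB conversion has already been applied (e.g., with `skimage.color.rgb2lab`), and whether to normalize the coordinate features is an implementation choice not specified in the paper:

```python
import numpy as np

def pixel_features(lab_image):
    """Build the per-pixel 5-D features: L, A, B color values plus the
    (row, col) pixel coordinates. lab_image is an H x W x 3 LAB array."""
    h, w, _ = lab_image.shape
    rows, cols = np.mgrid[0:h, 0:w]               # coordinate grids
    stacked = np.dstack([lab_image, rows, cols])  # H x W x 5
    return stacked.reshape(-1, 5).astype(float)   # one row per pixel
```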

B. Multiquadric splines

In this work we use multiquadric (MQ) splines as they provide robust interpolation of data points in high dimensional spaces. Let w⃗ be the vector of all the MQ coefficients given as,

$$\vec{w} = \begin{bmatrix} \vec{c}_F^{\,T} & \vec{c}_B^{\,T} & \vec{a}^{T} & b \end{bmatrix}^{T}, \tag{5}$$

where c⃗F and c⃗B are the coefficients corresponding to the Nf foreground and Nb background seed pixels,

$$\vec{c}_F = [c_1 \cdots c_{N_f}]^{T}, \quad \vec{c}_B = [c_1 \cdots c_{N_b}]^{T}, \quad \vec{a} = [a_1 \cdots a_5]^{T}.$$

The MQ coefficients are estimated by solving the matrix equation,

$$\vec{w} = L^{-1} \vec{Y}, \tag{6}$$

where,

$$L = \begin{bmatrix} K & P \\ P^{T} & O \end{bmatrix}, \qquad K = \begin{bmatrix} G_{FF} & G_{FB} \\ G_{BF} & G_{BB} \end{bmatrix}, \tag{7}$$

$$G_{FF}(r) = \begin{bmatrix} G_{11}(r_{11}) & \cdots & G_{1N_f}(r_{1N_f}) \\ \vdots & \ddots & \vdots \\ G_{N_f 1}(r_{N_f 1}) & \cdots & G_{N_f N_f}(r_{N_f N_f}) \end{bmatrix}, \tag{8}$$

with rij = d(x⃗i, x⃗j), where d is the distance between the feature vectors x⃗i and x⃗j, d² = ||x⃗i − x⃗j||². GFF is the matrix of MQ spline functions (3) defined only over the foreground pixels,

$$G(i,j) = g(\vec{x}_i - \vec{x}_j) = \sqrt{d^2(\vec{x}_i, \vec{x}_j) + R^2}.$$

Similarly GFB, GBB and GBF are also defined. We have,

$$P = \begin{bmatrix} P_F & \vec{I}_F \\ P_B & \vec{I}_B \end{bmatrix}, \tag{9}$$

where the ones vectors multiplying the constant term b are given by,

$$\vec{I}_F = [1 \cdots 1]^{T} \in \mathbb{R}^{N_f}, \quad \vec{I}_B = [1 \cdots 1]^{T} \in \mathbb{R}^{N_b}, \tag{10}$$

and

$$P_F = \begin{bmatrix} x_{11} & \cdots & x_{15} \\ \vdots & \ddots & \vdots \\ x_{N_f 1} & \cdots & x_{N_f 5} \end{bmatrix}, \tag{11}$$

where xij is the jth feature of the ith foreground pixel, I⃗F and I⃗B are vectors of ones, and PB is defined similarly over the background pixels. The vector Y⃗ is,

$$\vec{Y} = \begin{bmatrix} \vec{Y}_F^{\,T} & \vec{Y}_B^{\,T} & \vec{O}^{T} \end{bmatrix}^{T}, \tag{12}$$

where Y⃗F = [1 . . . 1]T contains the +1 labels of the foreground seed points, Y⃗B = [−1 . . . −1]T contains the −1 labels of the background seed points, and O⃗ = [0 . . . 0]T is a vector of zeros corresponding to the constraint (2). We refer to [5] and [6] for more details on solving the matrix problem (6).
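Putting the pieces together, the system of equations (6)–(12) can be assembled and solved with a dense solver, which is practical here since N is only tens of seed points. The sketch below uses our own names (not the authors' code) and orders the foreground seeds first:

```python
import numpy as np

def fit_mq(seeds, labels, R2=1.0):
    """Assemble L and Y from Eqs. (7)-(12) and solve w = L^{-1} Y (Eq. 6).
    seeds : N x 5 feature vectors of the labeled pixels,
    labels: +1 for foreground seeds, -1 for background seeds."""
    N, m = seeds.shape
    diff = seeds[:, None, :] - seeds[None, :, :]
    K = np.sqrt(np.sum(diff ** 2, axis=2) + R2)  # MQ kernel blocks G_FF ... G_BB
    P = np.hstack([seeds, np.ones((N, 1))])      # [P_F I_F; P_B I_B], Eq. (9)
    L = np.block([[K, P], [P.T, np.zeros((m + 1, m + 1))]])
    Y = np.concatenate([labels, np.zeros(m + 1)])
    w = np.linalg.solve(L, Y)
    return w[:N], w[N:N + m], w[N + m]           # c, a, b

def classify(x, seeds, c, a, b, R2=1.0):
    """Label rule of Eq. (4): foreground iff d_f(x) >= 0."""
    g = np.sqrt(np.sum((seeds - x) ** 2, axis=1) + R2)
    return c @ g + a @ x + b >= 0
```

Because (1) interpolates the labels exactly at the seed points, `classify` reproduces the user's labels at every seed and smoothly extends them to the remaining pixels.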

III. Experimental Results

The experiments were performed using high resolution epifluorescence images of mice dura mater acquired using a video microscopy system (Laborlux 8 microscope from Leitz Wetzlar, Germany) equipped with a 75 watt xenon lamp and a QICAM high performance digital CCD camera (Quantitative Imaging Corporation, Burnaby, Canada) at 0.56 micron per pixel resolution. Our dataset consists of 10 wild-type (WT) and 10 knock-out (KO) mice dura mater epifluorescence images stained using Alexa Fluor 488-conjugated soybean agglutinin (SBA lectin). We collected manually drawn ground-truth (GT) with supervision from an expert physiologist for benchmarking the segmentation methods. Note that the ground truth for these images reported in Table I has since been updated. The overall biological motivation is to quantify the role of estrogen receptors (ERβ) in microvasculature remodeling in WT and KO mice.

Table I.

Comparison of different segmentation methods on 20 (10 knock-out (012706-ERbKO) + 10 wild-type (012606-ERbWT)) mice dura mater epifluorescence images. We show the Dice values for our proposed multiquadric spline interactive segmentation method, compared with automatic thresholding as well as with different interactive segmentation approaches. Higher Dice values indicate better results compared to manual gold-standard ground-truth segmentations.

No. Type Niblack [11] Otsu Random Walker [10] Graph Cut [9] MQ
1 KO-01 0.6324 0.3038 0.2177 0.7301 0.7092
2 KO-03 0.5456 0.6813 0.3404 0.8528 0.8031
3 KO-04 0.6196 0.6282 0.3176 0.8023 0.8436
4 KO-05 0.7970 0.5780 0.5145 0.7097 0.8367
5 KO-06 0.7452 0.5999 0.6331 0.8080 0.8641
6 KO-08 0.6875 0.6406 0.5322 0.8154 0.8557
7 KO-09 0.4363 0.4907 0.2085 0.6018 0.7972
8 KO-10 0.5736 0.6714 0.5293 0.7259 0.7497
9 KO-15 0.7909 0.6238 0.5721 0.7660 0.8587
10 KO-19 0.7882 0.6640 0.6388 0.7128 0.7261

11 WT-01 0.6665 0.4903 0.6063 0.7582 0.7647
12 WT-03 0.4095 0.4410 0.3824 0.6766 0.7375
13 WT-04 0.5726 0.4974 0.3623 0.6818 0.7381
14 WT-05 0.7589 0.6099 0.5844 0.8126 0.8637
15 WT-06 0.8467 0.6673 0.6343 0.8520 0.8466
16 WT-07 0.8296 0.6348 0.7835 0.9448 0.9224
17 WT-08 0.7436 0.6108 0.7746 0.8441 0.8555
18 WT-09 0.7116 0.5457 0.4694 0.7010 0.7869
19 WT-12 0.6774 0.5689 0.8506 0.7540 0.8411
20 WT-14 0.8772 0.6520 0.6915 0.8094 0.7979

Average 0.6855 0.5800 0.5328 0.7680 0.8099

Figure 2(a,f,k) shows three example epifluorescence images of mice dura mater (2 WT, 1 KO) vasculature networks used in the experiment along with a sparse set of user provided seed points sampling the foreground (yellow) and background (red). Figure 2(b,g,l) shows the gold standard GT binary image (foreground vessels in white and background in black). We show a comparison of our proposed multiquadric (MQ) spline-based semi-supervised interpolation (Figure 2(c,h,m)) with interactive segmentation methods based on Random Walker (RW) [10] (Figure 2(d,i,n)) and Graph-Cut (GC) [9] (Figure 2(e,j,o)). As can be seen, our method obtains good segmentations overall. The Random Walker interactive segmentation method fails to capture branching vessels and obtains spurious foreground regions. The Graph-Cut based approach fails to segment diffuse vessel boundaries, in particular venules where the vessels have very low contrast. Our approach obtained good segmentation results while requiring very few seed points for foreground vessels. The last column shows an image where all three methods produce low Dice scores, though our approach still achieves better segmentation.

Fig. 2.


Comparison of semi-automated interactive segmentation results shown for three sample microvasculature images. From left to right: input images (012606-ERbWT-07, 012706-ERbKO-06, 012606-ERbWT-04) with user-selected foreground (fg) and background (bg) seeds, ground truth mask, multiquadric MQ (proposed method), Random Walker [10] RW, Graph-Cut [9] GC. No pre- or post-processing was performed to generate these segmentation results. Our proposed MQ method obtains better segmentations without spurious foreground segments, and the Dice values indicate that our results are closer to the GT.

To evaluate the performance of different segmentation methods quantitatively, we use the Dice similarity coefficient,

$$\mathrm{Dice}(P, Q) = \frac{2\,|P \cap Q|}{|P| + |Q|},$$

where P and Q are the binary masks of the automatic and ground-truth (GT) segmentations. Values closer to one indicate better performance in terms of the physiologist-verified gold standard. Table I shows the Dice values for 10 wild-type and 10 knock-out mice dura mater epifluorescence images. Our method in general outperforms related interactive segmentation methods as well as some well-known automatic thresholding methods [11] from the literature. Note that we applied no pre- or post-processing to generate these segmentation results.
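For binary masks, the Dice coefficient above reduces to a few lines (a sketch with our own names):

```python
import numpy as np

def dice(pred, gt):
    """Dice(P, Q) = 2 |P intersect Q| / (|P| + |Q|) for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    overlap = np.logical_and(pred, gt).sum()
    total = pred.sum() + gt.sum()
    return 2.0 * overlap / total if total else 1.0  # two empty masks: define as 1
```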

IV. Conclusions

In this paper, we considered a seed point-based interactive image segmentation method for microvasculature network extraction from epifluorescence imagery. Using a binary classification function based on multiquadric splines, our method was able to obtain reliable segmentations using only sparse seed points from the user. Seed points are much easier and more intuitive for the user to provide. For vessel segmentation in particular, seed points are clearly superior to other interactive inputs, avoiding the need to trace difficult-to-discern boundaries, draw very thin scribbles, or outline many bounding boxes. Experimental results on a set of mice dura mater epifluorescence images indicate that our results are better than those of other fully automatic or interactive segmentation methods. We are currently integrating our semi-automated interactive vessel segmentation method both for more rapid ground truth generation and within an automatic image analytics framework for understanding the influence of physiological processes on microvasculature morphology.

Acknowledgments

This research was supported in part by the Award #1I01BX000609 from the Biomedical Laboratory Research & Development Service of the VA Office of Research and Development (VVG), the National Cancer Institute of the National Institutes of Health Award #R01CA160461 (VVG) and #R33EB00573 (KP). Mahidol-Oxford Tropical Medicine Research Unit is funded by the Wellcome Trust of Great Britain.

References

1. Falcão AX, Udupa JK, Samarasekera S, Sharma S, Hirsch BE, Lotufo R. User-steered image segmentation paradigms: Live wire and live lane. Graphical Models and Image Processing. 1998;60(4):233–260.
2. Mortensen EN, Barrett WA. Interactive segmentation with intelligent scissors. Graphical Models and Image Processing. 1998;60(5):349–384.
3. Perez-Rovira A, MacGillivray T, Trucco E, Chin KS, Zutis K, Lupascu C, Tegolo D, Giachetti A, Wilson PJ, Doney A, Dhillon B. VAMPIRE: Vessel assessment and measurement platform for images of the retina. IEEE EMBC. 2011:3391–3394. doi: 10.1109/IEMBS.2011.6090918.
4. Turetken E, Benmansour F, Andres B, Glowacki P, Pfister H, Fua P. Reconstructing curvilinear networks using path classifiers and integer programming. IEEE Trans Pattern Analysis and Machine Intelligence. 2016. doi: 10.1109/TPAMI.2016.2519025.
5. Meena S, Prasath VBS, Palaniappan K, Seetharaman G. Elastic body spline based image segmentation. IEEE Int. Conf. on Image Processing (ICIP); 2014; pp. 4378–4382.
6. Meena S, Palaniappan K, Seetharaman G. Interactive image segmentation using elastic interpolation. IEEE Int. Symposium on Multimedia (ISM); 2015; pp. 307–310.
7. Meena S, Palaniappan K, Seetharaman G. User driven sparse point based image segmentation. IEEE Int. Conf. on Image Processing (ICIP); 2016.
8. Kirbas C, Quek F. A review of vessel extraction techniques and algorithms. ACM Computing Surveys. 2004;36(2):81–121.
9. Boykov Y, Jolly M-P. Interactive graph cuts for optimal boundary & region segmentation of objects in N-D images. IEEE International Conference on Computer Vision; 2001; pp. 105–112.
10. Grady L. Random walks for image segmentation. IEEE Trans on Pattern Analysis and Machine Intelligence. 2006;28(11):1768–1783. doi: 10.1109/TPAMI.2006.233.
11. Prasath VBS, Bunyak F, Haddad O, Glinskii O, Glinskii V, Huxley V, Palaniappan K. Robust filtering based segmentation and analysis of dura mater vessels using epifluorescence microscopy. 35th IEEE EMBC. 2013:6055–6058.
12. Pelapur R, Prasath VBS, Bunyak F, Glinskii OV, Glinsky VV, Huxley VH, Palaniappan K. Multi-focus image fusion on epifluorescence microscopy for robust vascular segmentation. 36th IEEE EMBC. 2014:4735–4738. doi: 10.1109/EMBC.2014.6944682.
13. Prasath VBS, Pelapur R, Glinskii OV, Glinsky VV, Huxley VH, Palaniappan K. Multi-scale tensor anisotropic filtering of fluorescence microscopy for denoising microvasculature. IEEE International Symposium on Biomedical Imaging (ISBI); 2015; pp. 540–543.
14. Palaniappan K, Uhlmann J, Li D. Extensor based image interpolation. IEEE Int Conf Image Processing (ICIP). 2003;2:945–948.
