Abstract
In this paper, we present a feature/detail preserving color image segmentation framework using Hamiltonian quaternions. First, we introduce a novel Quaternionic Gabor Filter (QGF) which can combine the color channels and the orientations in the image plane. Using the QGFs, we extract the local orientation information in the color images. Second, in order to model this derived orientation information, we propose a continuous mixture of appropriate hypercomplex exponential basis functions. We derive a closed form solution for this continuous mixture model. This analytic solution is in the form of a spatially varying kernel which, when convolved with the signed distance function of an evolving contour (placed in the color image), yields a detail preserving segmentation.
1 Introduction
A major turning point in mathematics, specifically in algebra, was the birth of noncommutative algebra via Hamilton's discovery of quaternions. This discovery was the precursor to new kinds of algebraic structures and has had an impact on various areas of mathematics and physics, including group theory, topology and quantum mechanics. More recently, quaternions have found use in computer graphics [1], navigation systems [2] and coding theory [3]. In computer graphics, the quaternion representation of orientations enables computationally efficient and mathematically robust applications (for instance, it avoids the gimbal lock of the Euler angle representation). In image processing, quaternions have been used to represent color images [4,5]. This representation, together with the extension of the Fourier transform to hypercomplex numbers, has led to applications in color sensitive filtering [6], edge detection in color images [7,8] and cross correlation of color images [9]. The first definition of a hypercomplex Fourier transform was reported by Delsuc [10] in nuclear magnetic resonance. Later, different definitions of the quaternionic Fourier transform (QFT) were introduced independently in [11] and [12]. Based on their definition of the QFT, Bülow and Sommer generalized the concept of the analytic signal to two dimensions and introduced quaternionic Gabor filters for use with scalar images [13]. They extended the Gabor filter by using the two quaternionic units i and j in place of the single imaginary unit i in the definition of the complex Gabor filter. However, they did not apply it to color images, since their definition of the QFT associates the imaginary units i and j with the local orientations in the image plane, which bear no relationship to the color channels of a color image. Therefore, we follow an alternative definition of the QFT proposed in [14], which uses simple formulae involving Fourier transforms of complex-valued signals that can be computed efficiently.
The use of this alternative QFT allows us to introduce a novel definition of Quaternionic Gabor Filters that can be used to extract features from color images without conflicting interpretations being assigned to the hypercomplex units. An additional key contribution of the work presented here is that we propose to model the derived orientation information (at a pixel) using a continuous mixture of exponential basis functions. Continuous mixture models have been presented in various contexts [15,16,17,18,19]. In this paper, we propose a continuous mixture model where the mixing density is a Bingham density on the 3-sphere $S^3$. To solve the continuous mixture integral in closed form, we rewrite it using the matrix Fisher distribution on the manifold of the special orthogonal group SO(3). We use this closed form solution to construct a spatially varying kernel for feature preserving segmentation of color images.
Color image segmentation is a relatively nascent area in computer vision, and its literature is not as extensive as that on gray-valued image segmentation. The key issue in color image segmentation is how to couple the information contained in the given color (red, green and blue) channels. Some published methods directly apply existing gray-level segmentation methods to each channel of a color image and then combine the results in some way to obtain a final segmentation. Chan et al. extend the Chan-Vese algorithm for scalar-valued images to the vector-valued case [20]. In their work, in addition to the Mumford-Shah functional over the length of the contour, the minimization involves the sum of the fitting errors over the color components. In the color snakes model, Sapiro extends the geodesic active contour model to color images based on the idea of evolving the contour with a coupling term derived from the eigenvalues of the Riemannian metric of the underlying manifold [21].
In this paper, we adopt the quaternion framework for representing color images since it offers scope to process color images holistically, rather than as separate color space components, and thereby handles the coupling between the color channels. Moreover, the trichromatic theory of human color vision suggests vector mathematics as a natural tool for analyzing color images. For a detailed discussion of and motivation for the quaternion representation of color images, we refer the reader to [7,9]. The key innovation of our work is a holistic approach to color image segmentation that uses a quaternion framework to extract the local orientation and models the derived information using a continuous mixture in the unit quaternion space. The proposed segmentation kernel does not use any prior information, and yet yields high quality results. Another contribution of this paper is a quaternionic Gabor filter for use with color images. We present our experimental results on images drawn from the Berkeley Segmentation Data Set [22], along with F-measure plots for quantitative validation. We also compare our method with the mean shift algorithm of [23].
The remainder of this paper is structured as follows: We briefly describe the quaternion algebra and quaternion Fourier transform – needed for defining the QGF – in Section 2. We also develop a novel definition for QGFs in this section. In Section 3, we introduce the continuous mixture model for quantifying the derived orientation information. Then, in Section 4, we present the experimental results along with the quantitative evaluation depicting the merits of the proposed approach. Lastly, in Section 5 we summarize our contributions.
2 Local Orientation Analysis Using QGFs
2.1 Quaternions
In this section, we present background material on quaternions and the associated algebra which will be used in developing the local orientation analysis using QGFs.
Higher-dimensional complex numbers are called hypercomplex numbers and are defined as

$$ q = q_0 + q_1 i_1 + q_2 i_2 + \cdots + q_N i_N \qquad (1) $$
where $i_k$ is orthonormal to $i_l$ for $k \neq l$ in an $N+1$ dimensional space. The Hamiltonian quaternions $\mathbb{H}$ form a unitary $\mathbb{R}$-algebra; the basic algebraic form of a quaternion is:

$$ q = q_0 + q_1 i + q_2 j + q_3 k \qquad (2) $$

where $q_0, q_1, q_2, q_3 \in \mathbb{R}$, the field of real numbers, and $i, j, k$ are three imaginary units. $\mathbb{H}$ can be regarded as a 4-dimensional vector space over $\mathbb{R}$ with the natural definition of addition and scalar multiplication. The set $\{1, i, j, k\}$ is a natural basis for this vector space. $\mathbb{H}$ is made into a ring by the usual distributive law together with the following multiplication rules:
$$ i^2 = j^2 = k^2 = ijk = -1, \qquad ij = -ji = k, \qquad jk = -kj = i, \qquad ki = -ik = j \qquad (3) $$
If we denote the scalar and vector parts of a quaternion q by Sq and Vq respectively, the product of two quaternions q and p can be written as

$$ qp = S_q S_p - V_q \cdot V_p + S_q V_p + S_p V_q + V_q \times V_p \qquad (4) $$
where the · and × indicate the vector dot and cross products respectively. The conjugate of a quaternion, denoted by *, simply negates the vector part, q* = q0 − q1i − q2j − q3k. The norm of a quaternion $q$ is $|q| = \sqrt{qq^*} = \sqrt{q_0^2 + q_1^2 + q_2^2 + q_3^2}$. A quaternion with unit norm is called a unit quaternion. Hamilton called a quaternion with zero scalar part a pure quaternion. We can give an inner product structure to $\mathbb{H}$ if we define:

$$ \langle q, p \rangle = S(q p^{*}) = q_0 p_0 + q_1 p_1 + q_2 p_2 + q_3 p_3 \qquad (5) $$
Using the inner product, the angle α between two quaternions can be defined as:
$$ \cos\alpha = \frac{\langle q, p \rangle}{|q|\,|p|} \qquad (6) $$
Any quaternion can be written in polar form
$$ q = |q|\, e^{\mu\theta} = |q| (\cos\theta + \mu \sin\theta) \qquad (7) $$
where μ is a unit pure quaternion.
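As a minimal illustration of the algebra above, the multiplication rules (3), the scalar/vector product form (4), the conjugate and the norm can be sketched in a few lines (the helper names `qmult`, `qconj`, `qnorm` are our own):

```python
import numpy as np

def qmult(q, p):
    """Hamilton product of quaternions q = (q0, Vq), p = (p0, Vp) in the
    scalar/vector form of Eq. (4):
    qp = q0*p0 - Vq.Vp + q0*Vp + p0*Vq + Vq x Vp."""
    q0, v = q[0], np.asarray(q[1:])
    p0, u = p[0], np.asarray(p[1:])
    scalar = q0 * p0 - np.dot(v, u)
    vector = q0 * u + p0 * v + np.cross(v, u)
    return np.concatenate([[scalar], vector])

def qconj(q):
    """Conjugate q* = q0 - q1 i - q2 j - q3 k."""
    return np.array([q[0], -q[1], -q[2], -q[3]])

def qnorm(q):
    """Norm |q| = sqrt(q0^2 + q1^2 + q2^2 + q3^2)."""
    return np.sqrt(np.dot(q, q))
```

For instance, `qmult` reproduces the rule ij = k, and the product q q* returns |q|² as a pure scalar.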
Quaternion representation of color image pixels has been proposed independently in [4,5]. They encode the color value of each pixel in a pure quaternion. For example, a pixel value at location (n, m) in an RGB image can be given as a quaternion-valued function f(n, m) = R(n, m)i + G(n, m)j + B(n, m)k where R, G and B denote the red, green and blue components of each pixel respectively. This 3-component vector representation yields a system which has well-defined and well-behaved mathematical operations to apply on color images holistically.
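The pixel encoding described above can be written directly as an array operation (a sketch; `rgb_to_quaternion` is an illustrative helper of ours, not from the cited works):

```python
import numpy as np

def rgb_to_quaternion(img):
    """Encode an H x W x 3 RGB image as an H x W x 4 array of pure
    quaternions f(n, m) = R i + G j + B k (zero scalar part)."""
    h, w, _ = img.shape
    q = np.zeros((h, w, 4))
    q[..., 1:] = img          # R -> i, G -> j, B -> k components
    return q
```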
2.2 Quaternionic Gabor Filters
In order to develop complex Gabor filters in higher-dimensional algebras, we first need to analyze the corresponding generalization of the Fourier transform. The very first definition of a hypercomplex Fourier transform was due to Delsuc [10]. Later, Ell [11] and Bülow [12] independently introduced quaternion Fourier transforms, defined respectively as

$$ F(\omega, \nu) = \int\!\!\int e^{-i\omega x}\, f(x, y)\, e^{-j\nu y}\, dx\, dy \qquad (8) $$

$$ F^{q}(u, v) = \int\!\!\int e^{-i 2\pi u x}\, f(x, y)\, e^{-j 2\pi v y}\, dx\, dy \qquad (9) $$
In [14], another definition of the QFT was proposed, motivated by simple generalizations of the standard complex operational formulae for convolution in color images:

$$ F(u, v) = \int\!\!\int e^{-\mu 2\pi (u x + v y)}\, f(x, y)\, dx\, dy \qquad (10) $$
where μ is a unit pure quaternion. For color images in RGB space, μ is chosen as $\mu = (i + j + k)/\sqrt{3}$ (note that both the luminance and the chromaticity information are still preserved; this is still full color image processing, not grayscale processing).
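For illustration, the transform in (10) can be evaluated naively by expanding the quaternion exponential as e^{−μθ} = cos θ − μ sin θ and accumulating quaternion products. This O((MN)²) sketch is our own and is meant only to make the definition concrete; efficient implementations, as in [14], reduce the computation to standard complex FFTs:

```python
import numpy as np

MU = np.array([0., 1., 1., 1.]) / np.sqrt(3.0)   # axis mu = (i + j + k)/sqrt(3)

def qmul(q, p):
    """Hamilton product for arrays of quaternions with a trailing axis of 4."""
    q0, q1, q2, q3 = np.moveaxis(q, -1, 0)
    p0, p1, p2, p3 = np.moveaxis(p, -1, 0)
    return np.stack([
        q0*p0 - q1*p1 - q2*p2 - q3*p3,
        q0*p1 + q1*p0 + q2*p3 - q3*p2,
        q0*p2 - q1*p3 + q2*p0 + q3*p1,
        q0*p3 + q1*p2 - q2*p1 + q3*p0,
    ], axis=-1)

def qft(f):
    """Naive left-sided discrete QFT of an M x N x 4 quaternion image:
    F(u, v) = sum_{x,y} exp(-mu 2 pi (u x / M + v y / N)) f(x, y),
    with exp(-mu t) = cos t - mu sin t.  For illustration only."""
    M, N, _ = f.shape
    F = np.zeros_like(f)
    one = np.array([1., 0., 0., 0.])
    for u in range(M):
        for v in range(N):
            for x in range(M):
                for y in range(N):
                    t = 2*np.pi*(u*x/M + v*y/N)
                    e = np.cos(t)*one - np.sin(t)*MU
                    F[u, v] += qmul(e, f[x, y])
    return F
```

For a constant image, all energy concentrates in the zero-frequency coefficient, exactly as with the complex DFT.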
Following the QFT definition above, we introduce a novel Quaternionic Gabor Filter.
Definition 1 (Quaternionic Gabor Filter)
The impulse response of a quaternionic Gabor filter is a Gaussian modulated with the basis functions of the QFT:
$$ h(x, y) = g(x, y)\, e^{\mu 2\pi (u_0 x + v_0 y)} \qquad (11) $$

where $g(x, y) = N \exp\!\big(-(x^2 + \lambda^2 y^2)/(2\sigma^2)\big)$ is a Gaussian window, with $N$ being the normalization constant and $\lambda$ the aspect ratio.
The center frequency of the QGF is $\sqrt{u_0^2 + v_0^2}$ and its orientation is $\theta = \arctan(v_0/u_0)$.
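The impulse response can be sampled on a discrete grid as follows. This sketch assumes the Gaussian window stated above; the `qgf` helper and its unit-sum normalization are illustrative choices of ours (pass an odd grid size):

```python
import numpy as np

def qgf(ksize, sigma, u0, v0, lam=1.0):
    """Sample the QGF impulse response h(x,y) = g(x,y) exp(mu 2 pi (u0 x + v0 y))
    on a ksize x ksize grid (ksize odd), with g a Gaussian window of aspect
    ratio lam and mu = (i+j+k)/sqrt(3).  Returns a ksize x ksize x 4 array."""
    mu = np.array([0., 1., 1., 1.]) / np.sqrt(3.0)
    half = ksize // 2
    y, x = np.mgrid[-half:half+1, -half:half+1]
    g = np.exp(-(x**2 + (lam*y)**2) / (2.0*sigma**2))
    g /= g.sum()                       # normalization constant N (one choice)
    t = 2*np.pi*(u0*x + v0*y)
    h = np.zeros(x.shape + (4,))
    h[..., 0] = g*np.cos(t)            # scalar part
    h[..., 1:] = (g*np.sin(t))[..., None] * mu[1:]   # vector part along mu
    return h
```

Since |μ| = 1, the per-pixel magnitude of h equals the Gaussian envelope g regardless of the center frequency.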
For an application of QGFs, consider Fig. 1. If we apply a horizontally oriented QGF to an image, we obtain high responses wherever there are horizontally oriented features. Fig. 1 illustrates the magnitude response of such a horizontally oriented QGF convolved with an image in quaternion form. Quaternion convolution can equivalently be performed using the QFT. Note that all the calculations follow the rules of quaternion algebra.
Fig. 1. Quaternion convolution of a QGF (with an orientation of π) with a color image from the Berkeley Data Set ([22])
In an image, it is possible to have a color contrast without a luminance contrast. In a black-and-white version of such an image, two differently colored objects appear blended into one. In Fig. 2, we demonstrate that the proposed Quaternionic Gabor Filters can extract local orientation information from a constant-luminance image as well. Fig. 2a shows a synthetic color image where all pixels have the same luminance value, but the chromaticity inside the object differs from the chromaticity outside. The luminance channel confirms that all pixels have the same value (see Fig. 2b). We applied 10 QGFs to the quaternion representation of this color image; the sum of their magnitude responses is shown in Fig. 2d. Although the black-and-white version (Fig. 2c) of the input image is a uniform gray without any orientation changes, the proposed QGFs successfully derive the orientation information in the color version, showing that they are well suited for analyzing color images and that the computation does not reduce to grayscale processing.
Fig. 2.
Application of Quaternionic Gabor Filters across equal luminance: (a) a synthetic color image where the object and the background are of equal luminance, (b) luminance channel, (c) a grayscale version of (a), (d) the sum of the magnitude responses of QGFs applied to the color image in (a)
We have chosen the unit pure quaternion direction μ in the QGF as $\mu = (i + j + k)/\sqrt{3}$. However, this choice does not mean that the proposed quaternion framework merely processes the sum of the RGB values. Note also that the convolution between a QGF and the quaternion representation of a color image is performed following the rules of quaternion algebra: at each pixel, the quaternion-valued filter is multiplied with the color direction of that pixel through a quaternion product. Hence, the QGF handles the coupling between the channels while, at the same time, processing all the information in a color image. Fig. 3a shows a color image where (R + G + B)/3 is the same for all pixels. As shown in Fig. 3c, the proposed framework accurately extracts the orientation information.
Fig. 3. (a) A synthetic color image where (R + G + B)/3 is the same everywhere. (b) Grayscale image which shows the (R + G + B)/3 values. (c) The sum of the magnitude responses of QGFs applied to the color image in (a).
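The quaternion convolution and magnitude response underlying Figs. 1-3 can be sketched in the spatial domain as follows (a brute-force illustrative sketch; helper names are ours, and we left-multiply by the filter since quaternion products do not commute and the side is a modeling choice):

```python
import numpy as np

def qmul(q, p):
    """Hamilton product for arrays of quaternions with a trailing axis of 4."""
    q0, q1, q2, q3 = np.moveaxis(q, -1, 0)
    p0, p1, p2, p3 = np.moveaxis(p, -1, 0)
    return np.stack([
        q0*p0 - q1*p1 - q2*p2 - q3*p3,
        q0*p1 + q1*p0 + q2*p3 - q3*p2,
        q0*p2 - q1*p3 + q2*p0 + q3*p1,
        q0*p3 + q1*p2 - q2*p1 + q3*p0,
    ], axis=-1)

def quaternion_convolve(img_q, filt_q):
    """Left quaternion convolution of an H x W x 4 quaternion image with a
    kh x kw x 4 quaternion filter; returns the H x W magnitude response."""
    H, W, _ = img_q.shape
    kh, kw, _ = filt_q.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img_q, ((ph, ph), (pw, pw), (0, 0)), mode='edge')
    out = np.zeros((H, W, 4))
    for i in range(H):
        for j in range(W):
            patch = padded[i:i+kh, j:j+kw]
            out[i, j] = qmul(filt_q, patch).sum(axis=(0, 1))
    return np.linalg.norm(out, axis=-1)
```

With an identity-quaternion delta filter, the magnitude response is simply the pixel-wise quaternion norm, a quick sanity check.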
In the next section, the magnitudes of the quaternion-valued filter responses are modeled using a continuous mixture of quaternionic exponential basis functions. We derive a closed form solution for this integral and use it to construct a spatially varying convolution kernel for detail preserving color image segmentation.
3 A Continuous Mixture Model on the Unit Quaternion Space
In the previous section, we introduced the QGFs to extract the local orientation information in a color image. The resulting responses over a sphere of directions are modeled in this section in a probabilistic framework. This framework is powerful and allows one to capture the complicated local geometries present in the image data and incorporate them into spatially varying segmentation kernels. We postulate that at each lattice point there is an underlying probability measure induced on the manifold of the unit quaternions.
The space of unit quaternions

$$ \mathbb{H}_1 = \{\, q \in \mathbb{H} : |q| = 1 \,\} \qquad (12) $$

is the 3-sphere $S^3$ in $\mathbb{R}^4$; it forms a group under multiplication and preserves the Hermitian inner product. An appropriate choice for the kernel functions is $\exp(-\cos(d(q, p)))$, where $d(q, p) = 2\cos^{-1}(S(q^{*}p))$ is the length of the shortest geodesic between the quaternions $q$ and $p$; it is also known as the angle-of-rotation metric for quaternions. Thus the proposed model is given by
$$ S(p) = S_0 \int_{\mathbb{H}_1} \exp(-\cos(d(q, p)))\, dF(q) \qquad (13) $$

where $dF := f(q)\,dq$ denotes the underlying probability measure with respect to the uniform distribution $dq$ on $\mathbb{H}_1$, and $S_0$ is the maximal value among all responses at an image location. In order to avoid an ill-posed inverse problem, namely recovering a distribution defined on the manifold of unit quaternions from the measurements $S(p)$, we impose a mixture of Bingham distributions on $q$ as a prior. The manifold of unit quaternions double-covers SO(3). This double cover can be interpreted as antipodal symmetry; thus, the Bingham distribution is a natural choice for quaternion priors. For statistical purposes, the Bingham distribution is characterized as the hyperspherical analogue of the $n$-variate normal distribution; essentially, it can be obtained by intersecting a zero-mean normal density with the unit sphere in $\mathbb{R}^4$. Let $q$ be a 4-dimensional random unsigned unit direction. Then $q$ has the Bingham density [24] given by
$$ f(q; A, L) = \frac{1}{{}_{1}F_{1}\!\left(\tfrac{1}{2}; 2; L\right)}\, \exp\!\left(q^{T} A\, L\, A^{T} q\right) \qquad (14) $$

where $A$ is a $4 \times 4$ rotation matrix, $L$ is a diagonal matrix of concentration values (which determine the amount of clustering around the mean directions), and ${}_1F_1$ is a confluent hypergeometric function of matrix argument as defined in [25]. Using the relationship between $\mathbb{H}_1$ and SO(3), Prentice [24] has shown that $q$ has a Bingham density if and only if the corresponding rotation matrix $Q \in$ SO(3) has a matrix Fisher distribution. A random $3 \times 3$ rotation matrix $Q$ is said to have a matrix Fisher distribution if it has the following pdf:
$$ f(Q; F) = \frac{1}{{}_{0}F_{1}\!\left(\tfrac{3}{2}; \tfrac{1}{4} F^{T} F\right)}\, \exp\!\left(\mathrm{tr}(F^{T} Q)\right) \qquad (15) $$

Here $F$ is a $3 \times 3$ parameter matrix which encapsulates the concentration values and orientations, and ${}_0F_1$ is a hypergeometric function of matrix argument, which can be evaluated using zonal polynomials. Using the distance on the manifold SO(3), the proposed model can equivalently be written on SO(3) instead of $\mathbb{H}_1$ as:
$$ S(P) = S_0 \int_{SO(3)} \exp\!\left(\mathrm{tr}(P^{T} Q)\right) dF(Q) \qquad (16) $$
where P is the rotation matrix corresponding to the orientation of the QGF, and
$$ f(Q) = \sum_{i=1}^{N} w_i\, f(Q; F_i), \qquad \sum_{i=1}^{N} w_i = 1 \qquad (17) $$
is a discrete mixture of matrix Fisher densities over the rotation matrix $Q$ with respect to the uniform distribution on SO(3). We change the prior to this mixture of matrix Fisher densities because a single matrix Fisher density is unimodal and cannot handle orientational heterogeneity. Note, however, that the model in (16) is still a continuous mixture model; $N$ here corresponds to the resolution of the SO(3) discretization, not to the number of dominant local orientations. We observed that the kernel of the matrix Fisher distribution can be utilized to derive a closed form solution for the right-hand side, leading to:
$$ S(P) = S_0 \sum_{i=1}^{N} w_i\, \frac{{}_{0}F_{1}\!\left(\tfrac{3}{2}; \tfrac{1}{4}(F_i + P)^{T}(F_i + P)\right)}{{}_{0}F_{1}\!\left(\tfrac{3}{2}; \tfrac{1}{4} F_i^{T} F_i\right)} \qquad (18) $$
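Evaluating ${}_0F_1$ of matrix argument via zonal polynomials is involved. For quick experiments, the normalizing constant can instead be estimated by Monte Carlo, using the fact that ${}_0F_1(\tfrac{3}{2}; \tfrac{1}{4}F^{T}F)$ equals the expectation of $\exp(\mathrm{tr}(F^{T}Q))$ under the uniform (Haar) distribution on SO(3), and that normalized Gaussian 4-vectors yield uniform rotations. This shortcut is our own illustration, not the evaluation used in the paper:

```python
import numpy as np

def quat_to_rot(q):
    """Rotation matrix in SO(3) corresponding to a unit quaternion q."""
    a, b, c, d = q
    return np.array([
        [a*a+b*b-c*c-d*d, 2*(b*c-a*d),     2*(b*d+a*c)],
        [2*(b*c+a*d),     a*a-b*b+c*c-d*d, 2*(c*d-a*b)],
        [2*(b*d-a*c),     2*(c*d+a*b),     a*a-b*b-c*c+d*d],
    ])

def hyp0f1_mc(F, n=20000, seed=0):
    """Monte Carlo estimate of 0F1(3/2; F^T F / 4) as the mean of
    exp(tr(F^T Q)) over Haar-uniform Q in SO(3), sampled via random
    unit quaternions (normalized 4-D Gaussians)."""
    rng = np.random.default_rng(seed)
    qs = rng.normal(size=(n, 4))
    qs /= np.linalg.norm(qs, axis=1, keepdims=True)
    vals = [np.exp(np.trace(F.T @ quat_to_rot(q))) for q in qs]
    return float(np.mean(vals))
```

Each entry of the ratio in (18) can then be approximated as `hyp0f1_mc(F + P) / hyp0f1_mc(F)` for small experiments.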
We can formulate the computation of this analytic form as the solution to a linear system $Aw = y$, where $y \in \mathbb{R}^{M}$ contains the normalized measurements obtained by applying $M$ QGFs to the color image, $A$ is an $M \times N$ matrix with

$$ A_{ij} = \frac{{}_{0}F_{1}\!\left(\tfrac{3}{2}; \tfrac{1}{4}(F_j + P_i)^{T}(F_j + P_i)\right)}{{}_{0}F_{1}\!\left(\tfrac{3}{2}; \tfrac{1}{4} F_j^{T} F_j\right)} \qquad (19) $$

and $w = (w_i)$ is the unknown weight vector. The weights of the mixture can be found using a sparse deconvolution technique: a non-negative least squares (NNLS) minimization, which yields an accurate and sparse solution. A sparse solution is what is expected at each image lattice point, since local image geometry does not have a large number of edges meeting at a junction. Once $w$ is estimated from the given data at each lattice point, we can construct the convolution kernel for color image segmentation. We represent an evolving curve $C$ (in a curve evolution framework) by the zero level set of a Lipschitz continuous function $\phi : \Omega \to \mathbb{R}$, so that $C = \{(x, y) \in \Omega : \phi(x, y) = 0\}$. We choose $\phi$ to be negative inside $C$ and positive outside. $C$ is evolved using the following update equation:

$$ \phi^{t+1}(\mathbf{x}) = \int_{\Omega} Q(\mathbf{x} - \mathbf{y})\, \phi^{t}(\mathbf{y})\, d\mathbf{y} \qquad (20) $$

where $Q(\mathbf{x})$ is the convolution kernel obtained from (18) by setting the matrix $P$ to the rotation matrix corresponding to the angle that the coordinate vector $\mathbf{x}$ makes with the x-axis. Note that this formulation yields a spatially varying convolution kernel, since the $w$ vector is estimated at each lattice point in the image.
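At each lattice point, the weight recovery step can be sketched with SciPy's NNLS solver on a synthetic stand-in system (the random nonnegative matrix below merely plays the role of the ${}_0F_1$-ratio matrix in (19); all names here are illustrative):

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical small example: recover sparse nonnegative weights w from
# measurements y = A w, mimicking the per-pixel deconvolution step.
rng = np.random.default_rng(1)
A = np.abs(rng.normal(size=(30, 10)))              # stand-in for the 0F1-ratio matrix
w_true = np.zeros(10)
w_true[[2, 7]] = [0.6, 0.4]                        # two dominant local orientations
y = A @ w_true                                     # simulated normalized QGF responses
w_hat, residual = nnls(A, y)                       # nonnegative least squares
```

Because the true weights are nonnegative and the system is overdetermined, NNLS recovers the sparse solution exactly here; on real filter responses the residual absorbs noise.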
4 Experiments and Comparisons
In this section, we present several experimental results of our approach (named QGmF: Quaternionic Gabors with matrix Fisher density) and compare its performance with a state-of-the-art segmentation technique: the mean shift algorithm presented in [23]. We compare against this algorithm since it is a general tool for feature space analysis. In the following experiments, for each algorithm we show the segmentations that yield the highest F-measure values.
In QGmF, we can adjust the level of detail that reveals itself in the output of the QGFs applied to the color images. To do this, we introduce a threshold parameter on the magnitude of the filter responses. A relatively low threshold results in a segmentation capturing low-contrast details at small scales; Fig. 4c illustrates such an example, where the threshold is set to 0.005. The mean shift algorithm achieves a reasonable result, as shown in Fig. 4b. However, uniform regions are not consistently preserved: the sky is mis-segmented, and boundaries divide regions that are actually single connected components, as can be seen between the clouds. Moreover, the barricade is mis-segmented together with the pavement. Fig. 4d shows a better segmentation using our QGmF method (note that the man riding the horse and the crowd are clearly segmented; note also the accurate localization of the boundary between the barricade and the pavement). Fig. 4e shows the pixels correctly labeled by QGmF as belonging to the segmentation boundary.
Fig. 4.
(a) Segmentation performed by a human subject (from the ground truth in the Berkeley Segmentation Data Set [22]). (b) Segmentation result of the mean shift algorithm. (c) Segmentation result of the QGmF method with a low threshold value of 0.005. (d) Segmentation result of the QGmF method with a threshold value of 0.02. (e) True positives (TP) map of (d) with respect to (a).
Another visual comparison is provided in Fig. 5. Since the mode detection calculations in the mean shift algorithm are governed by global bandwidth parameters, the algorithm tends to miss small-scale details in some places or to over-segment uniform regions (see the small areas on the starfish which are mis-segmented as part of the outer region in Fig. 5b). In contrast, QGmF maintains coherence within textured regions while preserving the small-scale details around the boundaries, as shown in Fig. 5d. Once again, a low threshold value results in over-segmentation (see Fig. 5c).
Fig. 5. (a) Human segmentation (from the ground truth in the Berkeley Segmentation Data Set). (b) Output of the mean shift algorithm. (c) Output of the QGmF method with a threshold of 0.005. (d) Output of the QGmF method with a threshold of 0.025.
In Fig. 6b, note the regions which have almost equal luminance but different chromaticity. Figs. 6c and 6d are both over-segmented; in contrast, Fig. 6e shows a high quality result which is very close to the human segmentation (compare Figs. 6e and 6a). In Fig. 7b, the mean shift algorithm mis-segments the heads of the astronauts, and the boundaries of the astronaut on the left are missed. As is visually evident, QGmF performs better than the competing method.
Fig. 6.
(a) Human segmentation (from the ground truth data). (b) Luminance channel of the color image. (c) Output of the QGmF method for the color image (QGF threshold = 0.005). (d) Output of the mean shift segmentation (e) Output of the QGmF method (QGF threshold = 0.025).
Fig. 7.
(a) Segmentation performed by a human subject (from the ground truth in the Berkeley Segmentation Data Set). (b) Output of the mean shift segmentation. (c) Output of the QGmF method.
In order to provide a quantitative evaluation of our approach, we present the highest F1-measure (or Dice's Coefficient) scores of our method and the competing method for the above images in Table 1. Furthermore, in Fig. 8 we present a sensitivity analysis using F1-measures on 100 color images (including the images above) drawn from the Berkeley Segmentation Data Set [22]. The F1-measure, commonly known as the F-measure, is the evenly weighted harmonic mean of the precision and recall scores. The human segmentations from the Berkeley Segmentation Data Set were used as the ground truth in the evaluation. The boundaries of two segmentations are matched by examining a neighborhood within a radius of ε = 2 pixels. For QGmF, we tested the effect of the threshold parameter on the QGF responses, for values in [0.005, 0.05]. For the mean shift algorithm, we tested the effect of the kernel bandwidth parameters: hs, the space bandwidth, and hr, the range bandwidth, which determine the resolution of the mode selection and the clustering. We tested three hs values: 7, 10 and 20. In each curve for the mean shift algorithm, the x-axis shows the hr values in [4, 20], arranged in ascending order from left to right. Experimentation showed that the F-measure scores of the mean shift algorithm change significantly with the bandwidth parameters, making it difficult to choose a parameter range that provides good results. For QGmF, we observed that a low threshold value results in over-segmentation, which appears in the curves as a low F-measure, whereas any desired level of detail can be achieved by tuning the threshold parameter. The scores of QGmF are consistently higher than those of the competing method.
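The boundary-matching F-measure described above can be sketched as follows (a brute-force illustrative implementation of ours, not the benchmark's own evaluation code):

```python
import numpy as np

def f_measure(pred, gt, eps=2):
    """Boundary F-measure: a boundary pixel in one map counts as matched if
    a boundary pixel of the other map lies within a radius-eps neighborhood.
    pred, gt: boolean boundary maps of equal shape."""
    def matched_fraction(a, b):
        pts = np.argwhere(a)
        bpts = np.argwhere(b)
        hits = 0
        for p in pts:
            if len(bpts) and np.min(np.linalg.norm(bpts - p, axis=1)) <= eps:
                hits += 1
        return hits / max(len(pts), 1)
    precision = matched_fraction(pred, gt)   # fraction of predicted pixels matched
    recall = matched_fraction(gt, pred)      # fraction of ground-truth pixels matched
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

A boundary shifted by one pixel still scores 1.0 under ε = 2, which is exactly the tolerance the benchmark matching provides.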
Table 1.
F1-measure (or Dice’s Coefficient) values
| Image | QGmF | MeanShift |
|---|---|---|
| Astronauts | 0.74 | 0.56 |
| Starfish | 0.81 | 0.52 |
| Parade | 0.76 | 0.65 |
| Buffalo | 0.86 | 0.67 |
Fig. 8. F-measure plots for the mean shift segmentation algorithm and the QGmF convolution-based kernel method. For QGmF, the x-axis shows the threshold parameter for the QGF responses, arranged in ascending order from left to right, while the y-axis shows the corresponding F-measure; the threshold varies within [0.005, 0.05]. For the mean shift algorithm, the corresponding values of the space bandwidth parameter (hs) are shown in the plot, and points along each curve correspond to variations of the range bandwidth parameter (hr) in [4, 20].
5 Conclusion
In this paper, we addressed the problem of feature/detail preserving segmentation of color images and presented a hypercomplex representation framework for capturing the complicated local geometry contained in a color image via a spatially varying convolution filter. We introduced a novel quaternionic Gabor filter to extract the local orientation information in color images. This information is then represented by a continuous mixture of hypercomplex exponential basis functions, where the mixing density is assumed to be a mixture of Bingham densities. This integral, when expressed using matrix Fisher densities on SO(3), can be solved in closed form, leading to the QGmF kernel. The same kernel, when iteratively applied to a signed distance function representation of an active contour, yields feature/detail preserving segmentations of color images. Our method does not use any prior information to perform segmentation, and yet delivers superior performance in comparison to a state-of-the-art method. We validated the proposed method experimentally on color images from the Berkeley Segmentation Data Set and showed that our model yields results very close to the segmentations performed by human subjects.
References
- 1.Shoemake K. Animating rotation with quaternion curves. SIGGRAPH Comput. Graph. 1985;19(3):245–254.
- 2.Kuipers JB. Quaternions and Rotation Sequences: A Primer with Applications to Orbits, Aerospace and Virtual Reality. Princeton University Press; Princeton: 2002.
- 3.Sethuraman BA, Rajan BS, Shashidhar V. Full-diversity, high-rate space-time block codes from division algebras. IEEE Trans. Inform. Theory. 2003;49:2596–2616.
- 4.Pei SC, Cheng CM. A novel block truncation coding of color images by using quaternion-moment-preserving principle. IEEE International Symposium on Circuits and Systems, ISCAS 1996; May 1996. pp. 684–687.
- 5.Sangwine S. Fourier transforms of colour images using quaternion or hypercomplex numbers. Electronics Lett. 1996;32(21):1979–1980.
- 6.Sangwine S, Ell T. Colour image filters based on hypercomplex convolution. IEE Proceedings Vision, Image and Signal Processing. 2000;147(2):89–93.
- 7.Ell T, Sangwine S. Hypercomplex Fourier transforms of color images. IEEE Transactions on Image Processing. 2007;16(1):22–35. doi: 10.1109/tip.2006.884955.
- 8.Sangwine S. Colour image edge detector based on quaternion convolution. Electronics Lett. 1998;34(10):969–971.
- 9.Moxey C, Sangwine S, Ell T. Hypercomplex correlation techniques for vector images. IEEE Transactions on Signal Processing. 2003;51(7):1941–1953.
- 10.Delsuc MA. Spectral representation of 2D NMR spectra by hypercomplex numbers. Journal of Magnetic Resonance. 1988;77(1):119–124.
- 11.Ell T. Quaternion-Fourier transforms for analysis of two-dimensional linear time-invariant partial differential systems. Proceedings of the 32nd IEEE Conference on Decision and Control; December 1993. pp. 1830–1841.
- 12.Bülow T, Sommer G. Das Konzept einer zweidimensionalen Phase unter Verwendung einer algebraisch erweiterten Signalrepräsentation [The concept of a two-dimensional phase using an algebraically extended signal representation]. Mustererkennung 1997, 19. DAGM-Symposium; Heidelberg: Springer; 1997. pp. 351–358.
- 13.Bülow T, Sommer G. Multi-dimensional signal processing using an algebraically extended signal representation. In: Sommer G, editor. AFPAC 1997. LNCS, vol. 1315. Springer; Heidelberg: 1997. pp. 148–163.
- 14.Sangwine S, Ell TA. The discrete Fourier transform of a colour image. In: Blackledge JM, Turner MJ, editors. Image Processing II: Mathematical Methods, Algorithms and Applications. 2000. pp. 430–441.
- 15.Jian B, Vemuri BC. Multi-fiber reconstruction from diffusion MRI using mixture of Wisharts and sparse deconvolution. Inf. Process. Med. Imaging. 2007;20. doi: 10.1007/978-3-540-73273-0_32.
- 16.Jian B, Vemuri BC. A unified computational framework for deconvolution to reconstruct multiple fibers from diffusion weighted MRI. IEEE Trans. Med. Imaging. 2007;26(11):1464–1471. doi: 10.1109/TMI.2007.907552.
- 17.Jian B, Vemuri BC, Özarslan E, Carney P, Mareci T. A novel tensor distribution model for the diffusion-weighted MR signal. NeuroImage. 2007;37(1):164–176. doi: 10.1016/j.neuroimage.2007.03.074.
- 18.Subakan ÖN, Jian B, Vemuri BC, Vallejos CE. Feature preserving image smoothing using a continuous mixture of tensors. IEEE International Conference on Computer Vision; Rio de Janeiro, Brazil; October 2007.
- 19.Subakan ÖN, Vemuri BC. Image segmentation via convolution of a level-set function with a Rigaut kernel. IEEE Conference on Computer Vision and Pattern Recognition; Anchorage, Alaska; June 2008.
- 20.Chan TF, Sandberg BY, Vese LA. Active contours without edges for vector-valued images. Journal of Visual Communication and Image Representation. 2000;11:130–141.
- 21.Sapiro G. Color snakes. Comput. Vis. Image Underst. 1997;68(2):247–253.
- 22.Martin D, Fowlkes C, Tal D, Malik J. A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. IEEE Intl. Conf. on Computer Vision; July 2001. pp. 416–423.
- 23.Comaniciu D, Meer P. Mean shift: A robust approach toward feature space analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2002;24(5):603–619.
- 24.Prentice MJ. Orientation statistics without parametric assumptions. Journal of the Royal Statistical Society, Series B (Methodological). 1986;48(2):214–222.
- 25.Herz CS. Bessel functions of matrix argument. The Annals of Mathematics. 1955;61(3):474–523.