Author manuscript; available in PMC: 2020 Jul 20.
Published in final edited form as: Med Image Comput Comput Assist Interv. 2011;14(Pt 3):272–279. doi: 10.1007/978-3-642-23626-6_34

Segmenting Images by Combining Selected Atlases on Manifold

Yihui Cao 1, Yuan Yuan 1, Xuelong Li 1, Baris Turkbey 2, Peter L Choyke 2, Pingkun Yan 1
PMCID: PMC7370860  NIHMSID: NIHMS1600418  PMID: 22003709

Abstract

Atlas selection and combination are two critical factors affecting the performance of atlas-based segmentation methods. In existing works, these tasks are performed in the original image space. However, the intrinsic similarity between images may not be accurately reflected by the Euclidean distance in this high-dimensional space. Thus, the selected atlases may be far from the input image, and the template generated by combining those atlases for segmentation can be misleading. In this paper, we propose to select and combine atlases by projecting the images onto a low-dimensional manifold. With this approach, atlases can be selected according to their intrinsic similarity to the patient image. A novel method is also proposed to compute the weights for more effectively combining the selected atlases to achieve better segmentation performance. The experimental results demonstrate that our proposed method is robust and accurate, especially when the number of training samples becomes large.

1. Introduction

Radiation therapy is often used in the treatment of cancers. It is important to acquire the accurate location of the target organ during the therapy. In clinical routine, this localization task is often performed by manually segmenting a series of images of a patient. However, manual segmentation is a tedious and time-consuming procedure. In recent years, atlas-based methods, owing to their full automation and high accuracy, have become widely used approaches for medical image segmentation [1].

Generally, an atlas consists of a raw image and a corresponding label image. The basic idea of atlas-based segmentation is that if the raw image of an atlas is highly similar to the target image, the corresponding label image can be used to segment the target image through mapping. However, in practice, it is difficult to find a highly similar atlas. In existing works, each atlas is first registered to the target image, yielding a deformed image close to the target image. Then the "most similar" individual atlas is selected and used for segmentation [1]. It is also straightforward to extend this scheme to multiple atlases, where more than one atlas can be selected and combined for segmentation [2].

It has been shown that using multiple atlases can yield more accurate results than using a single atlas [3]. Aljabar et al. [4] investigated different atlas selection strategies and showed that the selection strategy is one of the significant factors affecting performance. Roche et al. [5] studied three intensity-based image similarity measurements (mean square distance, correlation coefficient, mutual information) and showed that mutual information is the "best" option assuming a statistical relationship. As for combining multiple atlases, the Weighted Voting algorithm was the most widely used method in previous works [2,3], where computing the optimal weights of the corresponding atlases is the key to such an algorithm.

Existing works mainly addressed the problems of atlas selection and weighted combination in the high-dimensional image space [1,2,3]. However, it has been shown that the Euclidean distance in high-dimensional space may not accurately reflect the intrinsic similarity between images [6]. It was also shown that the geodesic distance on the high-dimensional manifold is a better measurement for computing the similarity between images.

However, in practice, it is difficult to compute the geodesic distance directly in the high-dimensional space. One approach to this problem is to project the high-dimensional data onto a low-dimensional manifold space while preserving the local geometry, using nonlinear dimensionality reduction techniques [6,7,8]. In such a low-dimensional space, the Euclidean distance can approximately reflect the intrinsic similarity between images. Based on this fact, in this paper, we propose a new method to select atlases according to the intrinsic similarity between images. In addition, we develop a novel method to compute the weights for combining the selected images into a single template to achieve better segmentation performance. The basic idea is to reconstruct the input patient image from the selected images in the low-dimensional manifold space.

The rest of this paper is organized as follows. In Section 2, we describe the workflow of our method and present the proposed atlas selection and combination methods. Our data and experimental results are provided in Section 3. Some conclusions about our work are drawn in Section 4.

2. Method

In this section, we describe the workflow of our proposed method, shown in Fig. 1. The upper row of the figure shows the processing of the raw atlas images, and the lower row shows the processing of the atlas label images. In our work, the raw image of an atlas is denoted by A and the corresponding label image is denoted by L. The image to be segmented, namely the patient image, is denoted by P. The whole method includes three main steps: atlas registration, atlas selection, and atlas combination.

Fig. 1. The workflow of our method. The upper row shows the process for analyzing the raw images of the atlases, and the lower row shows the processing steps of the corresponding label images.

2.1. Atlas Registration

In the first stage, registration, each raw atlas image Ai (i = 1, 2, …, N) is matched to the patient image P. The registration includes a 2D rigid registration followed by a non-rigid B-spline deformable registration, which yields a set of transformation parameters. Using these parameters, each raw image Ai and its corresponding label image Li are transformed into Âi and L̂i, respectively, which are close to the patient image.

2.2. Manifold Learning for Atlas Selection

According to the theory of manifold learning, the raw atlas images and the patient image can be regarded as points on a manifold embedded in a high-dimensional space, namely the image space [6,7,8]. We take the "Swiss roll" structure as an example to illustrate the motivation for using manifold learning in atlas selection, as shown in Fig. 2. The points in the left 3D "Swiss roll" represent images in the high-dimensional space, and the points P, A1, and A2 represent a patient image and two atlases, respectively. It can be seen that in the Euclidean sense the distance PA1 > PA2, while along the manifold PA1 ≪ PA2. Using manifold learning techniques, the high-dimensional data can be projected onto a low-dimensional space that preserves the neighborhood information, as shown on the right of the figure. In this low-dimensional space, the distances from P to the other points directly reflect the distances along the manifold. That is to say, the intrinsic similarity between images is better reflected in the low-dimensional space than in the high-dimensional one.

Fig. 2. An example illustrating the motivation of using manifold learning for atlas selection in our method. In the Euclidean space the distance PA1 > PA2, while in the manifold space PA1 ≪ PA2.

Many classical manifold learning algorithms exist, such as the linear projection of Principal Component Analysis (PCA) and the nonlinear ISOMAP [6] and LLE [7] algorithms. In this work, we applied the Locality Preserving Projections (LPP) [8] algorithm, because it shares the advantages of both linear and nonlinear methods: it linearly projects high-dimensional data onto a low-dimensional space while also preserving local neighborhood information.

In our work, all images are resized to the same size of m pixels, and each raw atlas image is represented by an m-dimensional vector hi. The high-dimensional vectors hi are projected to low-dimensional vectors li of dimension n (n ≪ m). This projection yields a transformation matrix W, with which the patient image P is then projected onto the same low-dimensional space. The projection is defined as follows:

p_l = W^T p_h, (1)

where the vectors p_h and p_l represent the patient image in the high- and low-dimensional spaces, respectively.

In the low-dimensional space, the K nearest vectors l_k (k = 1, 2, …, K) to p_l are selected using the Euclidean distance. Correspondingly, the K raw atlas images Â_k most similar to the patient image P are identified, and the corresponding label images L̂_k are selected from L̂_i. These selected label images are used for combination in the next stage.
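The projection of Eq. (1) and the subsequent nearest-neighbor selection can be sketched as follows. This is a minimal numpy illustration, assuming the LPP matrix W has already been learned and the images have already been registered and vectorized; the function name and toy data are our own.

```python
import numpy as np

def select_atlases(W, atlas_vecs, patient_vec, K):
    """Project atlases and the patient image with the learned matrix W
    (m x n), then pick the K atlases nearest to the patient in the
    low-dimensional space using the Euclidean distance."""
    low_atlases = atlas_vecs @ W           # (N, m) @ (m, n) -> (N, n)
    p_l = patient_vec @ W                  # Eq. (1): p_l = W^T p_h
    dists = np.linalg.norm(low_atlases - p_l, axis=1)
    return np.argsort(dists)[:K]           # indices of the K most similar atlases

# Toy usage: 5 atlases of m = 4 "pixels", projected to n = 2 dimensions.
# W keeps the first two coordinates, so similarity is judged on those alone.
W = np.eye(4)[:, :2]
atlases = np.array([[0.0, 0.0, 0.0, 0.0],
                    [1.0, 0.0, 0.0, 0.0],
                    [5.0, 5.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0, 0.0],
                    [3.0, 3.0, 3.0, 3.0]])
patient = np.array([0.9, 0.1, 9.0, 9.0])   # close to atlas 1 in the subspace
idx = select_atlases(W, atlases, patient, K=3)
```

Note that atlas 1 is selected first even though the patient differs greatly from it in the discarded coordinates, which is exactly the point of measuring similarity in the low-dimensional space.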

2.3. Atlas Combination

In the atlas combination stage, we employ the robust Weighted Voting algorithm [2,3], defined as follows:

L = ∑_{k=1}^{K} w_k L̂_k, (2)

where L is the combined label image, and w = (w_1, w_2, …, w_K) is a vector of weights for the selected label images L̂_k. Computing these weights is clearly the key step of the algorithm.

We assume that the vectors l_k are distributed on a locally linear patch of the low-dimensional space [7]. Combining the selected label images into a single template can then be viewed as reconstructing the vector p_l from the vectors l_k. Thus, the optimal combination weights can be found by minimizing the linear reconstruction error [9]. The error ε is defined as follows:

argmin_{w_1,…,w_K} ε = ‖p_l − ∑_{k=1}^{K} w_k l_k‖², (3)

subject to the constraint ∑_{k=1}^{K} w_k = 1. This is a constrained least squares problem, which we solve using the Lagrange multiplier method by introducing a Gram matrix G:

G = (p_l 1^T − L)^T (p_l 1^T − L), (4)

where 1 is a column vector of K ones and L is the matrix whose columns are the vectors l_k. The problem then has the closed-form solution:

w = G^{-1} 1 / (1^T G^{-1} 1). (5)

Applying the weights w in the Weighted Voting algorithm (2) yields a single combined image L. Since the boundary of L may not be smooth, several basic morphological operations are applied to address this problem. This finally yields the template L_dst, which is used for the final patient image segmentation.
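The weight computation of Eqs. (3)-(5) can be sketched in a few lines of numpy. This is an illustrative implementation under the assumptions above; the function name and toy vectors are our own, and we solve the linear system rather than explicitly inverting G.

```python
import numpy as np

def reconstruction_weights(p_l, Lmat):
    """Minimize ||p_l - sum_k w_k l_k||^2 subject to sum_k w_k = 1
    via the Gram matrix G = (p_l 1^T - L)^T (p_l 1^T - L), Eqs. (3)-(5).
    Lmat is (n, K) with the low-dimensional atlas vectors l_k as columns."""
    ones = np.ones(Lmat.shape[1])
    D = np.outer(p_l, ones) - Lmat         # p_l 1^T - L, shape (n, K)
    G = D.T @ D                            # Gram matrix, shape (K, K)
    Ginv1 = np.linalg.solve(G, ones)       # G^{-1} 1, without forming G^{-1}
    return Ginv1 / (ones @ Ginv1)          # Eq. (5): normalize so weights sum to 1

# Toy usage: p_l sits symmetrically between l_1 = (0,0) and l_2 = (2,0),
# so by symmetry the optimal weights are (0.5, 0.5).
Lmat = np.array([[0.0, 2.0],
                 [0.0, 0.0]])
p_l = np.array([1.0, 0.5])
w = reconstruction_weights(p_l, Lmat)
```

The returned weights would then be plugged into Eq. (2) to fuse the selected label images L̂_k into the combined image L.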

3. Experiments

3.1. Data and Measurement

In our experiments, the proposed method was tested on 40 MR prostate images taken from 40 different patients. Each image has 512×512 pixels. The binary label images, manually segmented by an expert, were used as the ground truth. The Dice Similarity Coefficient (DSC) was used to quantitatively evaluate the segmentation performance of our proposed method. The DSC is defined as follows:

DSC(A, B) = 2|A ∩ B| / (|A| + |B|), (6)

where A is the ground truth image, B is the automatically segmented result, and |·| denotes the area of the target region. The DSC value varies between 0 and 1, and a higher value indicates better segmentation.
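Eq. (6) reduces to a few lines for binary label images. The following sketch and its toy 4×4 masks are our own illustration:

```python
import numpy as np

def dice(A, B):
    """Dice Similarity Coefficient (Eq. 6) for binary label images:
    twice the overlap area divided by the sum of the two areas."""
    A = A.astype(bool)
    B = B.astype(bool)
    return 2.0 * np.logical_and(A, B).sum() / (A.sum() + B.sum())

# Toy masks: |A| = 4, |B| = 4, |A ∩ B| = 2, so DSC = 2*2 / (4+4) = 0.5.
A = np.zeros((4, 4), int); A[1:3, 1:3] = 1
B = np.zeros((4, 4), int); B[1:3, 2:4] = 1
```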

3.2. Results

Note that both the atlas selection and the weight computation in our method were performed in a low-dimensional space. To reduce computation time, we extracted a region of interest of 256×256 pixels around the image center. That is, each atlas image corresponds to a point in a 65,536-dimensional image space. All data were projected onto a low-dimensional space using manifold learning. We evaluated the segmentation performance with the dimensionality varying from 1 to 39, where 39 is the upper bound set by the number of training images, and found that a dimensionality of 38 yielded the best performance in our experiments; this value was then adopted. Before the projection, all images were preprocessed by histogram equalization to reduce the influence of illumination. Given the relatively limited data, a leave-one-out approach was employed to validate the performance of our method: each time, one of the 40 images was used as the patient image and the remaining 39 served as atlases.
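The preprocessing described above (center 256×256 ROI extraction plus histogram equalization, followed by vectorization into a 65,536-dimensional point) can be sketched as follows. This is a generic numpy sketch, not the authors' implementation; the function names and the random test image are our own, and a real pipeline would load the registered MR slices instead.

```python
import numpy as np

def center_crop(img, size=256):
    """Extract the size x size region of interest around the image center."""
    r0 = (img.shape[0] - size) // 2
    c0 = (img.shape[1] - size) // 2
    return img[r0:r0 + size, c0:c0 + size]

def hist_equalize(img, levels=256):
    """Global histogram equalization for an 8-bit grayscale image:
    map each gray level through the normalized cumulative histogram."""
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = np.cumsum(hist).astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())   # normalize to [0, 1]
    lut = np.round(cdf * (levels - 1)).astype(np.uint8)
    return lut[img]

# Stand-in for a registered 512x512 slice.
img = np.random.default_rng(1).integers(0, 256, (512, 512), dtype=np.uint8)
roi = center_crop(img, 256)
vec = hist_equalize(roi).ravel()   # 65,536-dimensional vector for manifold learning
```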

In our experiments, the number of selected atlases was the only variable parameter; we therefore report results with this number varying from 1 to 39. Fig. 3 is a box-and-whisker diagram showing the distributions of our results. Each column depicts the five-number summary: the smallest observation, lower quartile, median, upper quartile, and largest observation. It can be seen that the median DSC varied between 0.88 and 0.92 and was larger than 0.90 in most cases.

Fig. 3. A box-and-whisker diagram showing the distributions of our results.

We compared the performance of our method with the state-of-the-art method proposed by Klein et al. [2]. The main difference between the two methods is that the atlas selection and the weight computation in our method are performed in a low-dimensional space. For a fair comparison, both methods were run on the same registration results. With the number of selected atlases ranging from 1 to 39, the average DSC values over the 40 experiments are shown in Fig. 4. The DSC values of our method were higher than Klein's in most cases. In particular, over-fitting clearly occurred in Klein's method when the number of selected atlases exceeded a certain limit. The results show that our method is superior to Klein's method for relatively large numbers of selected atlases (p < 0.01) and is not inferior for small numbers.

Fig. 4. The comparison of our method and Klein's method by varying the number of atlases.

When the number of selected atlases was set to 35, several visual segmentation results were obtained, as shown in Fig. 5. Klein's qualitative results are in the top row and ours are in the bottom row. The red contours represent the ground truth, and the yellow contours were automatically delineated by the two methods.

Fig. 5. Qualitative results of Klein's method (top row) and our method (bottom row). The red contours represent the ground truth; the yellow contours were automatically delineated by the two methods.

4. Conclusion

In this paper, we proposed a novel atlas-based method for automatic medical image segmentation. Atlases are selected according to their intrinsic similarity to the patient image in a low-dimensional space, and a novel method computes the weights for combining the selected atlases to achieve better segmentation performance. Compared with the state-of-the-art method [2], the experimental results show that our method is robust and promising. In future work, we will test our algorithm on other datasets and extend our method to 3D medical image segmentation.

Acknowledgment

The presented research work is supported by the National Basic Research Program of China (973 Program) (Grant No. 2011CB707000) and the National Natural Science Foundation of China (Grant No. 61072093).

References

  1. Wu M, Rosano C, Lopez-Garcia P, Carter CS, Aizenstein HJ: Optimum template selection for atlas-based segmentation. NeuroImage 34(4), 1612–1618 (2007)
  2. Klein S, van der Heide UA, Lips IM, van Vulpen M, Staring M, Pluim JPW: Automatic segmentation of the prostate in 3D MR images by atlas matching using localized mutual information. Med. Phys. 35(4), 1407–1417 (2008)
  3. Artaechevarria X, Muñoz-Barrutia A, Ortiz-de-Solórzano C: Combination strategies in multi-atlas image segmentation: Application to brain MR data. IEEE Trans. Medical Imaging 28(8), 1266–1277 (2009)
  4. Aljabar P, Heckemann R, Hammers A, Hajnal J, Rueckert D: Classifier selection strategies for label fusion using large atlas databases. In: Ayache N, Ourselin S, Maeder A (eds.) MICCAI 2007, Part I. LNCS, vol. 4791, pp. 523–531. Springer, Heidelberg (2007)
  5. Roche A, Malandain G, Ayache N: Unifying maximum likelihood approaches in medical image registration. Int. J. Imag. Syst. Technol., 71–80 (2000)
  6. Tenenbaum JB, de Silva V, Langford JC: A global geometric framework for nonlinear dimensionality reduction. Science 290 (2000)
  7. Roweis S, Saul LK: Nonlinear dimensionality reduction by locally linear embedding. Science 290 (2000)
  8. He X, Niyogi P: Locality preserving projections. In: Proc. Neural Information Processing Systems (2003)
  9. Chang H, Yeung DY, Xiong Y: Super-resolution through neighbor embedding. In: Proc. IEEE CVPR, vol. 1, pp. 275–282 (2004)
