Author manuscript; available in PMC: 2014 Jan 13.
Published in final edited form as: Inf Process Med Imaging. 2011;22:525–537. doi: 10.1007/978-3-642-22092-0_43

A Unified Framework for Joint Segmentation, Nonrigid Registration and Tumor Detection: Application to MR-Guided Radiotherapy*

Chao Lu 1, Sudhakar Chelikani 2, James S Duncan 1,2
PMCID: PMC3889153  NIHMSID: NIHMS493225  PMID: 21761683

Abstract

Image guided external beam radiation therapy (EBRT) for the treatment of cancer enables accurate placement of radiation dose to the cancerous region. However, the deformation of soft tissue during the course of treatment, such as in cervical cancer, presents significant challenges. Furthermore, the presence of pathologies such as tumors may violate registration constraints and cause registration errors. In this paper, we present a unified MAP framework that performs automatic segmentation, nonrigid registration and tumor detection simultaneously. It can generate a tumor probability map while progressively identifying the boundary of an organ of interest based on the achieved transformation. We demonstrate the approach on a set of 30 T2-weighted MR images, and the results show that the approach performs better than similar methods which separate the registration and segmentation problems. In addition, the detection result generated by the proposed method shows high agreement with the manual delineation by a qualified clinician.

1 Introduction

External beam radiation therapy (EBRT) is the primary treatment modality for cervical cancer [12]. Imaging systems capable of visualizing patient soft-tissue structure in the treatment position have become the dominant method of positioning patients for both conformal and stereotactic radiation therapy. Traditional CT images suffer from low resolution and low soft tissue contrast, whereas Magnetic Resonance (MR) imaging can characterize deformable structures with superior visualization and differentiation of normal and tumor-infiltrated soft tissue. In addition, advanced MR imaging with modalities such as diffusion, perfusion and spectroscopic imaging has the potential to better localize and understand the disease and its response to treatment. Therefore, magnetic resonance guided radiation therapy (MRgRT) systems with integrated MR imaging in the treatment room are now being developed as advanced systems for radiation therapy [7].

The goal in radiotherapy cancer treatment is to deliver as much dose as possible to the clinical target volume (CTV) while delivering as little dose as possible to the surrounding organs at risk [4,10]. When higher doses are to be delivered, precise and accurate targeting is essential because of unpredictable inter- and intra-fractional organ motion over the course of the weekly treatments, which requires accurate nonrigid mapping between planning day and treatment day. However, the presence of pathologies such as tumors may violate registration constraints and cause registration errors, because the abnormalities often invalidate the gray-level dependency assumptions usually made in intensity-based registration. In addition, the registration problem is challenging due to the missing correspondences caused by tumor regression. Incorporating some knowledge about abnormalities can therefore improve the registration. Meanwhile, segmentation of the soft tissue structures over the course of EBRT is challenging due to structure adjacency and deformation over treatment.

Some initial work has been performed on simultaneous registration and segmentation, and on incorporating abnormality detection. Chelikani et al. integrated rigid 2D portal to 3D CT registration and pixel classification in an entropy-based formulation [3]. Yezzi et al. integrated deformable-model segmentation with rigid and affine registration [17]. Pohl et al. performed voxel-wise classification and registration to align an atlas to MRI [14]. Automatic mass detection has mainly been studied for mammography [13]. Hachama et al. proposed a registration technique which neglects the deformation generated by abnormalities [5]; this assumption does not always hold, which raises questions regarding the applicability of the method.

In this paper, we present a probability-based technique as an extension of the work in [11]. Our model is based on a MAP framework which achieves segmentation and nonrigid registration while simultaneously estimating a tumor probability map, which gives each voxel's probability of belonging to a tumor. In this manner, we can interleave these processes and take proper advantage of the dependence among them. Using the proposed approach, it is easy to calculate the location changes of the lesion for diagnosis and assessment, so we can precisely guide the interventional devices toward the tumor during radiation therapy. Escalated dosages can then be administered while maintaining or lowering normal tissue irradiation. A number of clinical treatment sites will benefit from our MR-guided radiotherapy technology, especially where tumor regression and its effect on surrounding tissue can be significant. Here we focus on the treatment of cervical cancer as the key example application.

2 Description of the Model

2.1 A Unified Framework: Bayesian Formulation

Let I0 and Id be the planning day (day 0) and treatment day (day d) 3D MR images respectively, and S0 be the set of segmented planning day organs. A unified framework is developed using Bayesian analysis to calculate the most likely segmentation in treatment day fractions Sd, the deformation field between the planning day and treatment day data T0d, and the tumor map M at treatment day which associates with each pixel of the image its probability of belonging to a tumor.

$$\hat{S}_d, \hat{T}_{0d}, \hat{M} = \arg\max_{S_d, T_{0d}, M} \big[\, p(S_d, T_{0d}, M \mid I_0, I_d, S_0) \,\big] \quad (1)$$

This multiparameter MAP estimation problem is in general difficult to solve; however, we reformulate it so that it can be solved in two basic iterative computational stages using an iterated conditional modes (ICM) strategy [1,11]. With k indexing each iterative step, we have:

$$\hat{S}_d^k = \arg\max_{S_d^k} \big[\, p(S_d^k \mid T_{0d}^k(S_0), I_d, M^k) \,\big] \quad (2)$$
$$\hat{T}_{0d}^{\,k+1}, \hat{M}^{k+1} = \arg\max_{T_{0d}^{k+1}, M^{k+1}} \big[\, p(T_{0d}^{k+1}, M^{k+1} \mid S_d^k, S_0, I_d, I_0) \,\big] \quad (3)$$

These two equations represent the key problems we are addressing. Equation (2) estimates the segmentation of the important day d structure (Sdk). Equation (3) estimates the next iterative step of the mapping T0dk+1 between the day 0 and day d spaces, as well as the probability tumor detection map Mk+1.
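For illustration, a minimal Python sketch (not the authors' code) of the ICM alternation in Eqs. (2)-(3) is given below. The two update functions are hypothetical stand-ins for the minimizers of Eqs. (9) and (20) developed later in the paper.

```python
import numpy as np

def update_segmentation(Id, warped_S0_levelset, M):
    # Placeholder: would evolve the day-d level set to minimize Eq. (9).
    return warped_S0_levelset

def update_registration_and_map(Sd, S0, Id, I0, T, M):
    # Placeholder: would run conjugate gradient on Eq. (20).
    return T, M

def icm_optimize(I0, Id, S0, T, M, n_iters=10):
    """Alternate the two MAP sub-problems of Eqs. (2)-(3)."""
    Sd = np.copy(S0)
    for k in range(n_iters):
        Sd = update_segmentation(Id, Sd, M)                        # Eq. (2)
        T, M = update_registration_and_map(Sd, S0, Id, I0, T, M)   # Eq. (3)
    return Sd, T, M
```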

2.2 Segmentation

In this paper, we use Eq. (2) to segment the normal or non-cancerous organs, hence we assume the segmentations Sdk here are independent of the tumor detection map M. Bayes rule is applied to Eq. (2):

$$\hat{S}_d^k = \arg\max_{S_d^k} \big[\, p(S_d^k \mid T_{0d}^k(S_0), I_d, M^k) \,\big] = \arg\max_{S_d^k} \big[\, p(S_d^k \mid T_{0d}^k(S_0), I_d) \,\big] = \arg\max_{S_d^k} \big[\, p(I_d \mid S_d^k) \cdot p(T_{0d}^k(S_0) \mid S_d^k) \cdot p(S_d) \,\big] \quad (4)$$

Shape Prior Information

The segmentation module here is similar to the previous work in [11]. We assume that the priors are stationary over the iterations, so we can drop the k index for that term only, i.e. p(Sdk) = p(Sd). Instead of using a point model to represent the object, we choose a level set representation of the object to build the shape prior model [9], and then define the joint probability density function in Eq. (4).

Consider a training set of n rigidly aligned images, with M objects or structures in each image. The training data were generated from manual segmentations by a qualified clinician. Each object in the training set is embedded as the zero level set of a higher dimensional level set Ψ, with negative distances inside and positive distances outside the object. Using the technique developed by Leventon et al. [9], the mean (Ψ̄) and the variance of the boundary of each object can be computed using principal component analysis (PCA). An estimate of the object shape Ψi can then be represented by k principal components and a k-dimensional vector of coefficients αi (where k < n):

$$\tilde{\Psi}_i = U_k \alpha_i + \bar{\Psi} \quad (5)$$

where Uk is the matrix of principal modes generated by PCA. Under the assumption of a Gaussian distribution over the shape coefficients αi, we can compute the probability of a given shape as in Eq. (6), where Σk is the diagonal matrix of the corresponding singular values.

$$p(S_d) = p(\alpha_i) = \frac{1}{\sqrt{(2\pi)^k |\Sigma_k|}} \exp\!\left[-\frac{1}{2}\, \alpha_i^T \Sigma_k^{-1} \alpha_i\right] \quad (6)$$
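A small Python sketch, under toy assumptions, of the PCA shape model in Eqs. (5)-(6): training level sets are flattened, the mean shape and the top-k modes come from an SVD, and the Gaussian prior is evaluated on the shape coefficients.

```python
import numpy as np

def build_shape_model(train_levelsets, k):
    X = np.stack([psi.ravel() for psi in train_levelsets])      # n x d matrix
    psi_bar = X.mean(axis=0)
    _, s, Vt = np.linalg.svd(X - psi_bar, full_matrices=False)
    Uk = Vt[:k].T                                                # d x k eigenmodes
    var_k = (s[:k] ** 2) / len(train_levelsets)                  # mode variances (diagonal of Sigma_k)
    return psi_bar, Uk, var_k

def reconstruct_shape(alpha, psi_bar, Uk, vol_shape):
    # Eq. (5): Psi_tilde = U_k alpha + Psi_bar
    return (Uk @ alpha + psi_bar).reshape(vol_shape)

def shape_log_prior(alpha, var_k):
    # Log of Eq. (6) up to an additive constant.
    return -0.5 * np.sum(alpha ** 2 / var_k)
```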

Segmentation Likelihoods

We then impose a key assumption: the likelihood term is separable into two independent data-related likelihoods, requiring that the estimate of the structure at day d be close to: i.) the same structure segmented at day 0, but mapped to a new estimated position at day d by the current iterative mapping estimate T0dk, and ii.) the intensity-based feature information derived from the day d image [11].

In Equation (4), p(T0dk(S0)|Sdk) constrains the organ segmentation at day d to adhere to the day 0 organs transformed by the current mapping estimate T0dk. We therefore assume the following probability density for the day 0 segmentation likelihood term:

$$p\big(T_{0d}^k(S_0) \mid S_d^k\big) = \frac{1}{Z} \prod_{(x,y,z)} \exp\!\left[-\big(\Psi_{T_{0d}^k(S_0)} - \Psi_{S_d^k}\big)^2\right] \quad (7)$$

In Equation (4), p(Id|Sdk) indicates the probability of producing an image Id given Sdk. In three dimensions, assuming gray level homogeneity within each object, we use the imaging model defined by Chan and Vese [2] in Eq. (8), where c1 and σ1 are the average and standard deviation of Id inside Sdk, and c2 and σ2 are the average and standard deviation of Id outside Sdk.

$$p\big(I_d \mid S_d^k\big) = \prod_{inside(S_d^k)} \exp\!\left[-\big(I_d(x) - c_1\big)^2 / 2\sigma_1^2\right] \cdot \prod_{outside(S_d^k)} \exp\!\left[-\big(I_d(x) - c_2\big)^2 / 2\sigma_2^2\right] \quad (8)$$
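A brief sketch of the log of the Chan-Vese imaging likelihood in Eq. (8), evaluated on the day-d image for a binary interior mask of the current level set.

```python
import numpy as np

def chan_vese_loglik(Id, inside_mask, eps=1e-8):
    # c1, sigma1 are estimated inside the current segmentation, c2, sigma2 outside.
    inside, outside = Id[inside_mask], Id[~inside_mask]
    c1, s1 = inside.mean(), inside.std() + eps
    c2, s2 = outside.mean(), outside.std() + eps
    return (-np.sum((inside - c1) ** 2) / (2 * s1 ** 2)
            - np.sum((outside - c2) ** 2) / (2 * s2 ** 2))
```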

Energy Function

Combining (4), (6), (7), and (8), we introduce the segmentation energy function Eseg, defined by:

$$E_{seg} = -\ln p\big(S_d^k \mid T_{0d}^k(S_0), I_d\big) = -\ln\!\big[\, p(I_d \mid S_d^k) \cdot p(T_{0d}^k(S_0) \mid S_d^k) \cdot p(S_d^k) \,\big] \simeq \lambda_1 \int_{inside(S_d^k)} |I_d(x) - c_1|^2\, dx + \lambda_2 \int_{outside(S_d^k)} |I_d(x) - c_2|^2\, dx + \gamma \int_x \big|\Psi_{T_{0d}^k(S_0)} - \Psi_{S_d^k}\big|^2\, dx + \omega_i\, \alpha_i^T \Sigma_k^{-1} \alpha_i \quad (9)$$

Notice that the MAP estimate of the objects in (4), Ŝdk, is also the minimizer of the above energy functional Eseg. This minimization problem can be formulated and solved using the level set surface evolution method.
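A rough gradient-descent sketch of minimizing Eq. (9) with respect to the day-d level set Psi (negative inside, as in the paper). Curvature regularization, narrow-band restriction and re-projection onto the PCA shape space are omitted here, and lam1, lam2, gamma, dt are illustrative values, not the authors' settings.

```python
import numpy as np

def evolve_level_set(psi, Id, psi_warped_day0,
                     lam1=1.0, lam2=1.0, gamma=0.5, dt=0.1, n_steps=200):
    for _ in range(n_steps):
        inside = psi < 0
        c1 = Id[inside].mean() if inside.any() else 0.0
        c2 = Id[~inside].mean() if (~inside).any() else 0.0
        # Chan-Vese data force: grows the interior where Id fits c1 better than c2.
        data_force = lam1 * (Id - c1) ** 2 - lam2 * (Id - c2) ** 2
        # Coupling force pulling Psi toward the warped day-0 level set.
        coupling_force = 2.0 * gamma * (psi - psi_warped_day0)
        psi = psi + dt * data_force - dt * coupling_force
    return psi
```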

2.3 Registration and Tumor Detection

The second stage of the proposed strategy described above in Eq. (3) can be further developed using Bayes rule as follows in Eq.(10). As indicated in Section 2.2, the segmentations Sdk are independent of the tumor detection map M, and the priors are stationary over the iterations.

$$\hat{T}_{0d}^{\,k+1}, \hat{M}^{k+1} = \arg\max_{T_{0d}^{k+1}, M^{k+1}} \big[\, p(T_{0d}^{k+1}, M^{k+1} \mid S_d^k, S_0, I_d, I_0) \,\big]$$
$$= \arg\max_{T_{0d}^{k+1}, M^{k+1}} \big[\, \ln p(S_d^k, S_0 \mid T_{0d}^{k+1}, M^{k+1}) + \ln p(I_d, I_0 \mid T_{0d}^{k+1}, M^{k+1}) + \ln p(T_{0d}, M) \,\big]$$
$$= \arg\max_{T_{0d}^{k+1}, M^{k+1}} \big[\, \ln p(S_d^k, S_0 \mid T_{0d}^{k+1}) + \ln p(I_d, I_0 \mid T_{0d}^{k+1}, M^{k+1}) + \ln p(T_{0d} \mid M) + \ln p(M) \,\big] \quad (10)$$

The first term on the right hand side represents the conditional likelihood related to mapping the segmented soft tissue structures at days 0 and d, and the second term registers the intensities of the images while simultaneously estimating the tumor probability map. The third and fourth terms represent prior assumptions and constraints on the overall nonrigid mapping and on the tumor map.

Segmented Organ Matching

As discussed in the segmentation section, each object is represented by the zero level set of a higher dimensional level set Ψ. Assuming the objects vary during the treatment process according to a Gaussian distribution, and given that the different organs can be mapped separately, we further simplify the organ matching term as:

$$\ln p\big(S_d^k, S_0 \mid T_{0d}^{k+1}\big) = \sum_{obj=1}^{N} \ln p\big(S_{d,obj}^k, S_{0,obj} \mid T_{0d}^{k+1}\big) = \sum_{obj=1}^{N} \int_x \ln \frac{1}{\sqrt{2\pi}\,\sigma_{obj}} \exp\!\left[-\frac{\big(\Psi_{T_{0d}^{k+1}(S_{0,obj})} - \Psi_{S_{d,obj}^k}\big)^2}{2\sigma_{obj}^2}\right] dx \simeq -\sum_{obj=1}^{N} \omega_{obj} \int_x \big[\Psi_{T_{0d}^{k+1}(S_{0,obj})} - \Psi_{S_{d,obj}^k}\big]^2\, dx \quad (11)$$

where N is the number of objects in each image and the ωobj are used to weight the different organs. When minimized w.r.t. T0dk+1, the organ matching term ensures that the transformed day 0 organs and the segmented day d organs align over their regions [11].
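A direct sketch of the organ-matching energy in Eq. (11): a weighted sum of squared differences between the warped day-0 level sets and the current day-d level sets, one term per organ.

```python
import numpy as np

def organ_matching_energy(warped_day0_levelsets, dayd_levelsets, weights):
    # Each list entry is the level set (signed distance volume) of one organ.
    return sum(w * np.sum((psi0 - psid) ** 2)
               for psi0, psid, w in zip(warped_day0_levelsets, dayd_levelsets, weights))
```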

Intensity Matching and Tumor Probability Map

In order to define the likelihood p(Id,I0|T0dk+1,Mk+1), we assume conditional independence over the voxel locations x, as discussed in [18].

$$p\big(I_d, I_0 \mid T_{0d}^{k+1}, M^{k+1}\big) = \prod_x p\big(I_d(x), T_{0d}^{k+1}(I_0(x)) \mid M^{k+1}(x)\big) \quad (12)$$

Different from the work in [11], which uses only a single similarity metric, here we model the probability of the pair p(Id(x), T0dk+1(I0(x)) | Mk+1(x)) as dependent on the class of the voxel x. Each class is characterized by a probability distribution, denoted by pN for normal tissue and pT for tumor. Let Mk+1(x) be the tumor map which associates to x its probability of belonging to a tumor. The probability distribution p(Id(x), T0dk+1(I0(x)) | Mk+1(x)) can then be defined as a mixture of the two class distributions:

$$p\big(I_d(x), T_{0d}^{k+1}(I_0(x)) \mid M^{k+1}(x)\big) = \big(1 - M^{k+1}(x)\big)\, p_N\big(I_d(x), T_{0d}^{k+1}(I_0(x))\big) + M^{k+1}(x)\, p_T\big(I_d(x), T_{0d}^{k+1}(I_0(x))\big) \quad (13)$$
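A sketch of the per-voxel mixture of Eq. (13): the tumor map M weights the tumor-class density against the normal-class density. The callables pN_fn and pT_fn stand for Eqs. (14)-(15), sketched further below.

```python
import numpy as np

def intensity_log_likelihood(Id, warped_I0, M, pN_fn, pT_fn, eps=1e-12):
    # Log of Eq. (12) with the mixture of Eq. (13) plugged in at every voxel.
    mix = (1.0 - M) * pN_fn(Id, warped_I0) + M * pT_fn(Id, warped_I0)
    return np.sum(np.log(mix + eps))
```
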
Normal Tissue Class

Across the treatment procedure, the tumor undergoes a regression process. When the tumor shrinks, part of the tumor returns to an intensity level close to that of normal tissue. Therefore, the normal tissue in the treatment day MR has two origins: one is normal tissue in the planning day MR, and the other was tumor at planning day but returned to normal due to tumor regression. We choose two different probability models for these two types. The histograms of normal cervical tissue and tumor are shown in Fig. 1. From clinical research [6] and Fig. 1, we can see that tumor intensities are generally much higher than those of normal cervical tissue.

Fig. 1. The histograms of normal cervix and tumor

Therefore, for the sake of simplicity, we characterize normal tissue of the second type (formerly tumor that has returned to normal) as areas with much lower intensity in the treatment day MR [6]. We assume such a voxel in the day d MR can match any voxel in the day 0 tumor with equal probability, and use a uniform distribution. The remaining voxels are labeled as type 1 normal tissue (normal since the planning day), which is modeled with a discrete Gaussian distribution across the corresponding voxel locations.

$$p_N\big(I_d(x), T_{0d}^{k+1}(I_0(x))\big) = \begin{cases} 1/c, & T_{0d}^{k+1}(I_0(x)) - I_d(x) > \Delta \\[4pt] \dfrac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\dfrac{\big|T_{0d}^{k+1}(I_0(x)) - I_d(x)\big|^2}{2\sigma^2}\right), & \text{otherwise} \end{cases} \quad (14)$$

where c is the number of voxels in the day 0 tumor, and Δ is the predetermined threshold used to differentiate the intensity of normal and tumor tissue.

Tumor Class

The definition of the tumor distribution is a difficult task. Similar to normal tissue, the tumor tissue in the treatment day MR also has two origins. One is tumor tissue in the planning day MR, which represents the tumor remaining after radiotherapy. We assume a voxel in the residual tumor can match any voxel in the initial (day 0) tumor with equal probability and use a uniform distribution. The other origin of the day d tumor class is normal tissue in the planning MR that later grows into malignant tumor; this type is characterized by much higher intensity in the day d image [6]. We assume that each normal voxel in the day 0 MR can turn into a tumor voxel with equal probability. This type is therefore also modeled with a uniform distribution, but with lower probability, since the chance of this deterioration over the course of radiotherapy is relatively small.

$$p_T\big(I_d(x), T_{0d}^{k+1}(I_0(x))\big) = \begin{cases} 1/(V - c), & T_{0d}^{k+1}(I_0(x)) - I_d(x) < \Delta \\[4pt] 1/c, & \text{otherwise} \end{cases} \quad (15)$$

where V is the total number of voxels in the MR image and c is the number of voxels in the day 0 tumor (the same c as in Eq. (14)).
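Sketches of the class-conditional densities in Eqs. (14)-(15). V is the total number of voxels, c the number of voxels in the day-0 tumor, and delta the intensity threshold (150 in the experiments); the Gaussian width sigma is an assumed value, not given explicitly in the text.

```python
import numpy as np

def p_normal(Id, warped_I0, c, delta=150.0, sigma=20.0):
    # Eq. (14): uniform for formerly-tumor voxels, Gaussian for always-normal voxels.
    diff = warped_I0 - Id
    gauss = np.exp(-diff ** 2 / (2.0 * sigma ** 2)) / (np.sqrt(2.0 * np.pi) * sigma)
    return np.where(diff > delta, 1.0 / c, gauss)

def p_tumor(Id, warped_I0, V, c, delta=150.0):
    # Eq. (15): newly-grown tumor (from normal tissue) vs. residual tumor.
    diff = warped_I0 - Id
    return np.where(diff < delta, 1.0 / (V - c), 1.0 / c)
```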

Non-negative Jacobian

We want the transformation to be non-singular; hence we place a large penalty on negative Jacobians for normal tissue [8]. Meanwhile, we simulate the tumor regression process by constraining the determinant of the transformation's Jacobian in tumor regions (where M(x) is high) to lie between 0 and 1.

$$p\big(T_{0d}(x) \mid M(x)\big) = \big(1 - M(x)\big)\, p_N\big(T_{0d}(x)\big) + M(x)\, p_T\big(T_{0d}(x)\big) \quad (16)$$

where pN and pT are computed using continuous logistic functions (with ε = 0.1):

$$p_N\big(T_{0d}(x)\big) = \frac{1}{V - c}\left(\frac{1}{1 + \exp\!\big(-|J(T_{0d}(x))| / \varepsilon\big)}\right)^{2} \quad (17)$$
$$p_T\big(T_{0d}(x)\big) = \frac{1}{c}\left(\frac{1}{1 + \exp\!\big(-|J(T_{0d}(x))| / \varepsilon\big)}\right)^{2}\left(\frac{1}{1 + \exp\!\big(-(1 - |J(T_{0d}(x))|) / \varepsilon\big)}\right)^{2} \quad (18)$$

V and c are defined as above and are used to normalize the probability density functions. These constraints penalize negative Jacobians, and pT simulates the tumor regression process, thus reducing the probability of folding in the registration maps.
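A sketch of the Jacobian-based priors in Eqs. (16)-(18). The transformation is assumed here to be a dense displacement field u of shape (3, X, Y, Z), with the determinant of the Jacobian of T(x) = x + u(x) computed by finite differences; the product form used for Eq. (18) is a reconstruction consistent with keeping the determinant between 0 and 1 in tumor regions.

```python
import numpy as np

def jacobian_determinant(u):
    grads = [np.gradient(u[i]) for i in range(3)]        # grads[i][j] = d u_i / d x_j
    J = np.zeros(u.shape[1:] + (3, 3))
    for i in range(3):
        for j in range(3):
            J[..., i, j] = grads[i][j] + (1.0 if i == j else 0.0)
    return np.linalg.det(J)

def p_normal_jacobian(detJ, V, c, eps=0.1):
    # Eq. (17): rewards positive determinants (no folding) for normal tissue.
    return (1.0 / (V - c)) * (1.0 / (1.0 + np.exp(-detJ / eps))) ** 2

def p_tumor_jacobian(detJ, c, eps=0.1):
    # Eq. (18): rewards determinants in (0, 1), modeling local tumor regression.
    rise = (1.0 / (1.0 + np.exp(-detJ / eps))) ** 2
    fall = (1.0 / (1.0 + np.exp(-(1.0 - detJ) / eps))) ** 2
    return (1.0 / c) * rise * fall
```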

Tumor Map Prior

We assume that the tumor map arises from a Gibbs distribution. Many specific terms can be defined to describe the spatial configurations of different types of lesions. Previous detection approaches tend to over-detect (to find more regions than actually exist) [13]; hence, in this paper, we use an energy term restricting the total number of abnormal voxels in the image.

$$p(M) = \frac{1}{Z}\, e^{-U(M)} = \frac{1}{Z}\, e^{-\sum_x M(x)} \quad (19)$$

Energy Function

Combining the above equations, we introduce:

$$E_{reg\text{-}det}(T, M) = -\ln p\big(T_{0d}^{k+1}, M^{k+1} \mid S_d^k, S_0, I_d, I_0\big) = \sum_{obj=1}^{N} \omega_{obj} \int_x \big[\Psi_{T_{0d}^{k+1}(S_{0,obj})} - \Psi_{S_{d,obj}^k}\big]^2\, dx - \int_x \ln p\big(I_d(x), T_{0d}^{k+1}(I_0(x)) \mid M^{k+1}(x)\big)\, dx - \int_x \ln p\big(T_{0d}(x) \mid M(x)\big)\, dx + \int_x M(x)\, dx \quad (20)$$

T0dk+1 and Mk+1 are estimated using a conjugate gradient method to minimize Eq. (20). Eqs. (9) and (20) are minimized alternately until convergence, so that the day d soft tissue segmentation, the nonrigid registration and the tumor detection benefit from one another and are estimated simultaneously.
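A sketch (not the authors' implementation) of this second stage: the displacement-field parameters T and the tumor map M are packed into one vector and Eq. (20) is minimized with a conjugate-gradient routine. E_regdet stands for any callable evaluating Eq. (20) for a given (T, M) pair.

```python
import numpy as np
from scipy.optimize import minimize

def minimize_reg_det(E_regdet, T0, M0, max_iter=50):
    nT = T0.size

    def unpack(x):
        T = x[:nT].reshape(T0.shape)
        M = np.clip(x[nT:].reshape(M0.shape), 0.0, 1.0)   # keep M a valid probability map
        return T, M

    def objective(x):
        return E_regdet(*unpack(x))

    x0 = np.concatenate([T0.ravel(), M0.ravel()])
    res = minimize(objective, x0, method='CG', options={'maxiter': max_iter})
    return unpack(res.x)
```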

3 Results

We tested our proposed method on 30 sets of MR data acquired from five different patients undergoing EBRT for cervical cancer at Princess Margaret Hospital. Each patient had six weekly 3D MR images. A GE EXCITE 1.5-T magnet with a torso coil was used in all cases. T2-weighted, fast spin echo images (voxel size 0.36mm × 0.36mm × 5mm, image dimension 512 × 512 × 38) were acquired. The clinician performed the bias field correction so that the intensities could be directly compared, and Δ in Eqs. (14) and (15) was chosen to be 150. The MR images were resliced to be isotropic. We adopted a "leave-one-out" scheme so that no tested image appeared in its own training set.

Segmentation Results

Fig. 2(a) shows an example MR data set with extracted 3D bladder (green) and uterus (purple) surfaces obtained using the proposed method. Fig. 2(b) and (c) show the segmentations of the bladder and uterus in axial view, respectively, compared with manual segmentation.

Fig. 2. Segmentation results. (a) An example of MR data set with automatically extracted bladder (green) and uterus (purple) surfaces. (b) Comparing bladder segmentations using the proposed algorithm (green) and manual segmentation (red). (c) Uterus results using the proposed algorithm (green) and manual segmentation (red).

We compare the experimental results from our algorithm with those obtained from segmentation using traditional level set active shape models (without priors, and with shape priors only). To quantify the accuracy of our approach, we use the mean absolute distance (MAD) and the Hausdorff distance (HD) to evaluate the segmentation performance. While MAD is a global measure of the match between two surfaces, HD reflects their local similarity. Tables 1 and 2 quantitatively analyze the segmentation results on the bladder and uterus surfaces, respectively. Both MAD and HD decrease with the proposed method, which indicates that our method has consistent agreement with the manual segmentation.
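A sketch of the two surface metrics of Tables 1-2, computed between point samplings of the automatic and manual surfaces (N x 3 arrays of coordinates in mm); the symmetric averaging used here is an assumption.

```python
import numpy as np
from scipy.spatial.distance import cdist

def surface_distances(points_a, points_b):
    d = cdist(points_a, points_b)                 # pairwise Euclidean distances
    a_to_b, b_to_a = d.min(axis=1), d.min(axis=0)
    mad = 0.5 * (a_to_b.mean() + b_to_a.mean())   # mean absolute distance (MAD)
    hd = max(a_to_b.max(), b_to_a.max())          # Hausdorff distance (HD)
    return mad, hd
```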

Table 1.

Evaluation of Segmentation of Bladder

Method             MAD (mm)       HD (mm)
Without prior      7.61 ± 2.69    8.63 ± 3.18
With shape prior   3.28 ± 1.35    4.10 ± 1.41
Proposed method    0.82 ± 0.10    1.20 ± 0.15

Table 2.

Evaluation of Segmentation of Uterus

Method             MAD (mm)       HD (mm)
Without prior      10.57 ± 6.19   12.13 ± 5.88
With shape prior   6.11 ± 1.34    7.28 ± 1.21
Proposed method    1.26 ± 0.38    1.92 ± 0.75

Registration Results

The mapping performance of the proposed algorithm was also evaluated. For comparison, a conventional intensity-based FFD nonrigid registration (NRR) [16] and a rigid registration (RR) were performed on the same sets of real patient data. The corresponding difference images of the deformed bladder are shown in Fig. 3.

Fig. 3. Example of different transformations on the registration for one patient. (a) Difference image of bladder after rigid registration. (b) After nonrigid registration. (c) After the proposed method.

Organ overlaps between the ground truth at day d and the organs transformed from day 0 were used as metrics to assess the quality of the registration (Table 3). We also tracked the registration error between the day d ground truth and the transformed day 0 organs for the bladder and uterus, as shown in Table 4. The registration error is reported as the percentage of false positives (PFP), i.e. the percentage of non-matches declared to be matches. Let ΩA and ΩB be the two regions enclosed by surfaces A and B, respectively; PFP is defined as follows:

$$PFP = \frac{\mathrm{Volume}(\Omega_B) - \mathrm{Volume}(\Omega_A \cap \Omega_B)}{\mathrm{Volume}(\Omega_A)}.$$
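A sketch of the overlap and PFP measures of Tables 3-4, computed from binary masks: omega_a is the ground-truth day-d organ and omega_b the organ transformed from day 0. Defining overlap as intersection over ground-truth volume is an assumption; PFP follows the formula above.

```python
import numpy as np

def overlap_and_pfp(omega_a, omega_b):
    inter = np.logical_and(omega_a, omega_b).sum()
    overlap = 100.0 * inter / omega_a.sum()               # organ overlap (%)
    pfp = 100.0 * (omega_b.sum() - inter) / omega_a.sum() # percentage of false positives (%)
    return overlap, pfp
```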

Table 3.

Evaluation of Registration: Organ Overlaps (%)

Method                  Bladder         Uterus
Rigid Registration      65.76 ± 6.69    68.32 ± 5.18
Nonrigid Registration   80.34 ± 2.39    77.28 ± 3.84
Proposed Method         90.58 ± 1.95    87.80 ± 2.24

Table 4.

Evaluation of Registration Error: PFP (%)

Method                  Bladder         Uterus
Rigid Registration      33.57 ± 5.34    30.92 ± 5.22
Nonrigid Registration   19.21 ± 3.41    21.78 ± 4.03
Proposed Method         9.36 ± 1.40     12.06 ± 1.96

From Tables 3 and 4, we find that RR performed the poorest of all the registration algorithms, while the proposed method significantly outperformed NRR at aligning the segmented organs.

Detection Results

We compare the tumor images obtained using our method with the manual detection performed by a clinician. For the proposed method, we set a threshold on the tumor probability map M. Fig. 4(b) shows the detection obtained by relying on the histogram in Fig. 1 alone, which cannot provide a meaningful result. Fig. 4(c)–(h) provide the comparison between the proposed method and the detection by an expert. The tumor binary images are obtained with a probability threshold of 0.7, which presents all the voxels that have an over 70% chance of being in the tumor. From our experiments, we have found that the detection results are not sensitive to the threshold: thresholds between 0.5 and 0.85 give almost the same detection results.

Fig. 4. Detection of cervical tumor for a patient. (a) 3D surface of the detected tumor using our method. (b) Detection using the intensity information according to the histogram in Fig. 1. (c)–(h): Comparison. Top: tumor outlined manually by a clinician. Bottom: detection results using our unified algorithm. Results from the first, third, and fifth week of treatment are shown.

From the experiments, we find that the tumor shape has a strong influence on detection performance. As a general trend, the algorithm detects well-defined masses (e.g. Fig. 4(c)(d)) more accurately than ill-defined ones (e.g. Fig. 4(e)(f)). Meanwhile, the tumor size has a weak influence on detection performance: there is no specific size at which the algorithm performs poorly, i.e. the algorithm is not sensitive to tumor size, as shown in Fig. 4.

Detections of the algorithm are considered true positives if they overlap a ground truth (manual detection), and false positives otherwise. Free-response receiver operating characteristic (FROC) curves [13] are produced to validate the algorithm. For the FROC analysis, a connected component algorithm is run to group neighboring voxels into a single detection. The algorithm achieved 90% sensitivity at 6.0 false positives per case. In comparison, the algorithm from [15] gave only 74% sensitivity at the same false positive rate. The corresponding FROC curves are shown in Fig. 5. The proposed method shows better specificity and achieves a higher overall maximum sensitivity.
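A sketch of one FROC operating point under these conventions: threshold the tumor probability map, group voxels into connected components, and score each detection against the manual ground-truth mask.

```python
import numpy as np
from scipy import ndimage

def froc_point(tumor_map, ground_truth, threshold=0.7):
    detections, n_det = ndimage.label(tumor_map >= threshold)
    gt_lesions, n_gt = ndimage.label(ground_truth)
    # False positives: detected components that touch no ground-truth voxel.
    fp = sum(1 for d in range(1, n_det + 1)
             if not ground_truth[detections == d].any())
    # Sensitivity: fraction of ground-truth lesions hit by some detection.
    hit = sum(1 for g in range(1, n_gt + 1)
              if (tumor_map[gt_lesions == g] >= threshold).any())
    return hit / max(n_gt, 1), fp
```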

Fig. 5. FROC curves, compared with the approach in [15]

Fig. 6 shows a representative slice from the planning day MRI with the tumor detection contours from the five treatment days overlaid. The detected tumor appears in the same slice and the same location in the registered serial treatment images, and the tumor regression process is clearly visible from the contours. Using the proposed technique, we can easily segment the organs of interest, estimate the mapping between day 0 and day d, and calculate the location changes of the tumor for diagnosis and assessment, thus precisely guiding the interventional devices toward the lesion during image guided therapy.

Fig. 6. Mapping the detected tumor contours to planning day MRI

4 Conclusion

In this paper, a unified framework for simultaneous segmentation, nonrigid registration and tumor detection has been presented. In the segmentation process, the surfaces evolve according to constraints from the deformed contours, the image gray level information, and prior shape information. The constrained nonrigid registration part matches organs and intensity information, respectively, while taking the tumor detection into consideration. We define the intensity matching as a mixture of two distributions which statistically describe the image gray-level variations of the two voxel classes (i.e. the tumor class and the normal tissue class). These mixture distributions are weighted by the tumor detection map, which assigns to each voxel its probability of abnormality. We also constrain the determinant of the transformation's Jacobian, which guarantees that the transformation is smooth and simulates the tumor regression process. In the future, we plan to develop a system that incorporates a physical tumor regression model.

Footnotes

* This work is supported by NIH/NIBIB Grant R01EB002164.

Contributor Information

Chao Lu, Email: chao.lu@yale.edu.

Sudhakar Chelikani, Email: sudhakar.chelikani@yale.edu.

James S. Duncan, Email: james.duncan@yale.edu.

References

1. Besag, J.: On the statistical analysis of dirty pictures. Journal of the Royal Statistical Society, Series B (Methodological) 48(3), 259–302 (1986)
2. Chan, T.F., Vese, L.A.: Active contours without edges. IEEE Transactions on Image Processing 10(2), 266–277 (2001). doi: 10.1109/83.902291
3. Chelikani, S., Purushothaman, K., Knisely, J., Chen, Z., Nath, R., Bansal, R., Duncan, J.S.: A gradient feature weighted minimax algorithm for registration of multiple portal images to 3DCT volumes in prostate radiotherapy. Int. J. Radiation Oncology Biol. Phys. 65(2), 535–547 (2006). doi: 10.1016/j.ijrobp.2005.12.032
4. Greene, W.H., Chelikani, S., Purushothaman, K., Chen, Z., Papademetris, X., Staib, L.H., Duncan, J.S.: Constrained non-rigid registration for use in image-guided adaptive radiotherapy. Medical Image Analysis 13(5), 809–817 (2009). doi: 10.1016/j.media.2009.07.004
5. Hachama, M., Desolneux, A., Richard, F.: Combining registration and abnormality detection in mammography. In: Pluim, J.P.W., Likar, B., Gerritsen, F.A. (eds.) WBIR 2006. LNCS, vol. 4057, pp. 178–185. Springer, Heidelberg (2006)
6. Hamm, B., Forstner, R.: MRI and CT of the Female Pelvis, 1st edn., chap. 3.2.1, General MR Appearance, p. 139. Springer, Heidelberg (2007)
7. Jaffray, D.A., Carlone, M., Menard, C., Breen, S.: Image-guided radiation therapy: emergence of MR-guided radiation treatment (MRgRT) systems. In: Medical Imaging 2010: Physics of Medical Imaging, vol. 7622, pp. 1–12 (2010)
8. Joshi, A., Leahy, R., Toga, A., Shattuck, D.: A framework for brain registration via simultaneous surface and volume flow. In: Prince, J.L., Pham, D.L., Myers, K.J. (eds.) IPMI 2009. LNCS, vol. 5636, pp. 576–588. Springer, Heidelberg (2009)
9. Leventon, M., Grimson, W., Faugeras, O.: Statistical shape influence in geodesic active contours. In: 2000 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), vol. 1, pp. 316–323 (2000)
10. Lu, C., Chelikani, S., Papademetris, X., Staib, L., Duncan, J.: Constrained non-rigid registration using Lagrange multipliers for application in prostate radiotherapy. In: 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 133–138 (June 2010)
11. Lu, C., Chelikani, S., Chen, Z., Papademetris, X., Staib, L.H., Duncan, J.S.: Integrated segmentation and nonrigid registration for application in prostate image-guided radiotherapy. In: Jiang, T., Navab, N., Pluim, J.P.W., Viergever, M.A. (eds.) MICCAI 2010. LNCS, vol. 6361, pp. 53–60. Springer, Heidelberg (2010)
12. Nag, S., Chao, C., Martinez, A., Thomadsen, B.: The American Brachytherapy Society recommendations for low-dose-rate brachytherapy for carcinoma of the cervix. Int. J. Radiation Oncology Biology Physics 52(1), 33–48 (2002). doi: 10.1016/s0360-3016(01)01755-2
13. Oliver, A., Freixenet, J., Marti, J., Perez, E., Pont, J., Denton, E.R.E., Zwiggelaar, R.: A review of automatic mass detection and segmentation in mammographic images. Medical Image Analysis 14(2), 87–110 (2010). doi: 10.1016/j.media.2009.12.005
14. Pohl, K.M., Fisher, J., Levitt, J., Shenton, M., Kikinis, R., Grimson, W., Wells, W.: A unifying approach to registration, segmentation, and intensity correction. In: Duncan, J.S., Gerig, G. (eds.) MICCAI 2005. LNCS, vol. 3749, pp. 310–318. Springer, Heidelberg (2005)
15. Richard, F.: A new approach for the registration of images with inconsistent differences. In: 2004 International Conference on Pattern Recognition (ICPR), vol. 4, pp. 649–652 (August 2004)
16. Rueckert, D., Sonoda, L.I., Hayes, C., Hill, D.L.G., Leach, M.O., Hawkes, D.J.: Nonrigid registration using free-form deformations: application to breast MR images. IEEE Transactions on Medical Imaging 18(8), 712–721 (1999). doi: 10.1109/42.796284
17. Yezzi, A., Zollei, L., Kapur, T.: A variational framework for integrating segmentation and registration through active contours. Medical Image Analysis 7(2), 171–185 (2003). doi: 10.1016/s1361-8415(03)00004-5
18. Zhang, J., Rangarajan, A.: Bayesian multimodality non-rigid image registration via conditional density estimation. In: Taylor, C.J., Noble, J.A. (eds.) IPMI 2003. LNCS, vol. 2732, pp. 499–511. Springer, Heidelberg (2003)
