Medical Physics. 2013 Mar 8;40(4):042301. doi: 10.1118/1.4793255

Automated chest wall line detection for whole-breast segmentation in sagittal breast MR images

Shandong Wu 1,a), Susan P Weinstein 1, Emily F Conant 1, Mitchell D Schnall 1, Despina Kontos 1
PMCID: PMC3606236  PMID: 23556914

Abstract

Purpose: Breast magnetic resonance imaging (MRI) plays an important role in the clinical management of breast cancer. Computerized analysis is increasingly used to quantify breast MRI features in applications such as computer-aided lesion detection and fibroglandular tissue estimation for breast cancer risk assessment. Automated segmentation of the whole breast as an organ from the other imaged body parts is an important step in aiding lesion localization and fibroglandular tissue quantification. For this task, identifying the chest wall line (CWL) is the most challenging step due to image contrast variations, intensity discontinuity, and bias field.

Methods: In this work, the authors develop and validate a fully automated image processing algorithm for accurate delineation of the CWL in sagittal breast MRI. The CWL detection is based on an integrated scheme of edge extraction and CWL candidate evaluation. The edge extraction consists of applying edge-enhancing filters and an edge linking algorithm. Increased accuracy is achieved by the synergistic use of multiple image inputs for edge extraction, where multiple CWL candidates are evaluated by the dynamic time warping algorithm coupled with the construction of a CWL reference. Their method is quantitatively validated on a dataset of 60 3D bilateral sagittal breast MRI scans (3360 2D MR slices in total) that spans the full American College of Radiology Breast Imaging Reporting and Data System (BI-RADS) breast density range. Agreement with manual segmentation obtained by an experienced breast imaging radiologist is assessed by both volumetric and boundary-based metrics, comprising four quantitative measures.

Results: In terms of breast volume agreement with manual segmentation, the overlay percentage expressed by the Dice similarity coefficient is 95.0% and the difference percentage is 10.1%. More specifically, for the segmentation accuracy of the CWL boundary, the CWL overlay percentage is 92.7% and the averaged deviation distance is 2.3 mm. Their method requires ∼4.5 min to segment each 3D breast MRI scan (56 slices), compared to ∼35 min required for manual segmentation. Further analysis indicates that the segmentation performance of their method is relatively stable across the different BI-RADS density categories and breast volumes, and is also robust with respect to a varying range of the major algorithm parameters.

Conclusions: Their fully automated method achieves high segmentation accuracy in a time-efficient manner. It could support large scale quantitative breast MRI analysis and holds the potential to become integrated into the clinical workflow for breast cancer clinical applications in the future.

Keywords: magnetic resonance imaging (MRI), breast, segmentation, chest wall line, edge extraction

INTRODUCTION

Breast magnetic resonance imaging (MRI) has emerged as an effective modality for the clinical management of breast cancer in screening,1 diagnosis,2 staging,3, 4 and assessment of treatment.5 In addition to lesion detection and characterization in diagnostic tasks, screening studies suggest that characteristics of the breast tissue as visualized in breast MRI, such as the relative amount of fibroglandular tissue in the breast6, 7, 8 and the amount of background parenchymal enhancement9, 10, 11, 12 in dynamic contrast enhanced (DCE)-MRI, are associated with breast cancer risk. Computerized algorithms are increasingly used to quantify image features in various breast MRI applications, including lesion localization in computer-aided detection13, 14 and fibroglandular tissue quantification for breast cancer risk estimation.7, 15 A critical and computationally challenging step for fully automated computerized breast MRI analysis is to separate the whole breast as an organ from the other parts of the body in the image (Fig. 1; in our definition the pectoral muscle is not included as a part of the breast). For this task, the segmentation algorithm must identify both the air-breast interface and the chest wall line (CWL), the latter of which is the most challenging part for several reasons (Fig. 2). First, the breast tissue and the chest wall muscle surrounding the CWL can have similar intensity ranges, rendering precise localization of the CWL difficult. Second, although the CWL can generally be perceived as visually continuous, it often presents with intensity discontinuities, due to fibroglandular tissue crossing the CWL or attenuated signal of the chest wall muscle; this introduces additional ambiguity in searching for a complete CWL. When a high percentage of fibroglandular tissue is present, these two challenges intensify, making the segmentation problem even more complex. In addition, the bias field in breast MR imaging, which arises from imperfections of the image acquisition process,16 poses a further challenge for the segmentation algorithm due to the intensity inhomogeneity present in the image.17

Figure 1.

Figure 1

The problem of breast segmentation in MR imaging. (a) A breast MRI slice. (b) The segmentation consists of identifying the air-breast interface (left-side contour) and CWL (right-side contour). (c) The final segmented breast.

Figure 2.

Figure 2

Examples of breast MR images elaborating on different challenging aspects of the CWL detection. In these images the fibroglandular (e.g., dense) tissue appears darker than fat in the breast. The relevant areas of concern are highlighted by rectangles. The intensity contrast around the CWLs is low in (a) and (b). CWL discontinuity and highly dense tissue are present in (b) and (c).

Previous studies have addressed this problem with different approaches. Most commonly, the breast is segmented using semiautomated, user-assisted methods, in which the segmentation results can be subjective.7, 8, 15, 18, 19, 20 Few automated methods have been reported. Of these, methods that rely on simple intensity operations, such as thresholding21, 22 or the sign of gradients,23 tend to fail for very low image contrast and very dense breasts. Model (or atlas)-based segmentation methods, such as oriented active shape models,24 statistical shape models,25 muscle-slab models,26 and breast atlases,27 have shown good performance since they benefit from prior learning of anatomical or statistical knowledge. However, the performance of such methods inherently depends on a training dataset, and the size of the training sample is critical for achieving reasonable accuracy. In addition, many of the previously reported methods have not undergone comprehensive validation, partially because they were primarily developed as a rough preprocessing step within the context of more general applications. In the work by Ertas et al.,21 only two 3D bilateral scans were used for experimental validation, and Wang et al.28 used 60 2D images (from two 3D bilateral scans). Segmentation of the chest wall boundary was simplified to a coarse estimation of the closed breast profile in the work by Hayton et al.,29 and Moon et al.30 segmented the breast roughly by a rectangular bounding box. Only qualitative validation was performed in some of the relevant work.21, 29, 30 While such approaches may be reasonable for studies with small datasets, they can be limited for reliably processing large-scale datasets with large variation in breast density.

In this work, we develop and validate a fully automated breast CWL detection method for breast segmentation in sagittal view images. This work represents an extension of our previous preliminary study.31 Compared to our earlier work, in this paper we present a complete methodological framework with enhanced algorithmic modules as well as an extended validation by using a significantly larger dataset and additional validation metrics. In our method, the CWL is detected through edge enhancing and linking, while increased accuracy is achieved by the synergistic use of multiple image inputs in conjunction with the integrated CWL candidate evaluation based on the dynamic time warping algorithm. Our method is validated with a representative dataset of 60 3D bilateral sagittal breast MRI scans (56 slices per scan; 56 × 60 = 3360 2D slices in total) spanning the entire American College of Radiology (ACR) Breast Imaging-Reporting and Data System (BI-RADS) breast density range,32, 33 achieving a high segmentation accuracy in a time-efficient manner.

METHODS

Our method is implemented specifically for sagittal view breast MR images. Given a 3D breast MRI scan, we first identify the air-breast interface in each 2D slice. This is a relatively straightforward step for which we use previously validated preprocessing techniques, including thresholding, image morphological opening, and contour extraction.31 Once the air-breast interface is identified, the chest wall line detection proceeds in three main steps: (1) a general chest wall region is selected as the region of interest (ROI) for searching for the CWL; (2) an edge map is created for the ROI and a CWL candidate is extracted from this edge map, where three image inputs of the ROI (i.e., the original image as well as two enhanced images obtained by separately applying two nonlinear smoothing filters) are synergistically used to identify three CWL candidates; and (3) the final CWL segmentation is determined via the dynamic time warping algorithm, in which a CWL reference is constructed and used to evaluate each of the three CWL candidates. These three steps are presented sequentially in Secs. 2A, 2B, 2C.

ROI selection: The general chest wall region

In sagittal MR imaging, CWLs tend to appear in a small posterior region of the breast in every MR slice of the 3D scan.34 We identify a narrowed region, the general chest wall region, as the ROI to constrain the search for the CWL. For this purpose, we calculate two vertical separation lines, namely, the anterior and posterior separation lines; the region between these two lines is considered the general chest wall region (Fig. 3).

Figure 3.

Figure 3

An example of the ROI selection for searching for the CWL. (a) The left and right lines represent the anterior and posterior separation lines, respectively. (b) The selected ROI is defined as the general chest wall region.

The anterior separation line (left line in Fig. 3) is derived from the boundary slices, i.e., those that visualize no or very little breast tissue. Assume the breast MRI series has M slices and, for each slice i, i ∈ [1, M], the area of the breast region circumscribed by the closed air-breast interface is denoted by $S_i$. We first identify the boundary slices as those satisfying the relative breast area ratio condition $\rho_i = S_i / \max_{k \in [1,M]} S_k < \rho_0$, where $\rho_0$ is a threshold. As the air-breast interface in boundary slices tends to be a roughly vertical straight line, the anterior separation line is calculated as a straight line in the vertical direction (i.e., the Y axis; refer to the coordinate system in Fig. 3) whose location in the horizontal direction (i.e., the X axis) is given by $\frac{1}{I}\sum_{i=1}^{I} X_m^i$, where I denotes the number of identified boundary slices and $X_m^i$ is the mean of the X coordinates of the air-breast interface in boundary slice i. Likewise, the posterior separation line (right line in Fig. 3) is derived from the nonboundary slices (i.e., all breast slices excluding the identified boundary slices). For a nonboundary slice i, denoting the X coordinates of the two end points of the air-breast interface by $X_{e1}^i$ and $X_{e2}^i$, respectively, the posterior separation line is the vertical straight line positioned at $\max_{i \notin I}\{\max(X_{e1}^i, X_{e2}^i)\}$. Having identified the anterior and posterior separation lines, the breast area between them covers the general span of the CWL search space. Any breast area outside the two separation lines is excluded from the subsequent CWL search, which significantly reduces possible distractions from complex anatomical breast structures, dense fibroglandular tissue, intensity inhomogeneities, and the imaged parts of the chest cavity.
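To make this step concrete, the following Python sketch (the authors' implementation was in MATLAB) computes the two separation lines from per-slice air-breast interfaces. The input representation (ordered point arrays and per-slice areas) and the default value of the threshold are assumptions for illustration; the paper does not state its default ρ0.

```python
import numpy as np

def separation_lines(interfaces, areas, rho0=0.3):
    """Sketch of the anterior/posterior separation-line computation.

    interfaces : list of (K_i, 2) arrays of [x, y] air-breast interface
                 points per slice, ordered from one end point to the
                 other (hypothetical input format).
    areas      : per-slice breast areas S_i enclosed by the closed
                 air-breast interface.
    rho0       : relative breast area ratio threshold (value assumed).
    """
    areas = np.asarray(areas, dtype=float)
    rho = areas / areas.max()
    boundary = np.flatnonzero(rho < rho0)      # boundary slices
    nonboundary = np.flatnonzero(rho >= rho0)  # all remaining slices

    # Anterior line: average over boundary slices of the mean interface
    # x-coordinate (their interfaces are roughly vertical lines).
    x_anterior = np.mean([interfaces[i][:, 0].mean() for i in boundary])

    # Posterior line: posterior-most interface end point over the
    # nonboundary slices.
    x_posterior = max(max(interfaces[i][0, 0], interfaces[i][-1, 0])
                      for i in nonboundary)
    return x_anterior, x_posterior
```

Only the image columns between the two returned x-positions are passed on to the CWL search.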

CWL edge extraction

Anatomically, a CWL can be visually perceived as the longest vertically linked edge formed by tissue intensity contrast. Therefore, the main idea of CWL detection is to extract the edge that corresponds to the CWL from an edge map generated from the general chest wall region of each slice. While ideally the longest vertical edge in the edge map should reflect the CWL, extracting a single edge that corresponds to a complete and precise CWL is nontrivial, due to problems such as intensity inhomogeneity, low contrast, and edge discontinuities (see examples in Fig. 2). To overcome these problems, we adopt an integrated approach of edge enhancement (Sec. 2B1) and edge linking (Sec. 2B2) to increase the robustness of the CWL edge extraction.

CWL edge enhancement

Prior to generating the edge maps, we separately apply two nonlinear smoothing filters, an anisotropic diffusion filter35 and a bilateral filter,36 to the general chest wall region of the original MR images. The two filters reduce noise and, more importantly, enhance edge features so that improved edge maps can be produced. The anisotropic diffusion filter encourages smoothing over points with low brightness variations (e.g., within a region) while suppressing smoothing across edges of high brightness variation.35 The bilateral filter is a neighborhood filter that combines traditional low-pass filtering with range filtering in the intensity domain.36 It emphasizes neighboring points that are both spatially close and similar in intensity; hence, edges are preserved since intensity variations across edges are usually large.

For each MRI slice we generate three edge maps using three different image inputs: the original image of the general chest wall region and the two images filtered by the anisotropic diffusion and bilateral filters, respectively (Fig. 4). The edge maps are computed as Canny edges,37 for which we used the “edge” function in MATLAB (v. R2011b; Mathworks, Natick, MA) with default parameters. For the two smoothing filters, we used either the parameters suggested by the original algorithm implementations35, 36 or values from previous studies.38, 39 Specifically, the iteration number and conduction coefficient were set to 30 and 20, respectively, for the anisotropic diffusion filter; for the bilateral filter, the radius of the local patch was set to 5.
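The filtering and edge-map step can be sketched in Python as follows (the authors used MATLAB). The Perona–Malik diffusion is written directly from the scheme in Ref. 35 with the iteration number and conduction coefficient stated above; the bilateral filter and Canny detector come from scikit-image, where the mapping of the paper's patch radius of 5 to win_size = 11, the sigma values, and the diffusion step size gamma are assumptions.

```python
import numpy as np
from skimage.feature import canny
from skimage.restoration import denoise_bilateral

def perona_malik(img, n_iter=30, kappa=20.0, gamma=0.2):
    """Perona-Malik anisotropic diffusion (Ref. 35). Borders are
    handled by wraparound via np.roll, a simplification."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # Nearest-neighbor finite differences in four directions.
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # Exponential conduction function suppresses diffusion across
        # high-gradient (edge) locations while smoothing within regions.
        u += gamma * sum(np.exp(-(d / kappa) ** 2) * d
                         for d in (dn, ds, de, dw))
    return u

def three_edge_maps(roi):
    """Return Canny edge maps for the original, anisotropic diffusion
    filtered, and bilateral filtered ROI (the three image inputs)."""
    roi = roi.astype(float) / roi.max()
    smoothed_ad = perona_malik(roi)
    # win_size = 2 * radius + 1 maps the paper's patch radius of 5;
    # sigma_spatial here is an illustrative assumption.
    smoothed_bl = denoise_bilateral(roi, win_size=11, sigma_spatial=5)
    return [canny(im) for im in (roi, smoothed_ad, smoothed_bl)]
```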

Figure 4.

Figure 4

Three representative examples of Canny edge maps and the extracted edges corresponding to the CWL from original images, the anisotropic diffusion and bilateral filtered images. The first column [(a), (d), (g)] shows the original MR image, the second column [(b), (e), (h)] shows the anisotropic diffusion filtered image, and the third column [(c), (f), (i)] shows the bilateral filtered image. Each row shows results for one of the three representative cases demonstrating the synergistic strengths of the three image inputs in finding a complete edge corresponding to the CWL; specifically, the second example shows the strength of the anisotropic diffusion filter, while the third shows the strength of the bilateral filter.

CWL edge linking

In an edge map, a complete CWL may correspond to several smaller discontinuous but spatially adjoining edges. As the filtering performed in the previous step reduces noisy edges, it results in a clearer delineation of the predominant edges of major anatomical structures (i.e., the ones most likely to correspond to the CWL), thereby enabling the linking of discontinuous edges into a complete CWL. Drawing on perceptual grouping principles,40, 41 we design an edge linking algorithm that forms a complete CWL by selecting and linking a set of adjoining edges. The workflow of the algorithm is as follows. Given an edge map, the longest edge is extracted first. Then, starting from this longest edge, the algorithm iteratively locates and accumulates an immediately adjoining edge until no more suitable edges are found, based on two criteria: (1) the spatial proximity in terms of the distance between the terminal points of the current and the adjoining edge, and (2) the incline degree of the adjoining edge in terms of the slope of a straight line fitted to the adjoining edge. The maximal allowable spatial proximity and the minimal slope are set to 12 voxels and 3, respectively. The final accumulated edge is taken as the CWL candidate for that image type (i.e., original image, anisotropic diffusion filtered image, or bilateral filtered image). Figure 5 shows representative edge linking examples.
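A minimal sketch of this greedy linking loop is given below, assuming each Canny edge has already been traced into an ordered point array (this input representation and the helper name link_edges are illustrative; the two thresholds are the values stated above).

```python
import numpy as np

def link_edges(edges, max_gap=12.0, min_slope=3.0):
    """Greedy edge linking following the two criteria in the text.
    `edges` is a list of (K, 2) arrays of [x, y] points, each one
    traced edge from the Canny map, ordered end point to end point."""
    def slope(e):
        # |dy/dx| of a least-squares line fit; near-vertical edges
        # (negligible x-extent) get an effectively infinite slope.
        x, y = e[:, 0], e[:, 1]
        if np.ptp(x) < 1e-6:
            return np.inf
        return abs(np.polyfit(x, y, 1)[0])

    edges = sorted(edges, key=len, reverse=True)
    chain = edges.pop(0)              # start from the longest edge
    linked = True
    while linked and edges:
        linked = False
        for k, e in enumerate(edges):
            # Criterion 1: terminal-point proximity to the chain.
            gaps = [np.linalg.norm(chain[i] - e[j])
                    for i in (0, -1) for j in (0, -1)]
            # Criterion 2: the adjoining edge must be steep enough.
            if min(gaps) <= max_gap and slope(e) >= min_slope:
                chain = np.vstack([chain, edges.pop(k)])
                linked = True
                break
    # Order the accumulated points vertically: one CWL candidate.
    return chain[np.argsort(chain[:, 1])]
```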

Figure 5.

Figure 5

Three examples demonstrating the edge linking algorithm on (a) an original image, (b) an anisotropic diffusion filtered image, and (c) a bilateral filtered image. Each example first shows the located intermittent partial edges and then the single-linked complete edge corresponding to the CWL.

CWL candidate evaluation

Once we obtain a CWL candidate from each of the three image inputs, we proceed to find the best CWL segmentation, for which we use a general reference shape of the CWL to evaluate the goodness of each CWL candidate. For this purpose, we first generate such a CWL reference and then perform a quantitative comparison between each of the three candidates and the constructed CWL reference to find the best CWL segmentation.

Specifically, a CWL reference is generated by averaging all the CWL candidates for each breast. Given a bilateral 3D breast MR scan, the left and right breasts are automatically separated according to the position data of the MR slices (i.e., using the SliceLocation field in the DICOM header). Let $C_i^1$, $C_i^2$, and $C_i^3$ denote the three CWL candidates extracted from the original, anisotropic diffusion filtered, and bilateral filtered images of slice i, respectively. The CWL reference for each breast is produced in three steps (Fig. 6).

Figure 6.

Figure 6

Generation of the CWL reference. (a)–(c) The three CWL maps of the original, anisotropic diffusion, and bilateral filtered image inputs, respectively. Each CWL map represents the superimposed segmentation of the CWL candidates from the nonboundary slices for that breast. (d) The shared CWL map. (e) The CWL reference (solid curve). Note that the CWL map and the shared CWL map are 2D images and the CWL reference is a corresponding 2D curve.

First, for each of the three image inputs, all the CWL candidates from the N nonboundary slices are projected onto a single 2D space to create a CWL map [Figs. 6a, 6b, 6c]. The projection is implemented as a union operation, so that three corresponding CWL maps are created (denoted by $\mathrm{CM}_1$, $\mathrm{CM}_2$, and $\mathrm{CM}_3$),

$\mathrm{CM}_1=\bigcup_{i=1}^{N} C_i^1, \quad \mathrm{CM}_2=\bigcup_{i=1}^{N} C_i^2, \quad \mathrm{CM}_3=\bigcup_{i=1}^{N} C_i^3.$ (1)

Then, we generate a shared CWL map, $\mathrm{CM}_s$, by locating the intersected points across the three CWL maps [Fig. 6d],

$\mathrm{CM}_s=\bigcap_{j=1}^{3} \mathrm{CM}_j.$ (2)

Finally, we apply a median filter to the shared CWL map $\mathrm{CM}_s$ to remove outlier points, followed by a multipoint averaging operation: iterating from the first to the last row of the shared CWL map, a centered point is calculated by averaging the locations of all the points in that row. The curve formed by connecting the centered points gives an estimate of the average shape of the CWLs and is defined as the CWL reference [Fig. 6e].
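The three-step construction of Eqs. (1) and (2) plus the averaging can be sketched as follows, assuming the CWL candidates are stored as point arrays that get rasterized into the 2D ROI grid; the median filter size of 3 is an assumption, as the paper does not state it.

```python
import numpy as np
from scipy.ndimage import median_filter

def cwl_reference(candidates, shape):
    """Sketch of CWL reference generation. candidates[j][i] is the
    (K, 2) [x, y] point array of candidate j (0: original,
    1: anisotropic diffusion, 2: bilateral) on nonboundary slice i;
    `shape` is the 2D ROI size (rows, cols)."""
    maps = []
    for per_input in candidates:             # Eq. (1): union per input
        cm = np.zeros(shape, dtype=bool)
        for pts in per_input:
            cm[pts[:, 1].astype(int), pts[:, 0].astype(int)] = True
        maps.append(cm)
    shared = maps[0] & maps[1] & maps[2]     # Eq. (2): shared CWL map
    shared = median_filter(shared, size=3)   # remove outlier points
    # Multipoint averaging: one centered x per row containing points.
    ref = [(cols.mean(), y)
           for y in range(shape[0])
           for cols in [np.flatnonzero(shared[y])] if cols.size]
    return np.array(ref)                     # [x, y] reference curve
```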

Once the CWL reference is generated for each breast, the CWL candidates of every slice belonging to that breast are quantitatively compared with the CWL reference, and the most similar candidate is selected as the final segmented CWL for that particular slice. Note that a linear pointwise comparison between a CWL candidate and the CWL reference is not straightforward, since the CWLs may differ in length and in the distribution of points. To overcome this problem, we construct a nonlinear CWL comparison algorithm by employing the dynamic time warping algorithm.42

Briefly, dynamic programming is used in the dynamic time warping algorithm to find the alignment with a minimum cost/distance (i.e., maximum similarity) between two arbitrarily shaped CWLs. For a CWL candidate $C = \{[x_c, y_c]_u \mid u \in [1, U]\}$ with length U and a CWL reference $R = \{[x_r, y_r]_v \mid v \in [1, V]\}$ with length V, we define the cost function d(u, v), which reflects the pointwise similarity between points $C_u = [x_c, y_c]_u$ and $R_v = [x_r, y_r]_v$, based on the normalized Euclidean distance,

$d(u,v)=\dfrac{\|C_u - R_v\|}{\|C_u\| \cdot \|R_v\|}.$ (3)

In the dynamic programming recursion, the accumulated minimum cost [denoted by w(u, v)] of aligning up to points $C_u$ and $R_v$ is calculated as the direct cost between $C_u$ and $R_v$ plus the minimum cost among the three neighboring alignments,

$w(u,v)=d(u,v)+\min\{w(u-1,v-1),\ w(u,v-1),\ w(u-1,v)\}.$ (4)

The final output of the dynamic time warping is the accumulated cost w(U, V), which gives a quantitative distance measuring the overall similarity between the CWL candidate C and the CWL reference R. By employing this algorithm, for each slice i we calculate the dynamic time warping-based distances $w_i^1$, $w_i^2$, and $w_i^3$ for the three CWL candidates $C_i^1$, $C_i^2$, and $C_i^3$ in comparison to the corresponding CWL reference; the candidate with the minimum distance, i.e., $\min\{w_i^1, w_i^2, w_i^3\}$, is then taken as the final CWL segmentation (Fig. 7).
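A direct implementation of Eqs. (3) and (4) is short; the sketch below assumes the candidate and reference curves are (length, 2) coordinate arrays, and the variable names in the usage comment are hypothetical.

```python
import numpy as np

def dtw_distance(C, R):
    """Dynamic time warping between a CWL candidate C (U, 2) and the
    CWL reference R (V, 2), implementing Eqs. (3) and (4)."""
    U, V = len(C), len(R)
    # Pointwise cost: normalized Euclidean distance, Eq. (3).
    d = (np.linalg.norm(C[:, None, :] - R[None, :, :], axis=2)
         / (np.linalg.norm(C, axis=1)[:, None]
            * np.linalg.norm(R, axis=1)[None, :]))
    w = np.full((U + 1, V + 1), np.inf)
    w[0, 0] = 0.0
    for u in range(1, U + 1):        # Eq. (4) recursion
        for v in range(1, V + 1):
            w[u, v] = d[u - 1, v - 1] + min(w[u - 1, v - 1],
                                            w[u, v - 1],
                                            w[u - 1, v])
    return w[U, V]

# Per slice, the candidate with the minimum distance to the reference
# wins, e.g.:
# best = min(range(3), key=lambda j: dtw_distance(cands[j], ref))
```

Figure 7 corresponds to running this comparison for the three candidates of one slice.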

Figure 7.

Figure 7

CWL candidate (left-side curve of each plot) matching to the CWL reference (right-side curve of each plot) via the dynamic time warping algorithm. Dash lines depict nonlinear alignment of points. (a) CWL of the original image, w1 = 0.24. (b) CWL of the anisotropic diffusion filtered image, w2 = 0.10. (c) CWL of the bilateral filtered image, w3 = 0.06. The CWL candidate in (c) is the final CWL segmentation. In these plots the CWL reference is horizontally shifted 35 pixels from its actual position for better visualization of the alignment.

As a last step of our method, we refine the segmentation by examining the CWLs’ relative location in adjacent slices. In general, the CWLs of immediately adjacent image slices are expected to change their location by a small shift following the outline of the chest cavity through the breast. Following this, we measure the centroids of the CWLs in each slice and identify potential outlier CWLs that do not conform to this anatomical assumption. An outlier CWL is then refined using linear interpolation from the two nearest neighbor nonoutlier CWLs. After this refinement step, the air-breast interface and the final CWL segmentation are integrated to form a closed segmentation contour, from which a corresponding breast mask is generated to segment out the breast. Figure 8 shows the corresponding segmentation results for the three examples shown in Fig. 2.
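The paper specifies the refinement idea but not an exact outlier rule; the sketch below uses one plausible rule (a centroid jump larger than a multiple of the median inter-slice shift on both sides of a slice) and linear interpolation from the nearest non-outlier neighbors, applied here to the CWL centroids only. In the full method, the flagged slices' CWL curves themselves, not just their centroids, are replaced by interpolation from the neighboring CWLs.

```python
import numpy as np

def refine_outliers(cwl_centroids, tol=2.0):
    """Flag per-slice CWL centroids that violate the smooth
    slice-to-slice shift assumption, then interpolate them from the
    nearest non-outlier slices (the jump rule and `tol` are assumed,
    not taken from the paper)."""
    c = np.asarray(cwl_centroids, dtype=float)  # (M, 2) centroids
    shift = np.linalg.norm(np.diff(c, axis=0), axis=1)
    jump = np.median(shift) * tol
    bad = np.zeros(len(c), dtype=bool)
    # A slice is an outlier if it jumps away from both neighbors.
    bad[1:-1] = (shift[:-1] > jump) & (shift[1:] > jump)
    good = np.flatnonzero(~bad)
    for ax in range(2):                         # interpolate x and y
        c[bad, ax] = np.interp(np.flatnonzero(bad), good, c[good, ax])
    return c, bad
```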

Figure 8.

Figure 8

Final breast segmentation (closed contour) for the three examples shown in Fig. 2.

Validation metrics

The fully automated CWL segmentation is compared with the manual segmentation performed by an expert. The segmentation accuracy is measured by four quantitative metrics including both volumetric and boundary-based measures. As extensively used in the literature,23, 27 we evaluate the volume agreement of the automatically versus manually segmented breasts with the overlay percentage (OP) given by Dice's similarity coefficient (DSC) and the difference percentage (DP),

$\mathrm{OP} = \dfrac{\Theta_a \cap \Theta_m}{(\Theta_a + \Theta_m)/2} \times 100,$ (5)
$\mathrm{DP} = \dfrac{|\Theta_a - \Theta_m|}{(\Theta_a + \Theta_m)/2} \times 100,$ (6)

where $\Theta_a$ and $\Theta_m$ denote the total volume of the combined left and right breast of the automated and manual segmentation, respectively, for each MRI scan. To determine the total volume, we count the voxels of the segmented breasts and multiply by the voxel's unit volume.

As the volumetric metrics may not be sensitive to small variations of the CWL segmentation, especially for large breasts, we also consider boundary-based metrics focusing only on the segmented CWL to specifically assess the precision of the obtained CWL detection [Fig. 9c]. We use two boundary metrics, the CWL overlay percentage (COP) and averaged deviation distance (CDD),

$\mathrm{COP} = \dfrac{\lfloor \min(C_a, C_m) \rfloor}{\lfloor \max(C_a, C_m) \rfloor} \times 100,$ (7)
$\mathrm{CDD} = \dfrac{\sum_{i=P}^{Q} \left| C_a(x_i) - C_m(x_i) \right|}{Q - P} \cdot \mu \quad (\text{unit: mm}),$ (8)

where $C_a$ and $C_m$ denote the automated and manual CWL segmentations, respectively, and the operator ⌊ ⌋ denotes the CWL length in terms of the number of points/voxels. Because the CWLs of both the automated and the manual segmentation span primarily the vertical direction (Y axis; refer to the coordinate system in Fig. 9) of the image, COP indicates how well the two CWLs overlap vertically. CDD measures the averaged distance (unit: mm) of the automated CWL segmentation from the manual segmentation in the horizontal direction (X axis), where P/Q denotes the index of the first/last point of the overlapping portion of the two CWLs in terms of their Y coordinates, and μ is the MR imaging parameter giving the horizontal distance covered by each voxel (unit: mm/voxel), available in the DICOM header of the MR images.
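For concreteness, the four metrics can be computed from binary masks and per-row CWL coordinates as in the following sketch; the dict-of-rows representation for the CWLs is an assumption made for illustration.

```python
import numpy as np

def volume_metrics(mask_a, mask_m, voxel_vol):
    """OP (Dice) and DP from Eqs. (5) and (6), given binary 3D masks
    of the automated (a) and manual (m) segmentations."""
    va, vm = mask_a.sum() * voxel_vol, mask_m.sum() * voxel_vol
    inter = np.logical_and(mask_a, mask_m).sum() * voxel_vol
    op = inter / ((va + vm) / 2.0) * 100.0      # Dice similarity, %
    dp = abs(va - vm) / ((va + vm) / 2.0) * 100.0
    return op, dp

def boundary_metrics(xa, xm, mu):
    """COP and CDD from Eqs. (7) and (8). `xa` and `xm` map each y
    (row) index to the CWL x-coordinate, stored as dicts {y: x}
    (format assumed); `mu` is mm per voxel horizontally."""
    ya, ym = set(xa), set(xm)
    overlap = sorted(ya & ym)                   # rows where both exist
    # Eq. (7) read as the ratio of the shorter to the longer length.
    cop = min(len(ya), len(ym)) / max(len(ya), len(ym)) * 100.0
    # Eq. (8): mean horizontal deviation over the overlapping rows.
    cdd = mu * np.mean([abs(xa[y] - xm[y]) for y in overlap])
    return cop, cdd
```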

Figure 9.

Figure 9

Examples of (a) manual segmentation contour and (b) automated segmentation contour. (c) The superimposition of the manual and automated CWL, including a zoomed-in local portion of the CWLs for better visualization.

EXPERIMENTS AND RESULTS

Dataset

Our method is validated with a representative set of 60 3D bilateral sagittal breast MRI scans (56 slices per scan; 56 × 60 = 3360 2D slices in total), randomly selected from cancer-unaffected women in our high-risk screening population43 who had undergone T1-weighted, nonfat-suppressed breast MR imaging. The age of the women ranges from 26 to 63 years, with an average of 44.5 years. Women were imaged prone in either a 1.5T scanner (GE LX echo speed, GE Health, Nutley, NJ, or Siemens Sonata, Siemens Medical Solutions, Malvern, PA) or a 3T scanner (Siemens Trio) using a dedicated surface breast coil array and a varying range of clinical imaging parameters: matrix size, 256 × 256 (except 192 × 192 for one case); slice thickness, 2.3–4 mm; slice spacing, 2.3–4 mm; field of view, 18–22 cm; flip angle, 15° or 20°. The cases were selected to span the full range (i.e., all four categories) of the ACR BI-RADS breast density categories (I: <25%; II: 25%–50%; III: 51%–75%; IV: >75%), where the BI-RADS assessment was performed visually by an experienced radiologist. In total, 17, 15, 14, and 14 cases are available for BI-RADS density categories I, II, III, and IV, respectively.

Our automated segmentation is compared with manual segmentation, which is considered here as our gold standard for validation. The manual segmentation was obtained by a board certified breast imaging radiologist (S.P.W., 15 years of experience) using the ITK-SNAP (Ref. 44) software. First, a 3D breast MRI scan is loaded into the ITK-SNAP platform; then, in each slice of the scan, the radiologist uses the “Polygon Tool” and makes a sequence of left-clicks with the mouse along the air-breast interface boundary and the chest wall line to acquire a set of points. Once finished, ITK-SNAP automatically connects the points to form a closed breast contour [Fig. 9a], giving rise to a breast mask circumscribed by the contour. In order to accommodate the varying sizes of the breast in different slices/cases, the radiologist chooses a varying number of points at their discretion to obtain a relatively smooth fit to the breast contour. The generated breast mask is then stored.

Results

Based on the manual segmentation, over the 60 cases the maximal breast volume is 1764.7 cm3, the minimal is 188.4 cm3, and the mean is 743.4 cm3. The overall segmentation accuracy averaged over all 60 cases is shown in Table 1 (row 1). For the whole set of slices from all 60 cases, 28.4%, 41.7%, and 29.9% of the CWLs are obtained from the original image, the anisotropic diffusion filtered image, and the bilateral filtered image, respectively. In addition, among all the CWL segmentations identified by the dynamic time warping-based matching, only a small portion, i.e., 6.4%, needed further refinement by interpolation. As can be seen from the averaged segmentation accuracies with respect to the BI-RADS density categories (Table 1), OP decreases and DP increases as tissue density becomes higher. For COP, the highest accuracy is achieved for category III (93.7%) and the lowest for category IV (90.8%); both categories II (93.6%) and III (93.7%) outperform categories I (92.7%) and IV (90.8%). For CDD, the minimum error is observed for category II (1.8 mm), the maximal error for category IV (3.4 mm), and the same error for categories I (2.1 mm) and III (2.1 mm). The coefficient of variation (CV) of the accuracy across the four categories is 0.01, 0.09, 0.01, and 0.31 for OP, DP, COP, and CDD, respectively, indicating that the segmentation accuracy is relatively robust across the BI-RADS density categories. In Fig. 10, we show representative segmentation results for the four BI-RADS density categories, where the comparison between manual and automated segmentation is visualized. In general, we observed that the CWL segmentation is accurate except for certain slices that are close to the boundary slices; we attribute this to potential signal attenuation in those slices. For the air-breast interface, segmentation errors mainly result from “wraparound” artifact residuals [e.g., visualized as “spots” at the top of the volume in the second row (e) of Fig. 10] and suboptimal cut-point selection for excluding the upper abdominal wall portion [e.g., the peaks at the bottom of the volume in the third row (e) of Fig. 10].31

Table 1.

Overall segmentation accuracy and corresponding averaged accuracies for each of the 4 ACR BI-RADS density categories. Data format: mean (std).

                                 OP (%)       DP (%)       COP (%)      CDD (mm)
Overall (60 cases)               95.0 (1.9)   10.1 (3.8)   92.7 (5.5)   2.3 (1.5)
BI-RADS density category I       95.4 (1.6)    9.3 (3.3)   92.7 (5.8)   2.1 (1.2)
BI-RADS density category II      95.3 (1.5)    9.4 (3.0)   93.6 (5.5)   1.8 (1.1)
BI-RADS density category III     94.7 (2.1)   10.6 (4.2)   93.7 (4.7)   2.1 (1.1)
BI-RADS density category IV      94.4 (2.4)   11.2 (4.7)   90.8 (6.1)   3.4 (2.2)

Figure 10.

Figure 10

Representative results with 3D visualization for the manual and automated segmentation. Rows 1–4 show selected examples for each BI-RADS density category, with increasing fibroglandular tissue density. (a) Automated breast segmentation contour. (b) Manually segmented breast. (c) Manual segmentation volume. (d) Automated segmentation volume. (e) Union of the two volumes shown in (c) and (d).

We also assess the segmentation accuracy with respect to the total breast volume (Fig. 11). A larger breast volume leads to a higher OP and a lower DP. The R2 of the linear regression between volume and OP, DP, COP, and CDD is 0.25, 0.25, 0.07, and 0.17, respectively, suggesting that the segmentation accuracy is not significantly accounted for by the variation in breast volume. We also estimate the breast volume correlation between the left and right breasts (Fig. 12), motivated by the expectation of a strong bilateral correlation in breast volume for the same woman. The correlation coefficient (r) is 0.9996 for the manual segmentation and 0.9995 for the automated segmentation, indicating very high agreement.

Figure 11.

Figure 11

Linear regression of segmentation performance versus breast volume: (a) OP, (b) DP, (c) COP, and (d) CDD. A larger breast volume leads to higher OP and lower DP. The R2 is 0.25, 0.25, 0.07, and 0.17 for OP, DP, COP, and CDD, respectively, suggesting that segmentation accuracy is not significantly accounted for by the variation of breast volume.

Figure 12.

Figure 12

Correlation of the volume between the segmented left and right breasts. The Pearson correlation coefficient (r) is 0.9996 for (a) the manual segmentation and 0.9995 for (b) the automated segmentation, indicating strong bilateral agreement.

The automated segmentation implemented using the MATLAB (v. R2011b; Mathworks, Natick, MA) software requires ∼4.5 min, running on a desktop PC (Intel Core i5 CPU 2.67 GHz, 3 GB RAM) to process a single bilateral breast MRI case (56 slices with inplane resolution of 256 × 256) compared to ∼35 min required for manual segmentation, indicating a considerable increase in time-efficiency.

To elaborate on the advantages brought by the synergistic use of multiple image inputs (i.e., the original image and the anisotropic diffusion and bilateral filtered images), we further compare the segmentation performance of the joint use of the three image inputs with the accuracy of using each single image input separately (Table 2). As indicated by the CV in segmentation accuracy (Table 2, bottom row), when using a single image input, only a slight variation in accuracy is observed for OP (CV = 0.01), while the accuracy decreases considerably for DP (CV = 0.17), COP (CV = 0.11), and CDD (CV = 0.29). Based on the performance comparison shown in Table 2, the synergistic use of multiple image inputs yields overall improved accuracy compared with the use of any single image input.

Table 2.

Segmentation performance comparison of the proposed synergistic use of multiple image inputs with the individual use of each single input averaged over all the 60 cases. Data format: mean (std).

Image input                            OP (%)       DP (%)       COP (%)       CDD (mm)
Synergistic use of the three inputs    95.0 (1.9)   10.1 (3.8)   92.7 (5.6)    2.3 (1.5)
Original image                         95.3 (3.2)   12.9 (5.2)   72.1 (11.6)   3.1 (2.4)
Anisotropic diffusion filtered image   94.2 (3.1)   14.9 (6.6)   84.1 (9.0)    4.6 (2.7)
Bilateral filtered image               94.4 (3.1)   14.5 (6.5)   78.8 (10.3)   4.2 (2.7)
Coefficient of variation (rows 1–4)    0.01         0.17         0.11          0.29

Robustness analysis

We assess the robustness of our method as a function of six major algorithm parameters: the relative breast area ratio threshold (ρ0, defined in Sec. 2A), the three parameters of the two smoothing filters (i.e., the iteration number, conduction coefficient, and bilateral radius, all defined in Sec. 2B1), and the two parameters of the edge linking algorithm (i.e., the maximal spatial proximity and minimal slope, both defined in Sec. 2B2). We repeat our experiments varying each parameter over a range of four values while fixing the remaining parameters at the default values determined for our method. For each of the four validation metrics, the CV is computed across the range of the varying parameter to assess the segmentation performance variation. Figure 13 shows the performance variation for four of the six parameters; for the remaining two parameters there is essentially very little variation.

Figure 13.

Figure 13

Robustness analysis of the segmentation performance variation for the four validation metrics, (a) OP, (b) DP, (c) COP, and (d) CDD, with respect to the varying range of the four parameters: relative breast area ratio threshold, iteration number, conduction coefficient, and bilateral radius. The legend is shown on top of the figure for all of the 4 plots and the X axis ticks in each of the plots refer to the 4 parameter values (P1, P2, P3, and P4) shown in the corresponding legend items.

More specifically, when the threshold ρ0 varies from 0.2 to 0.5 (with 0.1 intervals), the corresponding OP, DP, COP, and CDD metrics vary, respectively, in the range of 93.1%–95.6% (CV = 0.01), 9.1%–13.2% (CV = 0.18), 89.3%–94.0% (CV = 0.02), and 1.7–3.9 mm (CV = 0.4); we found that when ρ0 is set to higher values (e.g., 0.4 or 0.5), improved accuracies are achieved, particularly for CDD (hence the higher CV of 0.4). We attribute this effect to the fact that a higher value of ρ0 identifies and excludes more boundary slices, in which the chest wall line is difficult to detect accurately, while risking the loss of some slices that do contain effective breast tissue; hence, the setting of this parameter should be balanced. When the iteration number varies from 10 to 70 (with an interval of 20), the accuracies of OP, DP, COP, and CDD vary, respectively, in the range of 94.7%–95.1% (CV < 0.01), 9.8%–10.5% (CV = 0.03), 90.9%–93.0% (CV = 0.01), and 2.3–2.8 mm (CV = 0.1). Similarly, when the conduction coefficient varies from 20 to 80 (also with an interval of 20), the corresponding accuracies vary in the range of 94.8%–95.0% (CV < 0.01), 10.1%–10.5% (CV = 0.02), 91.6%–93.0% (CV < 0.01), and 2.3–2.9 mm (CV = 0.1), respectively. When the bilateral radius varies from 3 to 9 (with an interval of 2), the accuracies vary in the range of 94.8%–95.1% (CV < 0.01), 9.9%–10.4% (CV = 0.02), 92.4%–92.7% (CV < 0.01), and 2.3–2.5 mm (CV = 0.03), respectively. These results show that our method is relatively stable with respect to the parameters of the two smoothing filters. Finally, when the maximal spatial proximity and minimal slope vary from 6 to 12 (with an interval of 2) and from 2 to 5 (with an interval of 1), respectively, the accuracy variations are consistently very small (CV < 0.01) for all four metrics (given this very small variation, these results are not shown in the figure for simplicity of presentation).

DISCUSSION

In this work, we present an integrated edge extraction scheme for fully automated CWL detection in sagittal breast MRI. The notion of segmentation via edge extraction enables us to significantly increase robustness by using edge-preserving filters to deal with low contrast and an edge linking algorithm to deal with discontinuities. The synergistic use of the three image inputs outperforms the individual use of any single image input, showing that the integrated edge extraction and candidate evaluation scheme is effective in determining the best CWL segmentation. Our method is validated with a large and diverse set of breast MRI cases (60 bilateral 3D MRI scans, a total of 3360 2D MR slices) that reflects varying segmentation difficulty and clinical reality by spanning the entire range of the ACR BI-RADS breast tissue density categories. Our experimental results indicate that the segmentation accuracy is relatively stable across the range of BI-RADS density categories and the major parameters of our algorithm, and is also relatively insensitive to variation in breast volume.

In typical breast MR imaging protocols, the images are mostly acquired in the axial or sagittal plane.34 The comparison of breast segmentation methods designed for different breast MRI views may not be straightforward, as different anatomical properties are visualized and, therefore, different assumptions may hold in segmentation. For example, axial view-based methods commonly use the locations of certain anatomical landmarks such as the aortic arch, thoracic spine, midsternum, and axilla;19, 23, 26 however, these landmarks are not directly available in sagittal view images. When the images are acquired in one specific view (e.g., sagittal), other views (e.g., axial) can in theory be derived via reslicing, enabling the generalization of segmentation methods across views. In practice, however, the derived images may not always have good resolution due to the nonisotropic scanning in breast MR imaging (e.g., for the dataset used in this work, the resolution of the derived axial view images is only 256×56), which may lead to limited or clinically unacceptable segmentation. Therefore, given that most previously published related methods were primarily developed for axial view breast MR images,19, 21, 22, 23, 26, 27, 28 it is not straightforward to directly compare our method with previous approaches.

Although not directly comparable, to put the segmentation performance of our approach in context: Gubern-Mérida et al.27 reported a mean DSC of 0.70 for pectoral muscle segmentation with an atlas-based method; the volume agreement between automated and manual breast segmentation was 0.79 ± 0.09 in the work by Giannini et al.;23 and an average nearest-distance error of 1.434 mm between the segmented and manually annotated boundary surfaces was reported for two cases by Wang et al.28 In our experiments on 60 cases spanning the full BI-RADS density range, we achieved a mean DSC of 0.95 for the breast volume overlay, 10.1% for the volume difference, 92.7% for the overlap percentage of the CWL, and 2.3 mm for the averaged deviation distance when comparing the automated and manual segmentations. In addition, quantitative comparison with several of the previous works21, 29, 30 is not directly feasible since no quantitative results on breast segmentation accuracy were reported; yet, the qualitative assessment of the presented cases appears acceptable for the targeted tasks, such as contrast enhancement analysis29 and multimodality density estimation.30

A potential limitation of our method is that, in principle, it is not guaranteed to always produce an optimal CWL segmentation from the three image inputs (i.e., the original, anisotropic diffusion filtered, and bilateral filtered images), as all three candidates may be suboptimal. In practice, however, we found that this theoretical limitation is mostly accommodated by the refinement step. As reported in Sec. 3B, while there is considerable variation in breast size/volume (minimum: 188.4 cm3, maximum: 1764.7 cm3, mean: 743.4 cm3) across the 60 cases, our dataset does not include less-representative breast MRI scans such as mastectomy cases or breasts after surgery. The reported breast segmentation performance is achieved without applying prior bias field correction. Our approach is expected to be less sensitive to intensity inhomogeneity, as it is based on searching for edges that essentially reflect relative intensity variations. Future work will test how the segmentation performance varies after bias field correction.

The 60 breast MRI scans used in our experiments contain substantial variation in MR imaging parameters, reasonably reflecting the range of typical clinical sagittal view breast MR imaging protocols for the purpose of demonstrating the feasibility of our method. Considering the encouraging results, it will be important to ultimately test our method on additional independent datasets to evaluate its generalizability across a broader range of MR imaging scanners, coils, and protocol parameters. An interesting extension of this work would also be to consider the effects of inter- and intraobserver variability in obtaining the manual ground truth segmentation. Finally, in this study cancer-unaffected MR scans are used for validation. For cancer cases, the distracting effect of nuisance edges arising from tumors could be similar to that of dense tissue, for which our algorithm appears to be relatively robust. In addition, when dealing with cancer cases, the initial selection of the narrower CWL search space (i.e., the ROI) could also significantly reduce the interference of tumor tissue. Nevertheless, further validation with cancer-affected scans will be included as part of our future work.

In typical clinical breast MRI scanning protocols, a sequential set of MRI series is acquired, such as T2-weighted fat-suppressed, T1-weighted nonfat-suppressed, and fat-suppressed T1 pre- and postcontrast. Our method is developed to run specifically on the T1-weighted, nonfat-suppressed series. This series is particularly suitable for chest wall line detection because of the relatively high intensity contrast between the chest wall and the adjacent breast tissue, since fat is not suppressed in the images; it is also almost always included in typical clinical breast MR imaging protocols. Compared to the T1-weighted nonfat-suppressed series on which we apply our method, it would be more challenging to detect the chest wall line directly on the images of the other series (such as T2-weighted and fat-suppressed pre- and postcontrast) because of the relatively lower intensity contrast between the chest wall and the breast tissue. In order to obtain a corresponding breast segmentation in these series, our ultimate strategy would be to translate the breast segmentation masks obtained in the T1-weighted, nonfat-suppressed series to the other series by applying a prior registration step to align the series of the same scan, provided that the patient does not move significantly during scanning. Future work will also test this strategy for breast segmentation across different breast MR series.

CONCLUSIONS

We present a fully automated method for chest wall line detection for whole-breast segmentation in sagittal breast MR images. Our method is validated on a representative dataset that spans the full range of the ACR BI-RADS breast density categories. Our results show that our method achieves accurate, robust, and time-efficient segmentation. The automation of our method also enables reproducible results. Future work will investigate the ability of our method in relevant clinical applications, such as large patient population studies for breast cancer risk assessment through automated analysis of the fibroglandular tissue and background parenchymal enhancement.9

ACKNOWLEDGMENTS

This work was supported by the National Institutes of Health (NIH)/National Cancer Institute (NCI) Grant No. 1R21CA155906-01A1 and the University of Pennsylvania Institute for Translational Medicine and Therapeutics (ITMAT) Transdisciplinary Program in Translational Medicine and Therapeutics Grant No. UL1RR024134 from the National Center for Research Resources. The content of this work is solely the responsibility of the authors and does not necessarily represent the official views of the National Center for Research Resources or the National Institutes of Health. The authors thank Ms. Kathleen Thomas, the research coordinator of the clinical trial from which the data originated, and Dr. Johnny Kuo for developing and maintaining the image database.

References

  1. Saslow D., Boetes C., Burke W., Harms S., Leach M. O., Lehman C. D., Morris E. A., Pisano E., Schnall M., Sener S., Smith R. A., Warner E., Yaffe M., Andrews K. S., and Russell C. A., “American Cancer Society guidelines for breast screening with MRI as an adjunct to mammography,” Ca-Cancer J. Clin. 57(2), 75–89 (2007). 10.3322/canjclin.57.2.75
  2. Morris E. A. and Liberman L., Breast MRI: Diagnosis and Intervention (Springer-Verlag, New York, 2005).
  3. Bedrosian I., Mick R., Orel S. G., Schnall M., Reynolds C., Spitz F. R., Callans L. S., Buzby G. P., Rosato E. F., Fraker D. L., and Czerniecki B. J., “Changes in the surgical management of patients with breast carcinoma based on preoperative magnetic resonance imaging,” Cancer 98(3), 468–473 (2003). 10.1002/cncr.11490
  4. Lee J. M., Orel S. G., Czerniecki B. J., Solin L. J., and Schnall M. D., “MRI before reexcision surgery in patients with breast cancer,” Am. J. Roentgenol. 182(2), 473–480 (2004). 10.2214/ajr.182.2.1820473
  5. Partridge S. C., Gibbs J. E., Lu Y., Esserman L. J., Tripathy D., Wolverton D. S., Rugo H. S., Hwang E. S., Ewing C. A., and Hylton N. M., “MRI measurements of breast tumor volume predict response to neoadjuvant chemotherapy and recurrence-free survival,” Am. J. Roentgenol. 184(6), 1774–1781 (2005). 10.2214/ajr.184.6.01841774
  6. Engeland S. V., Snoeren P. R., Huisman H., Boetes C., and Karssemeijer N., “Volumetric breast density estimation from full-field digital mammograms,” IEEE Trans. Med. Imaging 25(3), 273–282 (2006). 10.1109/TMI.2005.862741
  7. Klifa C., Carballido-Gamio J., Wilmes L., Laprie A., Shepherd J., Gibbs J., Fan B., Noworolski S., and Hylton N., “Magnetic resonance imaging for secondary assessment of breast density in a high-risk cohort,” Magn. Reson. Imaging 28(1), 8–15 (2010). 10.1016/j.mri.2009.05.040
  8. Kontos D., Xing Y., Bakic P. R., Conant E. F., and Maidment A. D. A., “A comparative study of volumetric breast density estimation in digital mammography and magnetic resonance imaging: Results from a high-risk population,” Proc. SPIE 7624, 762409-1–762409-9 (2010). 10.1117/12.845568
  9. King V., Brooks J. D., Bernstein J. L., Reiner A. S., Pike M. C., and Morris E. A., “Background parenchymal enhancement at breast MR imaging and breast cancer risk,” Radiology 260(1), 50–60 (2011). 10.1148/radiol.11102156
  10. Hambly N. M., Liberman L., Dershaw D. D., Brennan S., and Morris E. A., “Background parenchymal enhancement on baseline screening breast MRI: Impact on biopsy rate and short-interval follow-up,” Am. J. Roentgenol. 196(1), 218–224 (2011). 10.2214/AJR.10.4550
  11. Jansen S. A., Lin V. C., Giger M. L., Li H., Karczmar G. S., and Newstead G. M., “Normal parenchymal enhancement patterns in women undergoing MR screening of the breast,” Eur. Radiol. 21(7), 1374–1382 (2011). 10.1007/s00330-011-2080-z
  12. Macura K. J., Ouwerkerk R., Jacobs M. A., and Bluemke D. A., “Patterns of enhancement on breast MR images: Interpretation and imaging pitfalls,” Radiographics 26, 1719–1734 (2006). 10.1148/rg.266065025
  13. Comstock C., “Breast magnetic resonance imaging interpretation using computer-aided detection,” Semin. Roentgenol. 46(1), 76–85 (2011). 10.1053/j.ro.2010.09.002
  14. Muralidhar G. S., Bovik A. C., Sampat M. P., Whitman G. J., Haygood T. M., Stephens T. W., and Markey M. K., “Computer-aided diagnosis in breast magnetic resonance imaging,” Mt. Sinai J. Med. 78(2), 280–290 (2011). 10.1002/msj.20248
  15. Wei J., Chan H. P., Helvie M. A., Roubidoux M. A., Sahiner B., Hadjiiski L. M., Zhou C., Paquerault S., Chenevert T., and Goodsitt M. M., “Correlation between mammographic density and volumetric fibroglandular tissue estimated on breast MR images,” Med. Phys. 31(4), 933–942 (2004). 10.1118/1.1668512
  16. Vovk U., Pernus F., and Likar B., “A review of methods for correction of intensity inhomogeneity in MRI,” IEEE Trans. Med. Imaging 26(3), 405–421 (2007). 10.1109/TMI.2006.891486
  17. Sled J. G., Zijdenbos A. P., and Evans A. C., “A nonparametric method for automatic correction of intensity nonuniformity in MRI data,” IEEE Trans. Med. Imaging 17(1), 87–97 (1998). 10.1109/42.668698
  18. Boyd N., Martin L., Chavez S., Gunasekara A., Salleh A., Melnichouk O., Yaffe M., Friedenreich C., Minkin S., and Bronskill M., “Breast-tissue composition and other risk factors for breast cancer in young women: A cross-sectional study,” Lancet Oncol. 10(6), 569–580 (2009). 10.1016/S1470-2045(09)70078-6
  19. Nie K., Chen J. H., Chan S., Chau M. K., Yu H. J., Bahri S., Tseng T., Nalcioglu O., and Su M. Y., “Development of a quantitative method for analysis of breast density based on three-dimensional breast MRI,” Med. Phys. 35, 5253–5262 (2008). 10.1118/1.3002306
  20. Lee N., Rusinek H., Weinreb J., Chandra R., Toth H., Singer C., and Newstead G., “Fatty and fibroglandular tissue volumes in the breasts of women 20–83 years old: Comparison of x-ray mammography and computer-assisted MR imaging,” Am. J. Roentgenol. 168(2), 501–506 (1997). 10.2214/ajr.168.2.9016235
  21. Ertas G., Gulcur H. O., Osman O., Ucan O. N., Tunaci M., and Dursun M., “Breast MR segmentation and lesion detection with cellular neural networks and 3D template matching,” Comput. Biol. Med. 38, 116–126 (2008). 10.1016/j.compbiomed.2007.08.001
  22. Twellmann T., Lichte O., and Nattkemper T. W., “An adaptive tissue characterisation network for model-free visualisation of dynamic contrast-enhanced magnetic resonance image data,” IEEE Trans. Med. Imaging 24(10), 1256–1266 (2005). 10.1109/TMI.2005.854517
  23. Giannini V., Vignati A., Morra L., Persano D., Brizzi D., Carbonaro L., Bert A., Sardanelli F., and Regge D., “A fully automatic algorithm for segmentation of the breasts in DCE-MR images,” in Proceedings of the 32nd Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBC 2010 (Institute of Electrical and Electronics Engineers, Buenos Aires, Argentina, August 31–September 4, 2010). 10.1109/IEMBS.2010.5627191
  24. Liu J. and Udupa J. K., “Oriented active shape models,” IEEE Trans. Med. Imaging 28(4), 571–584 (2009). 10.1109/TMI.2008.2007820
  25. Cristina G. and Martel A. L., “Automatic model-based 3D segmentation of the breast in MRI,” Proc. SPIE 7962, 796215-1–796215-8 (2011). 10.1117/12.877712
  26. Yao J., Chen J., and Chow C., “Breast tumor analysis in dynamic contrast enhanced MRI using texture features and wavelet transform,” IEEE J. Sel. Top. Signal Process. 3(1), 94–100 (2009). 10.1109/JSTSP.2008.2011110
  27. Gubern-Mérida A., Kallenberg M., Martí R., and Karssemeijer N., “Multi-class probabilistic atlas-based segmentation method in breast MRI,” in Proceedings of the 5th Iberian Conference on Pattern Recognition and Image Analysis (IbPRIA’11), Las Palmas de Gran Canaria, Spain (Springer-Verlag, Berlin, Heidelberg, 2011), pp. 660–667.
  28. Wang L., Filippatos K., Friman O., and Hahn H. K., “Fully automated segmentation of the pectoralis muscle boundary in breast MR images,” Proc. SPIE 7963, 796309-1–796309-8 (2011). 10.1117/12.877645
  29. Hayton P., Brady M., Tarassenko L., and Moore N., “Analysis of dynamic MR breast images using a model of contrast enhancement,” Med. Image Anal. 1(3), 207–224 (1997). 10.1016/S1361-8415(97)85011-6
  30. Moon W. K., Shen Y. W., Luo S. C., Huang C. S., Kuzucan A., and Chen J. H., “Comparative study of density analysis using automated whole breast ultrasound and MRI,” Med. Phys. 38(1), 382–389 (2011). 10.1118/1.3523617
  31. Wu S. D., Weinstein S. P., Conant E. F., Localio A. R., Schnall M. D., and Kontos D., “Fully automated chest wall line segmentation in breast MRI by using context information,” Proc. SPIE 8315, 831507-1–831507-9 (2012). 10.1117/12.911612
  32. American College of Radiology, Breast Imaging Reporting and Data System (BI-RADS), www.arc.org (American College of Radiology, 2003).
  33. Molleran V. and Mahoney M. C., “The BI-RADS breast magnetic resonance imaging lexicon,” Magn. Reson. Imaging Clin. N. Am. 18(2), 171–185 (2010). 10.1016/j.mric.2010.02.001
  34. Weinstein S. P. and Rosen M., “Breast MR imaging: Current indications and advanced imaging techniques,” Radiol. Clin. N. Am. 48(5), 1013–1042 (2010). 10.1016/j.rcl.2010.06.011
  35. Perona P. and Malik J., “Scale-space and edge detection using anisotropic diffusion,” IEEE Trans. Pattern Anal. Mach. Intell. 12(7), 629–639 (1990). 10.1109/34.56205
  36. Tomasi C. and Manduchi R., “Bilateral filtering for gray and color images,” in Proceedings of the Sixth International Conference on Computer Vision (ICCV-98), Bombay, India (1998).
  37. Canny J., “A computational approach to edge detection,” IEEE Trans. Pattern Anal. Mach. Intell. 8(6), 679–698 (1986). 10.1109/TPAMI.1986.4767851
  38. Weickert J., Anisotropic Diffusion in Image Processing (Teubner-Verlag, Stuttgart, Germany, 1998) (available at http://www.mia.uni-saarland.de/weickert/Info/book_coverpage.html).
  39. Weeratunga S. K. and Kamath C., “Comparison of PDE-based non-linear anisotropic diffusion techniques for image denoising,” in Image Processing: Algorithms and Systems II, edited by Dougherty E. R., Astola J. T., and Egiazarian K. O., Proc. SPIE 5014, 201–212 (2003). 10.1117/12.477744
  40. Estrada F. J. and Jepson A. D., “Perceptual grouping for contour extraction,” in Proceedings of the 17th International Conference on Pattern Recognition (ICPR 2004), edited by Kittler J., Petrou M., and Nixon M. S. (IEEE Computer Science Press, Cambridge, UK, 23–26 August 2004), pp. 32–35.
  41. Saund E., “Finding perceptually closed paths in sketches and drawings,” IEEE Trans. Pattern Anal. Mach. Intell. 25(4), 475–491 (2003). 10.1109/TPAMI.2003.1190573
  42. Wu S. D. and Li Y. F., “Flexible signature descriptions for adaptive motion trajectory representation, perception and recognition,” Pattern Recog. 42, 194–214 (2009). 10.1016/j.patcog.2008.06.023
  43. Weinstein S. P., Localio A. R., Conant E. F., Rosen M., Thomas K. M., and Schnall M. D., “Multimodality screening of high-risk women: A prospective cohort study,” J. Clin. Oncol. 27(36), 6124–6128 (2009). 10.1200/JCO.2009.24.4277
  44. Yushkevich P. A., Piven J., Hazlett H. C., Smith R. G., Ho S., Gee J. C., and Gerig G., “User-guided 3D active contour segmentation of anatomical structures: Significantly improved efficiency and reliability,” Neuroimage 31(3), 1116–1128 (2006). 10.1016/j.neuroimage.2006.01.015
