Author manuscript; available in PMC: 2016 Oct 8.
Published in final edited form as: J Eval Clin Pract. 2015 Jun 17;21(5):900–910. doi: 10.1111/jep.12397

Automated Calculation of Ptosis on Lateral Clinical Photographs

Juhun Lee 1,*, Edward Kim 2, Gregory P Reece 3, Melissa A Crosby 4, Elisabeth K Beahm 3, Mia K Markey 5
PMCID: PMC5055840  NIHMSID: NIHMS818906  PMID: 26083280

Abstract

Rationale

The goal is to fully automate the calculation of a breast ptosis measure from clinical photographs through automatic localization of fiducial points relevant to the measure.

Methods

68 women (97 clinical photographs) who underwent or were scheduled for breast reconstruction were included. The photographs were divided into a development set (N = 49) and an evaluation set (N = 48). The breast ptosis measure is obtained automatically from distances between three fiducial points: the nipple, the lowest visible point of the breast (LVP), and the lateral terminus of the inframammary fold (LT). The nipple is localized using the YIQ color space to highlight the contrast between the areola and the surrounding breast skin. The areola is localized using its shape, location, and high Q component intensity. The breast contour is estimated by applying Dijkstra’s shortest path algorithm to the gradient of the grayscale photograph. The lowest point of the estimated contour is set as the LVP. To locate the anatomically subtle LT, the location of the patient’s axilla is used as a reference.

Results

The algorithm’s efficacy was evaluated by comparing manual and automated localizations of the fiducial points. The average nipple diameter was used as a cut-off to define success. The algorithm showed 90%, 91%, and 83% accuracy for locating the nipple, LVP, and LT in the evaluation set, respectively.

Conclusion

This study presents a new automated algorithm that may facilitate the quantification of breast ptosis from lateral views of patients’ photographs.

Keywords: Ptosis, Automated Detection, Digital Photographs, Nipple, Lateral Terminus, Breast Cancer

1. Introduction

Breast cancer is one of the most common cancer types among women who reside in the United States, with over 231,840 new cases of invasive breast cancer expected in 2015 [1]. However, due to improved early detection techniques and adjuvant and neo-adjuvant treatments, the death rate from breast cancer has decreased significantly since 1990 [1,2]. Therefore, there is an increasing emphasis on restoring breast cancer survivors’ quality of life, and breast reconstruction is an important part of cancer treatment for many patients seeking to regain a high quality of life.

Breast reconstructive surgery encompasses a range of procedures. Although a surgeon can help a breast cancer survivor narrow down her reconstructive choices, the final choice is hers. Quantitative and objective measures of breast morphology that relate patient and surgical variables to aesthetic outcomes will help breast cancer survivors understand the appearance changes that could result from different reconstructive strategies. Understanding the likely outcomes of their reconstructive options will help breast cancer survivors make informed decisions about breast reconstruction.

The overall appearance of the breasts can be described in terms of characteristics such as shape, symmetry, and ptosis. Ptosis is a clinical term for the extent to which the nipple is lower than the inframammary fold, and it is considered an important aesthetic property of the breast [3–5].

Currently, surgeons assess ptosis subjectively by viewing patients’ breasts directly or in photographs. Subjective assessment of ptosis has low inter- and intra-observer agreement [6,7]. Therefore, a reliable quantitative, objective ptosis measure is necessary in order to relate patient and surgical variables to the visible outcomes of breast reconstruction. One approach to creating such a measure is based on distances between anatomical landmarks of the breast as identified on clinical photographs.

Previously, we developed an approach [3] to quantify ptosis using a ratio of the absolute vertical distances between the nipple, the lateral terminus of the inframammary fold (the visible point where the inframammary fold meets the lateral chest wall), and the lowest visible point of the breast on the patient’s body, in both the lateral and oblique views. For these ptosis measures, values closer to 0 indicate greater ptosis, whereas values closer to 1 indicate less ptosis. When these measures were calculated from manually localized fiducial points, there was good agreement with subjective ratings by surgeons using an accepted scale.

However, the manual annotation of fiducial points is cumbersome, time-consuming, and prone to the inherent variability of human observers. For these reasons, it would be valuable to have a quick and accurate method for automatically locating fiducial points on clinical photographs.

The goal of this study was to fully automate the calculation of breast ptosis on lateral clinical photographs through automatic localization of the nipple, the lateral terminus of the inframammary fold (LT), and the lowest visible point of the breast (LVP). The locations of automatically identified fiducial points were compared to the corresponding manually localized points to evaluate the accuracy of the algorithm. Similarly, the value of the ptosis measure computed from the automatically located fiducial points was compared to the value obtained from the manually located points.

2. Materials & Methods

2.1. Overview

The methods for the automatic localization of the nipple (Section 2.4) and of the LT and the LVP (Section 2.5) were designed based on a development set of clinical photographs (Section 2.2). The objective ptosis measure [3] was calculated from the locations of these fiducial points (Section 2.6). The inframammary fold is a curvilinear structure formed where the breast mound meets the chest wall inferiorly, and it is usually hidden behind the breast mound [3,8]. The LT is the end point of the inframammary fold that can be found in lateral photographs (Figure 1). However, the LT is an anatomically subtle landmark and is sometimes not visible in women with small breasts. Thus, we approximated its location as the point of intersection of the anterior axilla line and the inframammary fold. All of the image processing techniques and calculations were performed using MATLAB® (The Mathworks, Natick, MA). The flowchart of the overall algorithm is depicted in Figure 2.

Figure 1.

Figure 1

This figure shows an example image labeled with the key fiducial points used for this study.

Figure 2.

Figure 2

The overall flowchart of the algorithms.

2.2. Dataset

The study population consists of women who underwent or were scheduled for breast reconstruction surgery from May 2008 to September 2013 in the Center for Reconstruction at The University of Texas MD Anderson Cancer Center. We excluded women who had any surgical intervention within 3 months of recruitment, as the evaluation of breast ptosis is usually delayed until bruising and swelling have resolved [8]. Informed consent was obtained from patients following institutional review board (IRB) approval.

Digital clinical photographs (N = 150) of 106 patients were reviewed, and 97 clinical photographs of 68 patients with sufficient quality were included in this study. Two cameras were used: a Nikon E8400 and a Canon EOS REBEL T1i. We used the following criteria to control the variability in photograph quality between the two cameras: 1) an appropriate frame of view of the patient’s torso (from the pubic bone to below the chin), 2) no presurgical markings made by surgeons, 3) the patient wearing a photographic undergarment only, and 4) no photographic errors in focus or lighting. All patients’ photographs were taken against a solid blue background without extra lighting. If one of the patient’s breasts was missing due to a mastectomy, only the image of the contralateral breast was evaluated.

We divided the patients that met our criteria (N = 68) into a development set and an evaluation set. Photographs (N = 49) of 36 patients were used as a development set to design the algorithm. The photographs of the remaining 32 patients (N = 48) were used as an evaluation set to evaluate the efficacy of the algorithm. Table 1 provides detailed information about the dataset including demographic summary statistics and the types of reconstructive surgery represented.

Table 1.

This table summarizes age, body mass index (BMI), and race/ethnicity information for the study population. It also shows summary statistics for Regnault’s ptosis grade, nipple shape, and the type of previous breast surgery in the study dataset.

                                         Development   Evaluation
Sample size                                   36           32

Summary statistics
  Age, mean (years)                           50.3         48.9
  Age, range (years)                          31–74        30–65
  BMI, mean                                   27.6         27.8
  BMI, range                                  20–46        18–40

Race/ethnicity
  White/Non-Hispanic                          28           23
  White/Hispanic                               6            5
  Black/Non-Hispanic                           1            2
  Asian/Non-Hispanic                           1            1

Total number of breasts                       49           48

Ptosis grade
  Grade 0                                     28           14
  Grade 1                                     13           17
  Grade 2                                      7           10
  Grade 3                                      1            7

Nipple shape
  Everted                                     45           43
  Flat                                         4            5
  Inverted                                     0            0

Healthy or preoperative breasts               36           41

Postoperative breasts
  Total number                                13            7
  Breast reduction                             5            0
  Breast augmentation                          1            3
  Breast augmentation with mastopexy           2            0
  TE/Implant reconstruction                    4            2
  TRAM reconstruction                          1            2

2.3. Preprocessing

Since each photograph is taken against a standard blue background (Figure 3.A), the background region of each image can be easily identified and removed by searching for pixels in which the blue channel value exceeds both the red and green channel values. The algorithm automatically masks out the top 20% and bottom 30% of the patient in the image. Then, for a right lateral view image, the algorithm masks out the left 45% of the patient’s torso to minimize the chance of false positive localization (Figure 1). Similarly, for a left lateral view image, the algorithm masks out the right 45% of the patient’s torso. This method is reliable since a standard pose is used for clinical photographs [9].
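The paper’s implementation used MATLAB; the preprocessing above can be sketched in Python with NumPy as follows. The function name and the broadcasting details are ours, not the authors’, and the percentages are those quoted in the text.

```python
import numpy as np

def mask_background_and_torso(rgb, view="right"):
    """Sketch of the preprocessing: remove the blue background and mask
    portions of the patient. `rgb` is an H x W x 3 uint8 array."""
    r, g, b = (rgb[..., 0].astype(int), rgb[..., 1].astype(int),
               rgb[..., 2].astype(int))
    fg = ~((b > r) & (b > g))        # background: blue dominates red and green
    rows = np.where(fg.any(axis=1))[0]
    cols = np.where(fg.any(axis=0))[0]
    top, bottom = rows[0], rows[-1]
    left, right = cols[0], cols[-1]
    h = bottom - top + 1
    w = right - left + 1
    keep = fg.copy()
    keep[: top + int(0.20 * h), :] = False          # mask top 20% of patient
    keep[bottom - int(0.30 * h) + 1 :, :] = False   # mask bottom 30%
    if view == "right":                  # right lateral: drop left 45% of torso
        keep[:, : left + int(0.45 * w)] = False
    else:                                # left lateral: drop right 45%
        keep[:, right - int(0.45 * w) + 1 :] = False
    return keep
```

The returned boolean mask marks the torso pixels retained for the later areola and contour searches.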

Figure 3.

Figure 3

A: An example of an original image used for this study. B: The Q component of the YIQ image after preprocessing. C: The algorithm selected the candidate with the highest weight, based on its location, Q channel value, and shape, as the areola. The centroid of the candidate was selected as the location of the nipple. D: The second image shows the region of interest (ROI). The last image shows the resulting rescaled cost map. E: The algorithm first estimates the location of the LVP by searching for the minimum-cost path from the nipple to the lower arm point. The vertically lowest point within the first 40% of the path from the nipple is set as the LVP. The two end points and the resulting LVP are marked in red. The LT is located using Dijkstra’s shortest path algorithm from the LVP to the upper arm point. The algorithm considered only the rectangular portion of the cost map spanning from the LVP to the upper arm point (unshaded area). The two end points and the resulting LT are marked in red and blue. F: The final response of the Gabor filters followed by thresholding. The longest edge line closest to the nipple was selected as the patient’s arm (indicated by the blue arrow). G: The estimated lower breast contour with the three fiducial points (yellow).

2.4. Nipple localization

We previously introduced a successful nipple localization method for Anterior-Posterior (AP) view clinical photographs [10]. The method uses the Q component of the YIQ (Luma In-phase Quadrature) color space of clinical photographs, based on the observation that the areola region exhibits a high value in the Q channel. Our new nipple localization method for lateral photographs also makes use of this property. After the preprocessing, the RGB image is converted into a YIQ image in order to extract the Q component (Figure 3.B) for further processing.
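The YIQ conversion is the standard NTSC transform (cf. MATLAB’s rgb2ntsc). A minimal Python sketch of extracting the Q component, using the commonly quoted NTSC coefficients:

```python
import numpy as np

def q_channel(rgb):
    """Return the Q component of the NTSC YIQ transform for an
    H x W x 3 array with values in [0, 1]. Coefficients are the
    standard NTSC values (rounded)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.211 * r - 0.523 * g + 0.312 * b
```

Reddish skin tones (such as the areola) yield higher Q values than neutral grays, which is the contrast the localization relies on.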

To locate a nipple, the algorithm first searches for areolae within the Q image. Because women have different sizes of areolae and there can be anatomical noise (e.g., moles) in the image, an adaptive thresholding technique was applied to the Q image to find areola candidates. For most women in the development set, the areola was readily distinguished from anatomical noise by its size in the thresholded Q image. Therefore, we set the starting threshold value of the algorithm as the maximum Q channel intensity for each patient image. The algorithm decreases the threshold value until it finds candidates of a size typical for areolae. We used the area of the patient’s torso after masking as a reference for evaluating the size of areola candidates. Candidates that were smaller than 5% but larger than 0.2% of the reference area were considered areolae; the sizes of all areolae in the development set fell within these cut-off values. Since areola regions are typically circular in shape, the image is eroded using a square structural element sized at 1.5% of the width of the patient’s torso and dilated using a disc structural element sized at 2.5% of the width of the patient’s torso.
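The adaptive threshold loop can be sketched as follows. This is a simplification assuming a single bright region per threshold level (the paper works on connected components and follows with the erosion/dilation described above); the function name and the step size are illustrative.

```python
import numpy as np

def areola_candidates(q, torso_area, lo=0.002, hi=0.05, step=0.01):
    """Lower the threshold from the maximum Q value until the bright
    region reaches a plausible areola size: between 0.2% and 5% of
    the masked torso area, per the cut-offs in the text."""
    t = q.max()
    while t > q.min():
        region = q >= t
        size = region.sum()
        if lo * torso_area <= size <= hi * torso_area:
            return region, t          # candidate in the accepted size band
        if size > hi * torso_area:
            break                     # overshot: no candidate in the band
        t -= step
    return None, t
```

A full implementation would label connected components at each threshold so several candidates can coexist.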

Up to this point in the algorithm, multiple candidate areolae have been identified (Figure 3.C). To select the nipple among the candidates, we construct a weight map from the location, the maximum Q intensity value, and the shape of each candidate areola,

W_Total(candidate) = W_Location(candidate) · W_Shape(candidate) · W_Intensity(candidate),

where the algorithm selects the candidate with the maximum weight as the areola. The centroid of the selected candidate is taken as the nipple (red dot in Figure 3.C).

In most cases, the nipple is located at the right outline (edge) of the patient’s torso in right lateral images and vice versa (e.g., Figure 1 and Figure 3.A). Therefore, the algorithm removes the candidate areolae that are far (more than 10% of the width of the patient’s torso) from the outline of the patient’s torso. After that, the algorithm assigns a higher weight to the remaining candidate areolae the closer they are to the lateral outline of the torso. Thus, the weight for the location is given as

W_Location(candidate) = exp(2 · max(x coordinates of candidate) / max(x coordinates of torso) + 3).

Moreover, as the areola usually exhibits a high Q channel intensity, the ratio of the maximum Q intensity value of each candidate to that of the entire image is assigned to each candidate as its weight. The weight for the intensity is, therefore,

W_Intensity(candidate) = max(Q channel values of candidate) / max(Q channel values of torso).

In addition, principal component analysis (PCA) was applied to each candidate and the ratio of the second eigenvalue to the first eigenvalue was used to evaluate the shape of the candidate. Since the typical shape of an areola in the lateral view is oval, the ratio of eigenvalues for an areola will be larger than that of most non-areolae. Thus, the weight function for the shape is given as

W_Shape(candidate) = arctan(2nd eigenvalue / 1st eigenvalue + 1).
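Taking the three weight formulas at face value, the combined score for one candidate on a right lateral image can be sketched in Python (the paper used MATLAB; the function name and argument layout are ours). The PCA step reduces to the eigenvalues of the 2×2 covariance of the candidate’s pixel coordinates.

```python
import numpy as np

def candidate_weight(cand_xy, cand_q, torso_max_x, image_max_q):
    """W_Total = W_Location * W_Shape * W_Intensity for one candidate.
    `cand_xy` is an N x 2 array of (x, y) pixel coordinates of the
    candidate region; `cand_q` holds its Q-channel values."""
    # Location: favor candidates near the lateral edge of the torso
    w_loc = np.exp(2.0 * cand_xy[:, 0].max() / torso_max_x + 3.0)
    # Intensity: ratio of the candidate's peak Q to the image's peak Q
    w_int = cand_q.max() / image_max_q
    # Shape: PCA via the 2x2 covariance; round blobs have eigenvalue
    # ratio near 1, elongated ones near 0
    cov = np.cov(cand_xy.T)
    eig = np.sort(np.linalg.eigvalsh(cov))      # ascending: [2nd, 1st]
    w_shape = np.arctan(eig[0] / eig[1] + 1.0)
    return w_loc * w_shape * w_int
```

As a sanity check, a round candidate should outrank an elongated one with the same location and intensity, matching the stated rationale for the shape term.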

2.5. Localization of the Lowest Visible Point of the breast and the Lateral Terminus of the Inframammary Fold

The preprocessed image is converted to grayscale before subsequent processing to localize the LVP and the LT. These fiducial points usually lie on the lower contour of the breast and can be easily extracted once the lower breast contour is identified. Therefore, the algorithm in this section focuses on estimating the lower breast contour.

The algorithm in this section was inspired by the study of Cardoso and Cardoso [11,12]. In their study, the authors estimated the location of the contour of breasts in the anterior-posterior (AP) view photograph using a combination of the Sobel gradient operator and Dijkstra’s shortest path algorithm to trace the path between two manually identified points. We modified their method to automatically estimate the location of the lower contour of the breast in a lateral view photograph.

The first step of the algorithm is to select two end points for the breast contour in a lateral view image. We chose as our two end points the nipple and the arm point, a point that we defined and located on the patient’s arm at the same height as the nipple (Figure 1).

Consider the right lateral photograph of a patient. To locate the arm point in the given photograph, we first need to delineate the outline of the patient’s arm. To highlight this outline, we filtered the photographs with a Gabor filter bank at orientations of 0°, 15°, 30°, and 45°. All Gabor filter responses were summed, and the longest edge line close to the nipple was taken as the patient’s arm (blue arrow in Figure 3.F). On the selected edge line, we chose the point at the same vertical height as the nipple as the other end point for the algorithm. In addition, we chose the end point of that line closest to the nipple as the patient’s anterior axillary point and drew the patient’s anterior axilla line from it (Figure 1 and Figure 3.F). We used the anterior axilla line as a reference for approximating the location of the LT.
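The Gabor filtering step can be sketched from first principles in Python. The kernel size, sigma, and wavelength below are illustrative defaults, not the paper’s parameters, and the direct convolution loop is written for clarity rather than speed.

```python
import numpy as np

def gabor_kernel(theta_deg, size=15, sigma=3.0, lam=6.0):
    """Real (even) part of a Gabor kernel at orientation theta (degrees)."""
    t = np.deg2rad(theta_deg)
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(t) + y * np.sin(t)       # rotate coordinates
    yr = -x * np.sin(t) + y * np.cos(t)
    return (np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
            * np.cos(2 * np.pi * xr / lam))

def arm_edge_response(gray):
    """Sum of absolute Gabor responses at 0, 15, 30, and 45 degrees,
    mirroring the arm-outline step; naive 'same'-size correlation."""
    out = np.zeros_like(gray, dtype=float)
    for theta in (0, 15, 30, 45):
        k = gabor_kernel(theta)
        pad = k.shape[0] // 2
        g = np.pad(gray.astype(float), pad, mode="edge")
        resp = np.zeros_like(out)
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                resp[i, j] = np.sum(g[i:i + k.shape[0],
                                      j:j + k.shape[1]] * k)
        out += np.abs(resp)
    return out
```

On a synthetic vertical step edge, the summed response is higher at the edge than in a flat region, which is the property used to trace the arm outline.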

The next step of the algorithm is to obtain a cost map. We obtained the cost map using the following equation

cost(z) = 0.15 · exp(0.0208 · (255 − z)) + 1.85,

where z is the gradient image of the grayscale image after preprocessing. The choice of the coefficients in the above function is based on the study of Cardoso and Cardoso [11,12].

However, the lower breast contour in the lateral view image is not pronounced enough in the cost map (Figure 3.E). To prevent possible failure of the algorithm, we set a region of interest (ROI) where the breast contour is usually found and rescaled the costs in that region. From the development set, we found that all patients’ lateral breast contours fall within a rectangle with a height-to-width ratio of 0.8, where the width was set as the distance between the two end points. The ROI starts at a point 20% of the ROI width above the nipple, to include the breast contour of ptotic breasts (shaded area in the second image of Figure 3.E). We chose this rectangle as the ROI and rescaled the cost values in the ROI as follows

cost_new(z) = cost(z) · exp(α · cost(z) / max(cost) − α).

The coefficient α was tuned experimentally on the development set; the final value was 10.

However, in the case of a severely ptotic breast, the LT is almost always above both of the end points. This could cause the contour estimation algorithm to fail to pass through both the LT and the LVP. We prevent this problem by dividing the breast contour estimation task into two steps: 1) locating the LVP, and 2) locating the LT using the location of the resulting LVP.

In the first step, we define the lower arm point (Figure 1) by lowering the arm point vertically by a distance equivalent to 20% of the patient’s torso width (i.e., 20% of the width of the unshaded area in Figure 1). Dijkstra’s shortest path algorithm was then applied to find the minimum-cost path between the nipple and the lower arm point. From the resulting path, we discarded the last 60% of the path from the nipple, since that segment is unlikely to contain the lowest visible point. We then selected the vertically lowest point of the remaining path as the LVP. The first image of Figure 3.E depicts the result of the first step.

In the second step, we set the resulting LVP as the starting point of Dijkstra’s shortest path algorithm. As mentioned earlier, the LT of a severely ptotic breast is located higher than the original two end points. Therefore, we define the upper arm point (Figure 1) by lifting the arm point vertically by a distance equivalent to 10% of the patient’s torso width (i.e., 10% of the width of the unshaded area in Figure 1), to ensure that the shortest path passes through the LT. After applying Dijkstra’s shortest path algorithm with the new end points, we approximated the location of the LT as the intersection of the anterior axilla line (Figure 1) and the estimated path/contour. The true LT may lie on, in front of, or behind the anterior axilla line. However, the effect of this horizontal error is minimal, as the objective ptosis measure uses only the vertical differences between fiducial points. The second image of Figure 3.E depicts the result of the second step.
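The contour-tracing step reduces to Dijkstra’s algorithm on a 2-D cost map. A minimal 8-connected sketch in Python follows; the paper does not specify the connectivity or exact cost accumulation, so this is illustrative (here, entering a pixel costs its map value).

```python
import heapq
import numpy as np

def shortest_path(cost, start, goal):
    """Dijkstra's algorithm on a 2-D cost map with 8-connectivity;
    `start` and `goal` are (row, col) tuples. Returns the minimum-cost
    pixel path from start to goal."""
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    prev = {}
    dist[start] = cost[start]
    pq = [(cost[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist[r, c]:
            continue                       # stale queue entry
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (dr or dc) and 0 <= nr < h and 0 <= nc < w:
                    nd = d + cost[nr, nc]
                    if nd < dist[nr, nc]:
                        dist[nr, nc] = nd
                        prev[nr, nc] = (r, c)
                        heapq.heappush(pq, (nd, (nr, nc)))
    path, node = [], goal
    while node != start:                   # walk predecessors back to start
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]
```

With low costs along the breast contour (strong gradients) and high costs elsewhere, the returned path hugs the contour between the chosen end points.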

2.6. Ptosis Calculation

The degree of ptosis is measured as described in our previous study [3]. In the lateral view, the breast under consideration is the one closest to the viewer. The vertical level of the LVP (v) is the reference used to compare the vertical levels of the nipple (n) and the LT (i). The vertical difference between v and n represents how far above the LVP the nipple lies, and the vertical difference between v and i represents the position of the LVP in relation to the LT. A ratio of these differences determines the degree of ptosis and is calculated as P = (n − v) / (i − v). If the nipple is higher than the lateral terminus, then there is no ptosis and the value is set to 1 by default.
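The measure itself is a one-line computation. A sketch, assuming the vertical levels n, i, and v are expressed as heights (larger = higher; with image row indices the comparisons would flip):

```python
def ptosis_measure(n, i, v):
    """Ptosis ratio P = (n - v) / (i - v), where n, i, and v are the
    vertical levels (heights) of the nipple, LT, and LVP. If the nipple
    is at or above the lateral terminus, there is no ptosis and P = 1."""
    if n >= i:
        return 1.0
    return (n - v) / (i - v)
```

For example, a nipple halfway between the LVP and the LT gives P = 0.5, and values closer to 0 indicate greater ptosis.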

2.7. Evaluation Strategy

Since our algorithm uses the approximated location of the LT instead of the true LT, we first investigated how this approximation affects the degree of ptosis described in the previous section. To do so, one non-clinical observer (J.L.), under the guidance of an experienced surgeon (G.P.R.), manually located the true LT and the approximated LT, as well as the nipple and LVP, for each image in the development set and the evaluation set. To annotate the approximated LT manually, the observer followed the same criteria as the algorithm: he marked the intersection of the axilla line and the breast contour. In our evaluation, we treated the observer as the optimal algorithm for locating all fiducial points; it has been shown that humans are reliable at locating many fiducial points on the face as well as the breast [3,13]. We used Spearman’s correlation analysis and the intraclass correlation coefficient (ICC) [14] (two-way random effects model) to assess the agreement between the ptosis measure using the true LT and that using the approximated LT.

To evaluate the efficacy of the algorithm, we assigned a real-world distance (in mm) to the unit pixel using the tape measure on the blue background wall of the image; the unit pixel distance was found to be 0.51 mm. We used the average diameter of the nipple, which is 1.2 cm [15], as a cut-off for the success of the algorithm. The other two fiducial points are anatomically less obvious than the nipple, and our previous study [3] accordingly demonstrated greater variability in human annotation of these points than in marking the location of the nipple. Moreover, for these points there are no anatomical reference data, comparable to the nipple diameter, to use as a cut-off value. Thus, we used the same average nipple diameter as a cut-off for the other two fiducial points. Since the objective ptosis measure is based purely on the vertical level differences between fiducial points, this study applied the above cut-off value to the vertical error between the locations annotated by the algorithm and the observer.
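The success criterion can be sketched as a small helper (the function name is ours): the vertical pixel error is converted to millimetres at the measured 0.51 mm per pixel and compared against the 12 mm average nipple diameter.

```python
def localization_success(auto_y_px, manual_y_px,
                         mm_per_px=0.51, cutoff_mm=12.0):
    """A localization counts as a success when the vertical error between
    the automatic and manual annotations, in millimetres, is under the
    average nipple diameter (12 mm)."""
    return abs(auto_y_px - manual_y_px) * mm_per_px < cutoff_mm
```

For instance, a 20-pixel vertical error corresponds to 10.2 mm and counts as a success, while a 30-pixel error (15.3 mm) does not.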

3. Results

3.1. Ptosis Comparisons

For this analysis, we used the entire dataset instead of analyzing the development and evaluation sets separately. Figure 4 shows a statistically significant correlation between the ptosis measure using the true LT and that using the approximated LT (rho = 0.65, p-value < 0.0001). The ICC value between the two measures was 0.64, which indicates good agreement [16] between the two measurements. This result shows that the approximated LT can effectively replace the true LT for computing the objective ptosis measure.

Figure 4.

Figure 4

This figure depicts the scatter plot (the ptosis measure using the true LT vs. the ptosis measure using the approximated LT) for our dataset. It shows a statistically significant correlation (rho = 0.65, p-value < 0.0001) between the two ptosis measure values.

3.2. Algorithm accuracy

For nipple localization, the vertical errors for both the development and evaluation sets are less than 1 cm in most cases (Figure 5.A–B and Table 2). Using the cut-off value defined in Section 2.7, the algorithm showed 86% accuracy (42 out of 49) for the development set and 90% accuracy (43 out of 48) for the evaluation set.

Figure 5.

Figure 5

A and B: The vertical and horizontal errors for locating the nipple, LVP, and LT in the development set and evaluation set, respectively.

Table 2.

This table summarizes the mean, standard deviation, and maximum of the vertical (y-direction) and horizontal (x-direction) errors, together with the success rate, for the development and evaluation sets.

                 Mean (mm)   Std (mm)   Max (mm)   # of Cases < 12 mm   Success Rate
Development
  Nipple_y          9.1        28.9       199.6          42/49               86%
  LVP_y             4.4        10.2        69.9          45/49               92%
  LT_y              8.1        10.1        59.8          40/49               82%
  Nipple_x          3.5         3.5        17.8           N/A                N/A
  LVP_x             6.8         7.1        30.4           N/A                N/A
  LT_x              4.0         4.2        21.7           N/A                N/A

Evaluation
  Nipple_y          4.8         6.9        36.3          43/48               90%
  LVP_y             6.1        11.8        61.4          44/48               92%
  LT_y              9.7        10.5        50.3          40/48               83%
  Nipple_x          5.6         9.4        61.1           N/A                N/A
  LVP_x             7.1         7.5        28.7           N/A                N/A
  LT_x              5.2         3.8        16.7           N/A                N/A

Since the localizations of the LVP and LT are based on the result of the previous stage, we reprocessed the cases in which the nipple was clearly misplaced, substituting the correct nipple location (marked by the human observer). By doing so, we can report the performance of our algorithm for annotating the LVP and LT on the full dataset.

The average vertical errors of the LVP and LT for both the development and evaluation sets are also less than 1 cm (Figure 5.A–B and Table 2). Using the same cut-off value as for the nipple, the accuracies for locating the LVP and LT are 92% (45 out of 49) and 82% (40 out of 49) for the development set, and 92% (44 out of 48) and 83% (40 out of 48) for the evaluation set, respectively. Since the LT is a subtle anatomical structure, the higher error in localizing the LT relative to the other two fiducial points is expected. In addition, our prior study found that human observers are more variable (in both intra- and inter-observer terms) in locating the LT than the other two fiducial points [3].

4. Discussion

The main objective of this study was to develop a fully automatic algorithm for calculating a measure of breast ptosis from lateral clinical photographs. The algorithm requires the automatic localization of the nipple, the LVP, and the LT on lateral view clinical photographs. This study used the approximated LT instead of pinpointing the true LT, since the LT is an anatomically subtle structure. The objective ptosis measure calculated using the approximated LT showed good agreement with that obtained using the true LT. The algorithm showed 90%, 91%, and 83% accuracy for the localization of the nipple, the LVP, and the LT, respectively, in the evaluation set.

For most cases in which the algorithm failed to correctly localize the nipple (Figure 6.D–E), the algorithm found a candidate areola located inside the true areola but failed to include the portion of the areola containing the nipple. There are two reasons for these failures: 1) early stopping of the adaptive thresholding technique before the areola candidate containing the nipple is formed, and 2) the final weight of the areola candidate without the nipple being higher than that of the areola candidate with the nipple. Other failures (Figure 6.B–C) were due to a narrow shadow on the outline of the patient’s torso, especially under the areola. The shadow produced high Q channel intensity values over a region large enough to make the adaptive thresholding technique stop before the true areola candidate was created. As a consequence, the nipple localization algorithm misjudged the candidate that resulted from the shadow as an areola candidate. The last failure case (Figure 6.A) was due to an erroneous areola candidate created near the outline of the torso below the umbilicus. This candidate was assigned a higher location weight than the true areola because it was located at the most lateral portion of the torso.

Figure 6.

Figure 6

This figure shows examples of failure cases of the proposed algorithm for nipple localization. A, B, and D are failure cases in the development set, and C and E are failure cases in the evaluation set. The red dot indicates the fiducial points located by the proposed algorithm and the green dot indicates the manually located fiducial points.

Since the LVP and LT are extracted from the breast contour, we discuss the cases in which the algorithm failed to correctly detect the breast contour. There were 5 failure cases (Figure 7.B and D–E) in which the lower breast contour was less prominent than the edge created by a shadow in the middle of the breast mound. This shadow is typically observed when there is no light source below the body. Due to this type of failure, a total of 5 LTs and 5 LVPs were clearly mislocated. In 3 of the 5 failure cases (Figure 7.B and D), the study patients were in the tissue expander (TE) phase of TE/Implant reconstruction. A TE is used to stretch the breast soft tissue before it is exchanged for the implant, so the lower part of a breast mound with a TE is usually under high tension. For this reason, the lower breast contour of a reconstructed breast in the TE phase is less prominent than the aforementioned shadow. The other 2 failure cases (Figure 7.E) occurred in a patient with small breasts, for whom the gradient of the lower breast contour was not strong enough to attract the algorithm to follow the breast contour. Such errors could be avoided in the future by applying equal lighting to the upper and lower breast mounds to prevent a shadow in the middle of the breast mound.

Figure 7.

Figure 7

This figure shows examples of failure cases of the proposed algorithm for LT and LVP. The red contour indicates the estimated breast contour. The yellow dots indicate the fiducial points located by the proposed algorithm. The green dots indicate the manually located fiducial points.

There were 2 failures in locating the lower breast contour due to the strong edge created by a surgical scar (Figure 7.A) and a mark on the lateral part of the body created by the side of a brassiere (Figure 7.C). Since these scars/marks on the lateral part of the body lie on the path of the breast contour, and their associated costs are lower than those of the true breast contour, the algorithm failed to follow the true breast contour. Due to this type of failure, a total of 2 LTs were clearly misplaced. The mark created by the brassiere fades in less than half an hour, so this kind of failure can easily be avoided in future use of the algorithm.

However, a surgical scar, especially one close to or along the line of the breast contour, can be a limitation of the proposed algorithm. There were a total of 14 cases in which a surgical scar was clearly visible in the image (8 out of 49 for the development set, 6 out of 48 for the evaluation set). Most surgical scars (10 out of 14) in our dataset were located in the middle of the breast mound, away from the lower breast contour; therefore, the breast contour estimation algorithm could exclude from consideration the strong image-gradient changes caused by those scars. The study patient in Figure 7.A underwent breast reduction surgery (mastopexy) to better match the size of the contralateral breast, which was undergoing TE/Implant reconstruction. The surgical scars of mastopexy are usually located near the breast contour, so there is a high probability that the contour estimation algorithm will follow the surgical scar instead of the breast contour.

An alternative approach would be needed for subjects who have a surgical scar near the lower breast contour. One possibility is to use manually annotated fiducial points. Alternatively, the color difference between the scar and its surrounding area could be used to develop an automated algorithm specifically for cases with surgical scars. From the images with visible scars in this dataset, we found that the absolute difference between the red and green color channels is higher in the scar area than in the surrounding area, amounting to at least 25% of the maximum possible difference. Therefore, the red-green color difference in the ROI could be used to automatically identify subjects with surgical scars for separate analysis.
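A minimal sketch of this screening idea follows. The helper names, the toy pixel values, and the use of 25% of the full 8-bit range as the cutoff are illustrative assumptions, not part of the published algorithm.

```python
def scar_mask(rgb, frac=0.25, max_diff=255):
    """Flag pixels whose red-green channel gap is at least `frac` of the
    maximum possible channel difference. The 0.25 default mirrors the
    25%-of-maximum observation for visible scars in our dataset."""
    return [[abs(r - g) >= frac * max_diff for (r, g, b) in row]
            for row in rgb]

def has_scar(rgb, min_pixels=1, **kwargs):
    """Decide whether an ROI should be routed to separate scar handling,
    based on how many pixels pass the red-green screen."""
    mask = scar_mask(rgb, **kwargs)
    return sum(flag for row in mask for flag in row) >= min_pixels

# Toy 2x2 ROI: one reddish "scar" pixel among ordinary skin tones.
roi = [[(200, 170, 150), (180, 90, 90)],
       [(200, 170, 150), (200, 170, 150)]]
```

Here the scar pixel has a red-green gap of 90 (above the 63.75 cutoff) while the skin pixels have a gap of 30, so the ROI would be flagged for separate analysis.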

The algorithm showed a higher error rate for locating the LT than for the other two fiducial points, and improving its accuracy is an obvious area for future research. The accuracy of locating the LT, as well as the other two points, could be dramatically improved if the photographic conditions were well standardized. Even though we removed very poor quality images, the dataset still contains images with uneven lighting on the subject and variations in patient positioning. Such photographic variations cause many errors in automated image analysis. For the purpose of documenting surgical outcomes in plastic surgery, standardized lighting, consistent camera-to-subject distance, proper patient positioning, and a standardized camera orientation (landscape or portrait) are important factors [17]. For the proposed algorithm in particular, correct positioning of the study patient, adequate lighting on the lower part of the breast, and a controlled background color are the most important factors to standardize.

An additional limitation of this study is the modest number of people of color in our dataset. Future studies with patients with a wider variety of skin colors would be beneficial for validating the broad applicability of our algorithm.

The clinical application of this work is to automate the computation of an objective measure of breast ptosis for breast cancer survivors considering breast reconstruction surgery. Combined with automated algorithms for quantifying other aspects of breast morphology (e.g., symmetry), our algorithm may facilitate the quantification of aesthetic outcomes across different breast reconstruction options. Such quantified information about post-reconstruction breast morphology will make it possible to relate surgical and patient variables, such as age and BMI, to breast reconstruction outcomes. Ultimately, this may help both breast cancer survivors and their surgeons make informed decisions about breast reconstruction.

In conclusion, this study presents a new automated algorithm for computing an objective measure of breast ptosis from lateral clinical photographs. The method is based on automatic localization of three fiducial points: the nipple, the LVP, and the LT. Our analysis showed that this new algorithm can be used to facilitate the process of quantifying breast ptosis.

Acknowledgments

This study was supported in part by grant RSGPB-09-157-01-CPPB from the American Cancer Society and grant R01CA143190 from the National Institutes of Health. The sponsors had no involvement in this study. The authors wish to recognize former and current MD Anderson plastic surgeons for their support and/or contribution of patients to this series: Drs. Charles E. Butler, David W. Chang, Donald P. Baumann, Geoffrey L. Robb, Jesse C. Selber, Justin M. Sacks, Mark T. Villa, Patrick B. Garvey, Pierre M. Chevray, Roman Skoracki, Scott D. Oates, and Steven J. Kronowitz. We also acknowledge Kimberly A. Primm, June Weston, and D. Samantha Picraux for their efforts in data collection. We wish to thank Francis Carter for his support in managing the data. Unfortunately, Elisabeth Beahm, MD passed away before the completion of this manuscript. She was a very valuable member of our research team and will be truly missed.

References

1. Cancer facts and figures 2015. American Cancer Society; 2015.
2. Zweifel-Schlatter M, Darhouse N, Roblin P, Ross D, Zweifel M, Farhadi J. Immediate microvascular breast reconstruction after neoadjuvant chemotherapy: complication rates and effect on start of adjuvant treatment. Ann Surg Oncol. 2010 Nov;17(11):2945–50. doi: 10.1245/s10434-010-1195-9.
3. Kim MS, Reece GP, Beahm EK, Miller MJ, Atkinson EN, Markey MK. Objective assessment of aesthetic outcomes of breast cancer treatment: measuring ptosis from clinical photographs. Comput Biol Med. 2007 Jan;37(1):49–59. doi: 10.1016/j.compbiomed.2005.10.007.
4. Regnault P. Breast ptosis. Definition and treatment. Clin Plast Surg. 1976 Apr;3(2):193–203.
5. Rinker B, Veneracion M, Walsh CP. The effect of breastfeeding on breast aesthetics. Aesthet Surg J. 2008 Sep;28(5):534–7. doi: 10.1016/j.asj.2008.07.004.
6. Christie DRH, O'Brien MY, Christie JA, Kron T, Ferguson SA, Hamilton CS, Denham JW. A comparison of methods of cosmetic assessment in breast conservation treatment. The Breast. 1996 Oct;5(5):358–67.
7. Lowery JC, Wilkins EG, Kuzon WM, Davis JA. Evaluations of aesthetic results in breast reconstruction: an analysis of reliability. Ann Plast Surg. 1996 Jun;36(6):601–6; discussion 607. doi: 10.1097/00000637-199606000-00007.
8. Bostwick J. Plastic and Reconstructive Breast Surgery. St. Louis, MO: Quality Medical Publishing; 1990. p. 1300.
9. Photographic Standards in Plastic Surgery. Plastic Surgery Educational Foundation; 2006.
10. Dabeer M, Kim E, Reece GP, Merchant F, Crosby MA, Beahm EK, Markey MK. Automated calculation of symmetry measure on clinical photographs. J Eval Clin Pract. 2011 Dec;17(6):1129–36. doi: 10.1111/j.1365-2753.2010.01477.x.
11. Cardoso JS, Cardoso MJ. Breast contour detection for the aesthetic evaluation of breast cancer conservative treatment. In: Kurzynski M, Puchala E, Wozniak M, Zolnierek A, editors. Computer Recognition Systems 2. Berlin Heidelberg: Springer; 2007. pp. 518–25. Available from: http://link.springer.com/chapter/10.1007/978-3-540-75175-5_65.
12. Cardoso JS, Cardoso MJ. Towards an intelligent medical system for the aesthetic evaluation of breast cancer conservative treatment. Artif Intell Med. 2007 Jun;40(2):115–26. doi: 10.1016/j.artmed.2007.02.007.
13. Shi J, Samal A, Marx D. How effective are landmarks and their geometry for face recognition? Comput Vis Image Underst. 2006 May;102(2):117–33.
14. McGraw KO, Wong SP. Forming inferences about some intraclass correlation coefficients. Psychol Methods. 1996;1(1):30–46.
15. Rusby JE, Brachtel EF, Michaelson JS, Koerner FC, Smith BL. Breast duct anatomy in the human nipple: three-dimensional patterns and clinical implications. Breast Cancer Res Treat. 2007 Dec;106(2):171–9. doi: 10.1007/s10549-006-9487-2.
16. Rosner BA. Fundamentals of Biostatistics. Cengage Learning; 2006. p. 908.
17. Ellenbogen R, Jankauskas S, Collini FJ. Achieving standardized photographs in aesthetic surgery. Plast Reconstr Surg. 1990 Nov;86(5):955–61. doi: 10.1097/00006534-199011000-00019.
