Author manuscript; available in PMC: 2019 Mar 17.
Published in final edited form as: IEEE Trans Med Imaging. 2016 Dec 6;36(3):734–744. doi: 10.1109/TMI.2016.2636188

Automatic Segmentation and Quantification of White and Brown Adipose Tissues from PET/CT Scans

Sarfaraz Hussein 1, Aileen Green 2, Arjun Watane 3, David Reiter 4, Xinjian Chen 5, Georgios Z Papadakis 6, Bradford Wood 6, Aaron Cypess 6, Medhat Osman 7, Ulas Bagci 8
PMCID: PMC6421081  NIHMSID: NIHMS1012455  PMID: 28114010

Abstract

In this paper, we investigate the automatic detection of white and brown adipose tissues using Positron Emission Tomography/Computed Tomography (PET/CT) scans, and develop methods for the quantification of these tissues at the whole-body and body-region levels. We propose a patient-specific automatic adiposity analysis system with two modules. In the first module, we detect white adipose tissue (WAT) and its two sub-types from CT scans: Visceral Adipose Tissue (VAT) and Subcutaneous Adipose Tissue (SAT). This process conventionally relies on manual or semi-automated segmentation, leading to inefficient solutions. Our novel framework addresses this challenge with an unsupervised learning method that separates VAT from SAT in the abdominal region for the clinical quantification of central obesity. This step is followed by a context-driven label fusion algorithm through sparse 3D Conditional Random Fields (CRF) for volumetric adiposity analysis. In the second module, we automatically detect, segment, and quantify brown adipose tissue (BAT) using PET scans because, unlike WAT, BAT is metabolically active. After identifying BAT regions using PET, we perform a co-segmentation procedure utilizing asymmetric complementary information from PET and CT. Finally, we present a new probabilistic distance metric for differentiating BAT from non-BAT regions. Both modules are integrated via an automatic body-region detection unit based on one-shot learning. Experimental evaluations conducted on 151 PET/CT scans achieve state-of-the-art performance in both central obesity and brown adiposity quantification.

Index Terms— Visceral Fat Segmentation, Central Obesity Quantification, Segmentation of Brown Fat, Brown Adipose Tissue, Abdominal Fat Quantification, Co-Segmentation

I. Introduction

BROWN adipose tissue (BAT), also known as brown fat, and white adipose tissue (WAT) are the two types of adipose tissues found in mammals (Figure 1–A). Quantification of WAT and its sub-types is an important task in the clinical evaluation of obesity, cardiac diseases, diabetes, and other metabolic syndromes [1]–[3]. Among them, obesity is one of the most prevalent health conditions. About 30% of the world’s and over 70% of the United States’ adult populations are either overweight or obese [4], [5], causing an increased risk for cardiovascular diseases, diabetes, and certain types of cancer. Central obesity (also known as abdominal obesity) is the excessive buildup of fat in the abdominal region. Traditionally, Body Mass Index (BMI) has been used as a measure of obesity and metabolic health; however, BMI remains inconsistent across subjects, especially for underweight and obese individuals. Instead, volumetry of the abdominal fat, i.e., Visceral Adipose Tissue (VAT), is considered a reliable, accurate, and consistent measure of body fat distribution. As VAT manifests itself mainly in the abdominal region, it is regarded as an important marker for evaluating central obesity. In the clinical literature, the association between VAT and different diseases has been thoroughly discussed. For instance, visceral obesity quantified through Computed Tomography (CT) was found to be a significant risk factor for prostate cancer [6]. In [7], visceral adiposity was found to be a significant predictor of disease-free survival in resectable colorectal cancer patients. In contrast to Subcutaneous Adipose Tissue (SAT), VAT was concluded to have an association with incident cardiovascular disease and cancer after adjustment for clinical risk factors and general obesity [8]. Speliotes et al. [9] found VAT to be the strongest correlate of fatty liver among all the factors used in their study. In [10], VAT was found to be an independent predictor of all-cause mortality in men after adjustment for abdominal subcutaneous and liver fat. All this clinical evidence shows that robust and accurate quantification of VAT can help improve the identification of risk factors, prognosis, and long-term health outcomes.

Fig. 1.

An illustration of different types of adipose tissues in Positron Emission Tomography (PET) and Computed Tomography (CT) scans. (A) signifies the difference at cellular level between Brown Adipose Tissue (BAT) and White Adipose Tissue (WAT). In contrast to WAT, BAT is metabolically active and consumes energy. (B) shows Subcutaneous Adipose Tissue (SAT) and Visceral Adipose Tissue (VAT) in a coronal view of CT. The red boundary illustrates the thin muscular wall separating these two sub-types. The wall remains mostly discontinuous, making SAT-VAT separation significantly challenging. (C) depicts metabolically active BAT in PET (left/middle) and PET/CT fusion (right).

However, the automatic separation of VAT from SAT in CT images is not trivial because both VAT and SAT regions share similar intensity characteristics (Hounsfield unit (HU)) and are vastly connected (Figure 1–B). To segregate these two tissues, radiologists usually apply various morphological operations along with manual interactions, but this process is subjective and unattractive for routine clinical evaluations. A set of representative slices at or near the umbilical level is often used for quantifying central obesity [11]; however, such slice-based estimates do not accurately reflect volumetric quantification. Thus, inefficient and inaccurate quantification remains a major obstacle in the clinical evaluation of body fat distribution.

BAT is important for thermogenesis and is considered as a natural defense against hypothermia and obesity [12]. Since BAT is metabolically active, the sensitivity of Positron Emission Tomography (PET) imaging to detect BAT regions is much higher than that of Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) (Figure 1–C). However, PET lacks specificity due to limited structural information. When combined with CT and/or MRI, both specificity and sensitivity are increased due to the inclusion of anatomical sites into the evaluation framework. Despite rapid improvements in the imaging facets of BAT detection, the available methods are limited to manual and semi-automated strategies; hence, they are time-consuming and non-reproducible.

II. Related Work

For abdominal fat (central obesity) quantification, Zhao et al. [13] used intensity profiles along the radii connecting sparse points on the outer wall (skin boundary) to the abdominal body center. The boundary contour was then refined with a smoothness constraint to separate VAT from SAT. This method, however, does not adapt easily to obese patients, where neighboring subcutaneous and visceral fat cavities lead to leakage in the segmentation. In another study, Romero et al. [14] used heuristic search strategies to generate the abdominal wall mask on a small set of representative slices. In a similar fashion, the method in [15] developed a semi-supervised segmentation method based on a hierarchical fuzzy affinity function; its success in patient-specific quantification is unclear. Mensink et al. [16] proposed a series of morphological operations; however, fine-tuning of the algorithm is difficult, and the procedure must be repeated for almost every patient when the abdominal wall is too thin. More recently, Kim et al. [17] generated a subcutaneous fat mask using a modified “AND” operation on four different directed masks. However, logical and morphological operations make the whole quantification system vulnerable to inefficiencies. In a more advanced method such as [18], SAT, VAT, and muscle were separated using a joint shape and appearance model, but the reproducibility of the method is highly dependent on the model at hand. Based on a similar idea as in [13], a recent method by Kim et al. [19] estimated the muscle boundary using a convex-hull and then performed smoothing by selecting points that minimize the distance between the contour and the organ regions. However, the performance depends on the goodness of fit of the convex-hull. Although the method addresses SAT-VAT separation at a volumetric level, it lacks important appearance features and volumetric smoothing.

There is no automated Computer-Aided Detection (CAD) system proposed yet for BAT quantification using radiology scans. Existing studies are mostly based on the qualitative observations of expert radiologists and nuclear medicine physicians. In those studies, strictly chosen anatomical locations were explored for BAT presence [20], [21]. The quantification process was conducted by either manual or semi-automated delineation methods. Since PET images have high contrast, thresholding and clustering-based methods are well-suited for the delineation of uptake regions. Therefore, simple thresholding was often used for segmenting uptake regions pertaining to BAT, allowing the extraction of volumetric and SUV (i.e., “standardized uptake value”) based metrics. BAT is considered present if there are areas of tissue that (i) are more than 5 mm in diameter, (ii) have a CT density within −190 to −30 Hounsfield units (HU), and (iii) have an SUV of 18F-fluorodeoxyglucose (18F-FDG) of at least 2 g/ml [20], [21] in the corresponding PET images. It is important to note that in [22], the authors chose a threshold of SUVmax > 3 g/ml to identify BAT regions; hence, there is no clear consensus on the choice of SUV for BAT regions. In the last step, regions of interest (ROIs) are manually defined to remove false positive (FP) regions from consideration. Several manual FP removal steps may be required for differentiating uptake between BAT regions and lymph nodes, vessels, bones, and the thyroid [23]. All these manual identifications require extensive user knowledge of anatomy. Furthermore, in cases where pathologies are present, segregating pathologies from normal variants of 18F-FDG uptake in BAT regions can be extremely challenging [12].

Our contributions:

To the best of our knowledge, the proposed system is the first fully automated method for detecting, segmenting, and quantifying SAT, VAT and BAT regions from radiology scans. First, we propose an automated abdominal and thorax region detection algorithm, based on deep learning features. Second, we develop an unsupervised learning method for separating VAT from SAT using appearance (via Local Outlier Scores) and geometric (via Median Absolute Deviation) cues. For volumetric quantification, we integrate contextual information via a sparse 3D Conditional Random Fields (CRF) based label fusion algorithm. Our work can be considered as the largest central obesity quantification study (151 CT scans) to date, validating accurate region and abdominal fat detection algorithms.

For our contributions in BAT detection and segmentation, we first use a fixed HU interval to identify total adipose tissue (TAT) from CT images. Next, we devise a seed sampling scheme for extracting foreground and background cues from high uptake regions of PET images in head-neck and thorax regions only. The identified seeds are propagated into the corresponding CT scans as well. This is followed by a PET-guided image co-segmentation on the hyper-graph (PET/CT) to delineate potential BAT regions. Lastly, for FP rejection, we propose a novel probabilistic metric that combines Total Variation and Cramér-von Mises distances to differentiate BAT regions from non-BAT regions. Figure 2 shows the overview of the proposed system.

Fig. 2.

A flow diagram of the proposed system for whole-body adiposity analysis. The input to the system comprises PET/CT images. Thorax and abdominal regions are detected using deep learning features in the first stage (Section III), followed by Subcutaneous-Visceral adipose tissue segmentation (Section IV) using CT images, and Brown Adipose Tissue detection and quantification (Section V) using PET images.

III. Region Detection in Whole Body CT Volumes

The input to our region detection algorithm is a whole-body CT volume $I_{X \times Y \times Z}$. Since it is difficult to obtain a large amount of annotated training data in medical imaging applications, one should resort to as few training examples as possible. Therefore, we propose a new region detection method based on the concept of one-shot learning, as the learners are trained on only one image to make predictions for the remaining images. The proposed region detection framework locates two slices in the CT volume, i.e., the top and bottom of the region of interest (yellow box in Figure 3). Detecting these two slices is challenging since they can easily be confused with similarly appearing slices; therefore, a better feature representation is needed. In this regard, deep learning has recently been adopted quite successfully for computer vision and medical imaging applications [24], [25]. To benefit from this rich representation of image features, we use Convolutional Neural Network (CNN) features (i.e., deep learning features) as image attributes extracted from the first fully-connected layer of the pre-trained Fast-VGG network [26]. The network comprises 5 convolution layers and 3 fully connected layers. The first, second, and fifth convolution layers are followed by a max-pooling layer by convention. For faster operations, a 4-pixel stride is used in the first convolution layer. The dimension of the feature vector generated for each slice is 4096. Given the reference annotations of the body regions for one subject’s volumetric image, we compute its Euclidean distance to the test subjects’ slices using deep learning features. For training, we use two sets of learners: positive ($D_p$) and negative ($D_n$). The test slice $I \in \mathbb{I}$ with the smallest distance to the positive set and the largest distance to the negative set is selected as the desired result. In order to combine the probabilities pertaining to the $D_p$ and $D_n$ learners, we use logarithmic opinion pooling [27] as:

$$P(I) = \frac{1}{Z}\, P(I \mid D_p)^{w}\, P(I \mid D_n)^{1-w}, \tag{1}$$

where $Z = \sum_{I \in \mathbb{I}} P(I \mid D_p)^{w} P(I \mid D_n)^{1-w}$ is the normalizing constant and $w$ is the weight parameter.
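A minimal sketch of this slice-scoring scheme is given below, assuming pre-computed 4096-D CNN features; the exponential conversion of distances to probabilities is our illustrative assumption, not specified in the text.

```python
# Illustrative sketch: one-shot slice scoring with logarithmic opinion pooling
# (Equation 1). Feature extraction is assumed done elsewhere; the exp(-distance)
# probability conversion below is an assumption for demonstration.
import numpy as np

def slice_scores(test_feats, pos_feats, neg_feats, w=0.5):
    """test_feats: (n, 4096); pos_feats / neg_feats: features of D_p / D_n slices."""
    d_pos = np.linalg.norm(test_feats[:, None] - pos_feats[None], axis=2).min(axis=1)
    d_neg = np.linalg.norm(test_feats[:, None] - neg_feats[None], axis=2).min(axis=1)
    p_pos = np.exp(-d_pos)          # small distance to D_p -> high P(I | D_p)
    p_neg = 1.0 - np.exp(-d_neg)    # large distance to D_n -> high P(I | D_n)
    pooled = (p_pos ** w) * (p_neg ** (1.0 - w))
    return pooled / pooled.sum()    # division by Z normalizes the pool

# The top/bottom slices of the region are taken at the highest pooled scores.
```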

Fig. 3.

An overview of the proposed SAT-VAT separation method. Once the abdominal region is detected, Total Adipose Tissue (TAT) is segmented using CT intensity interval known for fat tissue. Initial Subcutaneous-Visceral adipose tissue boundary is estimated by evaluating multiple hypothesis points. Geometric Median Absolute Deviation (MAD) and appearance based Local Outlier Scores (LoOS) are then combined within the 3D Conditional Random Field (CRF) based label fusion.

IV. SAT-VAT Separation and Quantification

The proposed SAT-VAT separation framework consists of four steps, as illustrated in Figure 3. Since the HU interval for certain substances such as fat, water, and air in CT remains relatively constant, it is straightforward to identify TAT using a clinically validated thresholding interval in HU space (Step 1). In Step 2, we identify the initial boundary between the VAT and SAT regions by conducting a sparse search over the line connecting the abdominal region center with the skin boundary (white dotted line in Figure 4). In Step 3, two refinement methods are applied to remove FP boundary contour points: the Median Absolute Deviation (MAD) coefficient and Local Outlier Scores (LoOS). In the final step, we develop a sparse 3D CRF formulation to obtain the final SAT-VAT separation utilizing shape, anatomy, and appearance cues.

Fig. 4.

An illustration of skin boundary and hypothesis points along the radii connecting S with its centroid C. For each point (yellow) in S, a set of hypotheses (blue) is generated which is along the line connecting the skin boundary point with the centroid C.

Data for Central Obesity Quantification:

With IRB approval, we retrospectively collected imaging data from 151 subjects who underwent PET/CT scanning (67 male, 84 female; mean age: 57.4 years). Since the CT images are from whole-body PET/CT scans (64-slice Gemini TF, Philips Medical Systems), they have low resolution, and no contrast agent was used for scanning. The in-plane spacing (xy-plane) of the CT images was 1.17 mm by 1.17 mm, and the slice thickness was 5 mm. The CT scanner parameters were as follows: 120–140 kV and 33–100 mA (based on BMI), 0.5 s per CT rotation, a pitch of 0.9, and a 512 × 512 data matrix used for image fusion. The field of view (FOV) was from the top of the head to the bottom of the feet. CT reconstruction was based on a filtered back-projection algorithm. No oral or intravenous contrast was administered.

Subjects were selected to have a roughly equal distribution of varying BMIs in order to obtain an unbiased evaluation. Our evaluation set comprised underweight subjects (N = 20), normal subjects (N = 50), overweight subjects (N = 46), and obese subjects (N = 35). UB (>10 years of experience in body imaging with CT and PET/CT interpretation) and GZP (>10 years of experience as a nuclear medicine physician and body imaging fellowship in radiology and imaging sciences) segmented fat regions by separating the SAT and VAT boundary and using appropriate image post-processing such as edge-aware smoothing. Complementary to this interpretation, the participating radiologist BW (>20 years of experience in general radiology, body imaging, interventional radiology, and oncology imaging) qualitatively evaluated the SAT-VAT separating boundary for both interpreters, and their segmentations were accepted at the clinical level of evaluation. This process is currently the most common procedure for creating a reference standard for segmentation evaluation [28]–[31]. Above 99% agreement in Dice Similarity Coefficient (i.e., overlap ratio) was found between the observers’ evaluations, with no statistically significant difference (t-test, p > 0.5).

Step 1: Total Adipose Tissue (TAT) Segmentation

The input to our fat quantification pipeline is the abdominal volume. Following clinical convention, we threshold the abdominal CT volume with the −190 to −30 HU interval to obtain TAT [32]. We also perform a morphological closing on the input image using a disk of fixed radius r, followed by median filtering in an m × m neighborhood. This preprocessing suppresses noise and smooths the volume for the next phase.
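A minimal sketch of this step is given below, assuming the abdominal CT volume is available as a NumPy array in Hounsfield units; r = 10 and m = 3 follow the parameter values reported in Section VI-A.

```python
# Sketch of Step 1: clinically validated HU thresholding followed by slice-wise
# morphological closing and median filtering, per the text.
import numpy as np
from scipy import ndimage
from skimage.morphology import disk, binary_closing

def segment_tat(ct_volume_hu, r=10, m=3):
    """Threshold CT to the fat interval [-190, -30] HU and smooth the mask."""
    tat = (ct_volume_hu >= -190) & (ct_volume_hu <= -30)
    smoothed = np.zeros_like(tat)
    for z in range(tat.shape[2]):
        closed = binary_closing(tat[:, :, z], disk(r))       # closing, radius r
        smoothed[:, :, z] = ndimage.median_filter(closed, size=m)  # m x m median
    return smoothed
```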

Step 2: Initial Boundary Estimation

We roughly identify the skin boundary of the abdominal region by selecting the longest isoline in the thresholded image (obtained from Step 1). For each point on the skin boundary contour $S = \{s_1, \ldots, s_n\}$, we generate a set of hypotheses $H = \{h_1, \ldots, h_u\}$ along the radii connecting $S$ with its centroid $C$ (Figure 4). Each hypothesis (candidate boundary location) is then verified for the possibility of being a boundary location by assessing image gradient information on the line connecting its location to the centroid $C$ (white arrows in Figure 4). The SAT-VAT separation boundary, $B = \{b_1, \ldots, b_n\}$, should satisfy the condition $\nabla h_j \neq \nabla h_{j-1}$ for $h_j \in B$, and $b_i \in H$, $\forall i$. As illustrated in Figure 4, hypothesis points change their gradients in the close vicinity of boundary $B$. These boundary points can still be noisy and may get stuck inside the small cavities of the subcutaneous fat. To alleviate such instabilities, the next step proposes a two-stage refinement methodology; an illustrative sketch of the hypothesis search itself follows.
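The sketch below illustrates the radial hypothesis search; the gradient sign-change test and the (x, y) point convention are our assumptions for demonstration.

```python
# Illustrative sketch of Step 2: for each skin point s, sample hypothesis points
# along the ray toward the centroid C and keep the first gradient sign change.
import numpy as np

def initial_boundary(fat_mask, skin_pts, n_hyp=50):
    """fat_mask: 2-D binary slice; skin_pts: (n, 2) array of (x, y) points on S."""
    c = skin_pts.mean(axis=0)                        # centroid C
    grad_y, grad_x = np.gradient(fat_mask.astype(float))
    boundary = []
    for s in skin_pts:
        ts = np.linspace(0.05, 0.95, n_hyp)
        hyp = s[None, :] + ts[:, None] * (c - s)[None, :]   # hypotheses h_1..h_u
        rows, cols = hyp[:, 1].astype(int), hyp[:, 0].astype(int)
        direction = (c - s) / np.linalg.norm(c - s)
        # Directional gradient along the ray at each hypothesis point.
        g = grad_x[rows, cols] * direction[0] + grad_y[rows, cols] * direction[1]
        sign_change = np.where(np.sign(g[1:]) != np.sign(g[:-1]))[0]
        if sign_change.size:
            boundary.append(hyp[sign_change[0] + 1])
    return np.array(boundary)                        # candidate SAT-VAT boundary B
```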

Step 3: Outlier Rejection

Geometric MAD:

In the first stage of outlier removal, we apply MAD to the distances between B and S. The intuition is that the SAT-VAT separation boundary should maintain a smoothly varying distance from the skin boundary, whereas outliers in subcutaneous and visceral cavities usually violate this smooth transition; therefore, we apply MAD to the point-wise distances between B and S to remove outliers based on geometric information. The higher outlier sensitivity of MAD in comparison with mean-based and other methods was studied in [33]. The resulting MAD coefficient $\Phi_i$, for each boundary point, indicates a score for being an outlier:

$$\Phi_i = \left(\lvert d_i - \operatorname{med}(d)\rvert\right)\left(\operatorname{med}\left(\lvert d_i - \operatorname{med}(d)\rvert\right)\right)^{-1}, \tag{2}$$

where $d$ denotes the Euclidean distances between $S$ and $B$, $d_i = \lVert s_i - b_i \rVert_2$, and $\operatorname{med}$ is the median operator. Boundary locations with high MAD coefficients $\Phi > t$ (Section VI-A) are labeled as outliers and subsequently removed from $B$.
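A direct implementation of Equation 2 follows, assuming matched point arrays; t = 2.5 follows the parameter values in Section VI-A.

```python
# MAD-based geometric outlier rejection (Equation 2).
import numpy as np

def mad_filter(skin_pts, boundary_pts, t=2.5):
    """skin_pts, boundary_pts: matched (n, 2) arrays of s_i and b_i."""
    d = np.linalg.norm(skin_pts - boundary_pts, axis=1)   # d_i = ||s_i - b_i||_2
    med = np.median(d)
    phi = np.abs(d - med) / np.median(np.abs(d - med))    # MAD coefficient Phi_i
    return boundary_pts[phi <= t]                         # drop points with Phi > t
```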

Local Outlier Scores:

Although MAD can be quite effective in outlier rejection, some boundary locations may still remain that lead to drift of the SAT-VAT separation due to the limitations of shape/geometry-based attributes. To mitigate the influence of those boundary points, we apply a second stage of outlier rejection, integrating appearance information through Histogram of Oriented Gradients (HOG) features [34]. For each candidate boundary point, we attach its appearance attributes (HOG) computed in a c × c cell. Since candidate boundary points lie on a high-dimensional (non-Euclidean) manifold, we use the normalized correlation distance to compute similarities between those points.

Points that do not map to denser regions in the high-dimensional feature space are considered outliers. Following this intuition, we obtain local outlier scores (LoOS) Π, indicating a confidence measure for each point being an outlier [35]:

$$\Pi(x) = \operatorname{erf}\left(\frac{\operatorname{PLOF}(x)}{\sqrt{2} \cdot n\operatorname{PLOF}}\right), \tag{3}$$

where erf is the Gaussian error function, and PLOF is the probabilistic local outlier factor based on the ratio of the density around point x and the mean value of the estimated densities around all the remaining points. nPLOF is λ times the standard deviation of PLOF.
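A compact LoOP-style implementation following [35] is sketched below; Euclidean k-NN is substituted here for the normalized correlation distance, and λ = 3 follows Section VI-A.

```python
# Local outlier probabilities (LoOP) in the spirit of Kriegel et al. [35],
# applied to HOG descriptors of the candidate boundary points.
import numpy as np
from scipy.special import erf
from sklearn.neighbors import NearestNeighbors

def loop_scores(features, k=10, lam=3.0):
    """features: (n, d) HOG descriptors; returns Pi(x) in [0, 1] per point."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(features)
    dist, idx = nn.kneighbors(features)
    dist, idx = dist[:, 1:], idx[:, 1:]            # drop the self-match
    sigma = np.sqrt((dist ** 2).mean(axis=1))      # probabilistic set distance
    pdist = lam * sigma
    plof = pdist / pdist[idx].mean(axis=1) - 1.0   # PLOF(x)
    nplof = lam * np.sqrt((plof ** 2).mean())      # lambda std of PLOF
    return np.maximum(0.0, erf(plof / (nplof * np.sqrt(2))))
```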

Step 4: Context Driven Label Fusion Using Sparse 3D CRF

In order to fuse the labels of the boundary candidates across different slices of an image volume and create a fine SAT-VAT separating surface, we use a sparse 3D Conditional Random Field (CRF). In our CRF formulation, a set of N slices is selected to construct a graph G = (V, E), where the nodes (V) consist of only the hypothesis boundary points (not the image pixels) and the edges (E) join neighboring boundary points in a high-dimensional feature space. The two labels, i.e., outlier and SAT-VAT boundary, are treated as source and sink in the context of our work. We define the unary potentials of the CRF as the probabilities obtained after applying k-means clustering to the normalized outlier scores from Step 3:

$$\Theta(k_i \mid v_i) = -\log\left(P(k_i \mid v_i)\right). \tag{4}$$

We define the pairwise potentials between neighboring points $v_i$ and $v_j$ as:

$$\Psi(k_i, k_j \mid v_i, v_j) = \left(\frac{1}{1 + \lvert \phi_i - \phi_j \rvert}\right)\left[k_i \neq k_j\right], \tag{5}$$

where $\lvert \cdot \rvert$ is the L1 distance, $[\cdot]$ is the indicator function, and $\phi$ is the concatenated vector of appearance and geometric features. Once the unary and pairwise potentials are defined, we seek to minimize the negative logarithm of $P(k \mid G; w)$ with labels $k \in \{0, 1\}$ and weight $w$ as:

$$k^{*} = \operatorname*{arg\,min}_{k,w}\left(-\log\left(P(k \mid G; w)\right)\right) = \operatorname*{arg\,min}_{k,w}\left(\sum_{v_i \in V} \Theta(k_i \mid v_i) + w \sum_{(v_i, v_j) \in E} \Psi(k_i, k_j \mid v_i, v_j)\right) \tag{6}$$

Equation 6 is solved using graph-cut based energy minimization [36]. Graph-cut for more than two labels is an NP-hard problem solved by approximate methods; we chose graph-cut to minimize the energy function of the sparse 3D CRF because, in contrast to level sets and loopy belief propagation, graph-cut with two labels returns the global optimum in polynomial time. Additionally, the graph-cut formulation with the discrete binary solution space relaxed to [0, 1] by linear programming (as in Equation 6) is a convex problem. After solving Equation 6, we fit a convex-hull around the obtained visceral boundaries, and the region inside the convex-hull is masked as VAT. A sketch of this label fusion step follows.
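The sketch below uses PyMaxflow, our illustrative choice of max-flow solver (any equivalent package works); the unaries follow Equation 4 and the pairwise weights Equation 5.

```python
# Sketch of the sparse 3D CRF label fusion via graph cut. Node/edge construction
# over the N selected slices is assumed done upstream.
import numpy as np
import maxflow

def crf_label_fusion(p_boundary, phi, edges, w=0.5, eps=1e-8):
    """p_boundary: (n,) P(boundary | v_i); phi: (n, d) features; edges: (m, 2)."""
    g = maxflow.Graph[float]()
    nodes = g.add_nodes(len(p_boundary))
    for i, p in enumerate(p_boundary):
        # Unary potentials Theta = -log P (Equation 4): the source capacity is
        # paid when the node takes the boundary (sink) label, the sink capacity
        # when it takes the outlier (source) label.
        g.add_tedge(nodes[i], -np.log(p + eps), -np.log(1.0 - p + eps))
    for i, j in edges:
        # Potts pairwise term weighted by feature similarity (Equation 5).
        pw = w / (1.0 + np.abs(phi[i] - phi[j]).sum())
        g.add_edge(nodes[i], nodes[j], pw, pw)
    g.maxflow()
    # Sink segment (1) corresponds to the SAT-VAT boundary label here.
    return np.array([g.get_segment(n) for n in nodes])
```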

V. Brown Fat Detection and Segmentation

The proposed BAT detection and delineation algorithm initiates with the segmentation of fat tissue from CT, followed by an automatic seed selection for BAT. We then perform PET guided CT co-segmentation and lastly propose a false positive rejection method. These 4 steps are depicted in Figure 5.

Fig. 5.

An overview of the proposed Brown Adipose Tissue (BAT) detection and segmentation system. Given the head-neck and thorax regions, adipose tissue is identified using CT thresholding intervals (Step 1). Using the corresponding PET scans, segmentation seeds are sampled in accordance with high uptake regions (Step 2). PET-CT co-segmentation is performed using Random Walk (Step 3) followed by false positive removal (Step 4) using Total Variation and Cramér-von Mises distances.

Data for Quantification of BAT:

This retrospective study was institutional review board (IRB) approved, and the need for written informed consent was waived. Thirty-seven adult (>21 years) oncology patients with FDG BAT uptake were identified from PET/CT studies from 2011–2013. The control cohort consisted of 74 adult oncology patients without detectable FDG BAT uptake, matched for BMI/gender/season. The oncology patients had malignant tumors, all of which were biopsy proven. From the 4,458 FDG PET/CT reports in our database, there were 46 unique adult patients whose PET/CT reports specified the presence of BAT. Eight patients were excluded for only negligible PET/CT evidence of BAT reported in the paravertebral region. Another patient was excluded because FDG uptake was associated with interatrial lipomatous hypertrophy. Apart from these, the final selection of PET/CT scans was confirmed based on the consensus agreement of the participating nuclear medicine physicians, radiologist, and clinician. A total of 37 adult BAT patients without FDG-avid liver lesions were included in this study.

An intravenous injection of 5.18 MBq/kg (0.14 mCi/kg) 18F-FDG was administered to patients with a blood glucose level ≤ 200 mg/dL after fasting for at least four hours. Patients sat in a quiet room during the 60-minute uptake phase and were instructed to remain quiet and refrain from physical activity. All scans were acquired using a Gemini TF (Philips Medical Systems) PET/CT scanner. There were no statistically significant differences between the two cohorts in gender, race, BMI, height, and weight (p > 0.05). The voxel dimensions in the PET scans were 4 mm × 4 mm × 4 mm. The PET component of the PET/CT scanner was composed of lutetium-yttrium oxyorthosilicate (LYSO)-based crystals. Emission scans were acquired at 1–2 min per bed position. The FOV was from the top of the head to the bottom of the feet. The three-dimensional (3D) whole-body acquisition parameters were a 144 × 144 matrix and an 18 cm FOV with a 50% overlap. The reconstruction process in the scanner was based on the 3D Row Action Maximum Likelihood Algorithm (RAMLA) [37].

To develop the reference standard, we used manual delineations from three experts. First, the participating nuclear medicine physicians (MO: >20 years of experience, GZP: >10 years of experience, and AG: >10 years of experience) agreed on the predetermined SUV cut-off. GZP segmented the BAT regions, blinded to the consensus segmentation of MO and AG. Therefore, two delineations were considered in the evaluation, although three experts worked on the segmentation of BAT regions. When segmenting the BAT area, interpreters were provided viewer/fusion software as well as manual, automated, and semi-automated contouring methods. The interpreters used both CT images (to define anatomical sites and fat tissue with the predefined HU interval) and PET images (with a 2.0 g/ml SUVmax cut-off) when delineating BAT regions. The fusion of PET with thresholded CT images provided uptake only in fat regions, removing most of the false positive uptake regions from consideration. Next, the interpreters used thresholding on PET uptake within an ROI (roughly drawn by the experts using a manual contouring tool) for each detected BAT region. Finally, expert interpreters performed the necessary corrections on the segmented PET uptake using manual contouring tools, ensuring that the segmentations did not overlap with muscle, lymph nodes, and tumors.

Step 1: Segmenting Fat Tissue from CT Scans

The standard reference for estimating fat tissue in CT is the computed planimetric method or a fixed attenuation range from −190 to −30 HU (Section IV, Step 1). In our implementation, we extend this range to [−250, −10] HU to be more inclusive. Prior to this operation, we employ 3D median filtering to smooth the images. The resulting segmentations form the basis for differentiating BAT from non-BAT regions.

Step 2: Automatic Seed Selection

BAT regions are metabolically active, and studies report that an SUVmax of at least 2 g/ml is observed in BAT regions [20], [21]. However, it is important to note that 18F-FDG accumulates not only in BAT but also in tumor regions; hence, a high SUVmax does not necessarily indicate BAT presence. To accurately characterize BAT, the anatomical/structural counterpart of the PET images is required. Since BAT regions have SUVmax ≥ 2 g/ml, we threshold the head/neck and thorax regions accordingly, following the automated seeding module of the joint segmentation method proposed in [38]. The resulting thresholded PET images most likely include numerous disconnected regions, since many pixels may have SUV larger than 2 g/ml due to high metabolic activity. For each disconnected region, the pixels with maximum SUV are defined as foreground seeds. To set background pixels, we explore the neighborhood of foreground pixels by searching in 8 directions, marking the first pixel with less than or equal to 40% of the SUVmax (i.e., the conventional percentage for clinical PET thresholding); those pixels are set as background seeds. The final step is to insert additional background seeds into the pixels lying on the spline connecting background seeds, as explained in [38]. Once the background and foreground seeds are identified, Random Walk (RW) co-segmentation is employed by solving Equation 8 for the unknown labels of the pixels (nodes). A simplified sketch of the seed sampling follows.
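In the sketch below, the (y, x, z) indexing convention and the omission of the spline-based background augmentation of [38] are our simplifying assumptions.

```python
# Simplified sketch of Step 2: threshold SUV, take the max-SUV pixel of each
# connected component as a foreground seed, then search 8 in-plane directions
# for the first pixel at or below 40% of the region's SUVmax (background seed).
import numpy as np
from scipy import ndimage

def sample_seeds(suv, suv_cut=2.0, bg_frac=0.4):
    """suv: 3-D SUV volume indexed as (y, x, z)."""
    labels, n = ndimage.label(suv >= suv_cut)
    fg_seeds, bg_seeds = [], []
    for lbl in range(1, n + 1):
        coords = np.argwhere(labels == lbl)
        region_suv = suv[tuple(coords.T)]
        peak = coords[region_suv.argmax()]          # max-SUV pixel -> foreground
        fg_seeds.append(tuple(peak))
        suv_max = region_suv.max()
        for dy, dx in [(-1,-1),(-1,0),(-1,1),(0,-1),(0,1),(1,-1),(1,0),(1,1)]:
            y, x, z = peak
            while 0 <= y < suv.shape[0] and 0 <= x < suv.shape[1]:
                if suv[y, x, z] <= bg_frac * suv_max:
                    bg_seeds.append((y, x, z))      # first low-uptake pixel
                    break
                y, x = y + dy, x + dx
    return fg_seeds, bg_seeds
```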

Step 3: PET-Guided Random Walk Image Co-Segmentation

It is reasonable to consider the BAT boundary determination process as a co-segmentation problem where the contributions of PET and CT in segmentation procedure are unequal. Based on our co-segmentation algorithm proposed in [38], herein we introduce a PET-guided RW co-segmentation algorithm with asymmetric weights. This is based on the fact that the influence of PET on BAT segmentation results is higher than that of CT.

PET and CT images are inherently registered owing to the hybrid acquisition of the PET/CT scanner. Any inconsistencies due to breathing and the different timing of PET and CT imaging in the PET/CT scanner are minimized using deformable image registration as a post-processing step. Considering this fact, the two graphs pertaining to CT and PET, $G_{CT} = (V_{CT}, E_{CT})$ and $G_{PET} = (V_{PET}, E_{PET})$, can be combined to define a hyper-graph $G_H = (V_H, E_H)$, on which the co-segmentation algorithm is applied. Note that for each image, we define a connected undirected graph $G$ with nodes $v \in V$ and edges $e \in E \subseteq V \times V$. Since performing a random walk on the product graph $G_H$ is equivalent to performing a simultaneous random walk on the graphs $G_{CT}$ and $G_{PET}$, we define the nodes and edges as follows:

$$V_H = \left\{(v_i^{CT}, v_i^{PET}) : v_i^{CT} \in V_{CT} \wedge v_i^{PET} \in V_{PET}\right\},$$
$$E_H = \left\{\left((v_i^{CT}, v_i^{PET}), (v_j^{CT}, v_j^{PET})\right) : (v_i^{CT}, v_j^{CT}) \in E_{CT} \wedge (v_i^{PET}, v_j^{PET}) \in E_{PET}\right\}. \tag{7}$$

Similarly, the combinatorial Laplacian matrix $L_H$ (that includes labeled and unlabeled nodes as well as weight parameters $w$ of the imaging data) of the product graph $G_H$ is updated from the conventional RW formulation to co-segmentation as $L_H = (L_{CT})^{\alpha} \otimes (L_{PET})^{\theta}$, where $\alpha$ and $\theta$ are constants, $0 \leq \alpha < \theta \leq 1$, and $\otimes$ denotes the direct product. Lastly, the probability distribution of intensity values for the product lattice $x_H$ is defined as the direct multiplication of the initial probability distributions of $x_{CT}$ and $x_{PET}$ as $x_H = (x_{CT})^{\zeta} \otimes (x_{PET})^{\eta}$, where $\zeta$ and $\eta$ optimize the initial probability distributions subject to the constraint $0 \leq \zeta < \eta \leq 1$. The desired random walk probabilities are equivalent to the solution of the combinatorial Dirichlet problem [40] as:

$$D[x_H] = \frac{1}{2}(x_H)^{T} L_H x_H = \frac{1}{2}\sum_{e_{ij} \in E_H} w_{ij}^{H}\left(x_i^{H} - x_j^{H}\right)^2, \tag{8}$$

where a combinatorial harmonic function of $x_H$, satisfying the Laplace equation $\nabla^2 x_H = 0$, minimizes Equation 8. To emphasize the effect of PET more than that of CT for BAT region delineation, we select the combination $(\alpha, \theta) = (0.05, 0.95)$ and $(\zeta, \eta) = (0.3, 0.7)$ after various empirical evaluations.
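A toy-scale sketch follows, under a simplifying assumption: since the volumes are co-registered, both graphs share one lattice, and the product-graph weighting is approximated element-wise as $w_H = w_{CT}^{\alpha} \cdot w_{PET}^{\theta}$ rather than through the explicit Kronecker product.

```python
# Simplified PET-guided random walk co-segmentation: asymmetric element-wise
# fusion of per-edge weights, then the seeded Dirichlet problem of [40].
import numpy as np
from scipy.sparse import coo_matrix, csr_matrix
from scipy.sparse.csgraph import laplacian
from scipy.sparse.linalg import spsolve

def rw_coseg(w_ct, w_pet, edges, n, fg, bg, alpha=0.05, theta=0.95):
    """edges: (m, 2) lattice edges; w_ct / w_pet: (m,) edge weights per modality;
    fg / bg: foreground / background seed node indices."""
    w_h = (w_ct ** alpha) * (w_pet ** theta)        # PET-dominant edge weights
    i, j = edges[:, 0], edges[:, 1]
    W = coo_matrix((np.r_[w_h, w_h], (np.r_[i, j], np.r_[j, i])), shape=(n, n))
    L = csr_matrix(laplacian(W))
    seeds = np.r_[fg, bg]
    x_s = np.r_[np.ones(len(fg)), np.zeros(len(bg))]  # fixed seed probabilities
    free = np.setdiff1d(np.arange(n), seeds)
    # Dirichlet problem (Equation 8): L_uu x_u = -L_us x_s.
    x_u = spsolve(L[free][:, free], -L[free][:, seeds] @ x_s)
    prob = np.zeros(n)
    prob[seeds], prob[free] = x_s, x_u
    return prob > 0.5                               # foreground (BAT) mask
```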

Step 4: Differentiating BAT From Non-BAT Regions

BAT regions are not easily separable from other fat regions in CT because WAT and BAT follow the same intensity patterns (fixed HU interval). Conventionally, the intensity values of fat regions can be considered to follow a normal distribution with a known mean $\mu$ and variance $\sigma$ (i.e., $\mathcal{C} = \mathcal{N}(\mu, \sigma)$). Since the PET and CT images are co-registered, the segmented regions in both PET and CT are equivalent: $r_{PET} = r_{CT}$. We next formulate the problem of differentiating BAT from non-BAT regions as follows. The intensity distribution $p$, obtained from the $r_{CT}$ correspondence of each segmented uptake region $r_{PET}$, should be in close vicinity of $\mathcal{C}$, such that $d(p, \mathcal{C}) < \epsilon$, where $d \in [0, 1]$ is a distance metric measuring whether $p$ belongs to the class of distribution $\mathcal{C}$ or not. We postulate that $p$ is sufficiently far from $\mathcal{C}$ when lymph nodes, tumor regions, or other non-fat tissues are involved in $r_{CT}$.

For the probabilistic distance metric $d$ in our framework, we use two complementary distance measures: the total variation distance ($d_{TV}$) and the Cramér-von Mises distance ($d_{CM}$). $d_{TV}$ is equivalent to the L1-norm and can be formulated as half of the L1-distance:

$$d_{TV} = \frac{1}{2}\sum_{x \in \Omega} \lvert p(x) - \mathcal{C}(x)\rvert, \tag{9}$$

where $\Omega$ is a measurable space on which $p$ and $\mathcal{C}$ are defined. Complementary to $d_{TV}$, we also use $d_{CM}$ to judge the goodness of fit between the two distributions by emphasizing the L2-distance. In other words, $d_{CM}$ is effective in situations where the two distributions under comparison have dissimilar shapes (although similar mean and variance can still be captured by $d_{TV}$). The Cramér-von Mises statistic is defined as:

$$d_{CM} = \min_{x}\,\lvert P(x) - \psi(x)\rvert, \tag{10}$$

where $\psi(x)$ and $P(x)$ are the cumulative distribution functions of $\mathcal{C}(x)$ and $p(x)$, respectively. The proposed distance $d$ is formed by combining $d_{CM}$ and $d_{TV}$ as:

$$d = \sqrt{d_{CM}^{2} + d_{TV}^{2}}. \tag{11}$$

If $d < \epsilon$, our differentiation system accepts the BAT proposal/hypothesis. Note also that $d$ is a distance metric because (i) it is symmetric ($d(\mathcal{C}, p) = d(p, \mathcal{C})$), (ii) it is non-negative (as it spans from 0 to 1, $d \geq 0$), (iii) $d(p, \mathcal{C}) = 0$ only when $p = \mathcal{C}$, and (iv) it satisfies the triangle inequality as:

$$d(p, \mathcal{C}) \leq d(p, \mathcal{D}) + d(\mathcal{D}, \mathcal{C}). \tag{12}$$
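A direct implementation of Equations 9–11 is sketched below; the binning scheme and the normal reference discretized over the fat HU range are our assumptions, and Equation 10 is implemented as printed.

```python
# Probabilistic BAT/non-BAT distance (Equations 9-11): compare the CT intensity
# histogram p of a segmented region against a reference normal C = N(mu, sigma).
import numpy as np
from scipy.stats import norm

def bat_distance(region_hu, mu, sigma, bins=64, lo=-250, hi=-10):
    edges = np.linspace(lo, hi, bins + 1)
    p, _ = np.histogram(region_hu, bins=edges)
    p = p / p.sum()                                   # empirical distribution p
    centers = 0.5 * (edges[:-1] + edges[1:])
    c = norm.pdf(centers, mu, sigma)
    c = c / c.sum()                                   # discretized C = N(mu, sigma)
    d_tv = 0.5 * np.abs(p - c).sum()                  # Equation 9
    d_cm = np.abs(np.cumsum(p) - np.cumsum(c)).min()  # Equation 10, as printed
    return np.sqrt(d_cm ** 2 + d_tv ** 2)             # Equation 11

# A region proposal is accepted as BAT when bat_distance(...) < epsilon.
```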

VI. Results

A. SAT-VAT Separation

Parameters and evaluation metrics:

The following parameters were used and are noted for reproducible research: r = 10, m = λ = 3, t = 2.5, c = 14, w = 0.5, and N = 5. For the evaluation of region detection, we used Intersection over Union (IoU) [43], given by $\mathrm{IoU} = \frac{\mathrm{Overlap}(R_G, R_D)}{\max(\lvert R_G \rvert, \lvert R_D \rvert)}$, where $R_G$ and $R_D$ are the reference standard and the automatically detected abdominal region, respectively. We also report region detection results using the absolute slice difference between $R_G$ and $R_D$. For segmentation evaluation, we used the widely accepted Dice Similarity Coefficient (DSC), $\mathrm{DSC} = \frac{2\lvert I_G \cap I_S \rvert}{\lvert I_G \rvert + \lvert I_S \rvert}$, where $I_G$ and $I_S$ are the reference standard and the automatically segmented fat region, respectively. Moreover, we measured the volumetric fat difference (in milliliters, mL) between the true and segmented fat regions with the Mean Absolute Error (MAE) metric.
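Minimal implementations of these metrics are given below; note that the IoU here follows the paper's max-normalized definition rather than the standard union.

```python
# Evaluation metrics as defined in Section VI-A, for boolean NumPy masks.
import numpy as np

def iou(r_g, r_d):
    """Paper's definition: overlap normalized by the larger region."""
    return np.logical_and(r_g, r_d).sum() / max(r_g.sum(), r_d.sum())

def dsc(i_g, i_s):
    """Dice Similarity Coefficient between reference and segmented masks."""
    return 2 * np.logical_and(i_g, i_s).sum() / (i_g.sum() + i_s.sum())

def mae_ml(v_true_ml, v_seg_ml):
    """Mean Absolute Error between true and segmented fat volumes (mL)."""
    return np.abs(np.asarray(v_true_ml) - np.asarray(v_seg_ml)).mean()
```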

Comparisons:

For abdominal region detection, the upper boundary of the region was defined by the superior aspect of the liver, whereas the lower boundary was defined by the bifurcation of the abdominal aorta into the common iliac arteries [44]. As can be seen in Table I, the proposed region detection method significantly outperformed registration-based methods such as Scale Invariant Feature Transform (SIFT) flow [39]. Moreover, the proposed combination of positive and negative learners (Equation 1) provided a 7.9% improvement in IoU and a 6.5% reduction in average absolute slice difference compared to a positive learner with deep learning features alone.

Table I.

Abdominal Region Detection Results Measured by Intersection over Union (Higher Is Better) and Average Absolute Slice Difference (Lower Is Better), Along with Standard Error of the Mean (SEM)

Methods                                                   IoU (SEM)       Avg. Abs. Slice Diff. (SEM)
SIFT Flow [39]                                            0.263 (0.019)   90.22 (2.71)
Deep learning features [26] with positive learner only    0.744 (0.016)   50.28 (0.66)
Proposed method (Equation 1)                              0.803 (0.014)   47.01 (0.62)

We also performed extensive evaluations for SAT-VAT segmentation and quantification. We compared our method with a one-class SVM, Zhao et al. [13], Random Sample Consensus (RANSAC) [41], and the state-of-the-art outlier detection method by Mahito et al. [42], which is based on iterative data sampling. In addition, we report the results of our framework’s individual steps to show the progressive improvement in accuracy, i.e., Geometric MAD, Appearance LoOS, and the final context-driven fusion using the sparse 3D CRF. As mentioned in Section IV, two delineations from expert interpreters were considered for the segmentation evaluation of SAT and VAT.

Figure 6 shows volume renderings of subjects along with VAT and SAT delineations for qualitative evaluation. Heavy VAT accumulation (shown in red) is observed in obese subjects. DSC and MAE results for SAT and VAT are shown in Table II, where significant improvements were obtained compared to the other methods. The proposed method achieved around 40% lower MAE compared to [13] and the other methods.

Fig. 6.

Visceral Adipose Tissue (red) and Subcutaneous Adipose Tissue (green) segmentations are illustrated for two subjects (one with BMI<25, another with BMI>30) at the chosen abdominal slice level along with their volume renderings. Several abdominal slices are also shown for central adiposity accumulation.

Table II.

Segmentation and Quantification Results for Subcutaneous Adipose Tissue (SAT) and Visceral Adipose Tissue (VAT) Evaluated by Dice Similarity Coefficient (Higher Is Better) and Mean Absolute Error (Lower Is Better), Along with Standard Error of the Mean (SEM)

Methods              SAT DSC (SEM)   VAT DSC (SEM)   SAT MAE in mL (SEM)   VAT MAE in mL (SEM)
One-class SVM        0.886 (0.004)   0.842 (0.006)   16.695 (0.963)        16.696 (0.963)
Zhao et al. [13]     0.895 (0.003)   0.840 (0.004)   11.183 (0.350)        11.184 (0.350)
RANSAC [41]          0.913 (0.003)   0.861 (0.005)   14.126 (1.179)        14.126 (1.179)
Mahito et al. [42]   0.871 (0.003)   0.825 (0.006)   18.331 (1.230)        18.331 (1.229)
Geometric MAD        0.896 (0.006)   0.876 (0.005)   13.665 (0.872)        13.666 (0.872)
Appearance LoOS      0.925 (0.002)   0.885 (0.004)   11.815 (0.857)        11.816 (0.856)
Proposed method      0.943 (0.003)   0.919 (0.003)   6.703 (0.466)         6.706 (0.466)

Computation time:

The computation time for the SAT-VAT segmentation method was less than 2 s/slice, compared to less than 2.5 s/slice for the other methods we evaluated. The unoptimized MATLAB implementation of Geometric MAD took approximately 0.45 s/slice, appearance LoOS ran in 0.71 s/slice on average, and the 3D CRF took an average of 1.96 s/slice on an Intel Xeon Quad Core CPU @ 2.80 GHz with 24.0 GB RAM. Note also that none of the methods in the comparison experiments required any manual intervention.

B. Brown Fat Detection and Delineation

Evaluation of Head-Neck and Thorax Region Detection:

Anatomically, the head-neck/thorax region was defined from the superior aspect of the globes to 5 mm below the base of the lungs [44]. We employed IoU as our region detection evaluation metric. Table III shows comparative evaluations of different methods against the proposed combination of positive and negative learners. A 22.4% improvement in IoU was observed over SIFT Flow [39]. Moreover, the combination of positive and negative learners using logarithmic opinion pooling led to a further 3% improvement over using only a positive learner.

Table III.

Head-Neck and Thorax Region Detection Results Measured by Intersection Over Union (IoU) and Average Absolute Slice Difference Along With Standard Error of the Mean (SEM)

Methods                                                   IoU (SEM)       Avg. Abs. Slice Diff. (SEM)
SIFT Flow [39]                                            0.589 (0.022)   65.47 (4.29)
Deep learning features [26] with positive learner only    0.721 (0.018)   37.59 (3.05)
Proposed method (Equation 1)                              0.743 (0.006)   34.52 (1.28)

Evaluation of BAT Delineation:

For quantitative evaluation of the delineation component of the proposed system, we compared True Positive (TP) and False Positive (FP) volume fractions of the segmented tissues with the manual delineation of the experts (mentioned in Section V).

We computed the average performance over the two delineations (sensitivity (TP): 92.3 ± 10.1%; specificity (100 − FP): 82.2 ± 18.9%). Metabolic volumes derived by the proposed segmentation were correlated with expert-derived volumes, and the resulting correlation after linear regression was R² = 0.97 (p < 0.01). Example segmentation results at three different anatomical locations are shown in Figure 7 A–F for qualitative evaluation. In the ROI-based methods, ROIs were drawn by the user (expert annotator) to “roughly” include BAT regions while excluding the rest (Figure 7–C and E).

Fig. 7.

For three different anatomical levels (columns), row (A) shows reference standards (white); row (B) demonstrates the results from CT thresholding where pink (inner) and blue (outer) contours show brown fat delineation (blue contour shows fat region near skin boundary which leaks into the body cavity and also overlaps with pink contour as in the first column); row (C) comprises the results from ROI (Region of Interest) based CT thresholding, where orange boxes show user drawn ROIs and blue contours are the brown fat delineation results; row (D) shows the results from conventional PET thresholding, where green contours show output BAT delineations; row (E) depicts the ROI based PET thresholding; and row (F) demonstrates the proposed algorithm’s delineation results using PET and CT jointly. (G) Dice Similarity Coefficients (DSC) of the proposed method in comparison with ROI based PET thresholding, PET thresholding, ROI based CT thresholding, and CT thresholding methods are shown.

Comparisons With Other Approaches:

We compared our method with conventional thresholding approaches for both CT and PET (SUV ≥ 2 g/ml), applying the logical AND operation to the two masks followed by a manual FP removal step. These are the methods used in previous studies to measure BAT boundary and volume [20]. Figure 7–G compares the DSC of the proposed method with respect to the baseline methods. Our proposed system outperformed the other methods by a significant margin.

Evaluation of BAT Region Proposals:

We computed TP and FP ratios over 111 PET/CT scans, each labeled as either BAT-positive or BAT-negative. Our results show that in 110 out of 111 scans (99.1%), BAT proposals were correctly accepted or rejected. In only one scan, our system identified a region as non-BAT when the region was in fact BAT. This misidentification was due to the significantly small size of the BAT region (<4 mm), potentially caused by the partial volume effect.

VII. Discussion and Concluding Remarks

With obesity being one of the most prevalent health conditions in the world, its quantification, especially in the abdominal region, is vital; in this regard, the quantification of visceral fat is particularly significant. In parallel, since BAT is found to be negatively correlated with BMI [12], its quantification is essential for many clinical evaluations, including obesity and metabolic syndromes. For central obesity quantification, we presented an unsupervised method for the separation of visceral and subcutaneous fat at the whole-body and body-region levels. To keep the proposed method fully automated, we also proposed a minimally supervised body region detection method where training was performed on a single subject. We ascribe the improved performance of our method to robust outlier rejection using geometric and appearance attributes followed by context-driven label fusion. Evaluations were performed on non-contrast CT volumes from 151 subjects. Experimental results indicate that the proposed system has great potential to aid in detecting and quantifying central obesity in routine clinical evaluations.

For brown fat quantification, we presented a fully automated image analysis pipeline using PET/CT scans. Specifically, we proposed a novel approach to automatically detect and quantify BAT from PET/CT scans involving PET-guided CT co-segmentation and a new probabilistic distance metric combining the Total Variation and Cramér-von Mises distances. The proposed approach has the potential to assist clinical efforts to counteract obesity in the most natural way. We performed extensive evaluations, and our methods achieved state-of-the-art performance.

Since PET imaging captures biochemical and physiological activity, it remains the most accepted and preferred modality for studying metabolically active BAT, regardless of the radiation exposure. It is important to note that most BAT examples are obtained from clinical trials or routine examinations for different diseases. Moreover, there are a limited number of clinical trials solely focusing on BAT detection, quantification, and its role in metabolic syndrome, obesity, and other diseases. In order to reduce concerns regarding the ionizing radiation induced by PET/CT, one may consider reducing the radiation exposure of PET/CT scans. Studies show that low-dose CT scans have tissue HU levels similar to those of routine CT scans, with no diagnostic differences noted, suggesting the use of low(er) dose CT scans in routine examinations [45]. On the other hand, lowering the radiation dose in PET equipment is more difficult and expensive than in its CT counterpart [46], [47]. Furthermore, the choice of radiotracer is another concern when reducing the radiation dose, because the half-life of the most commonly used tracers is short and patient size can affect image quality considerably [47]. Despite all the financial and logistical disadvantages, lowering the dose in PET scans is a priority for manufacturers, radiologists, and nuclear medicine physicians [46], [47]. With low-dose PET/CT imaging, the cost-benefit ratio can be significantly improved for studies related to obesity and metabolic syndromes.

Other imaging modalities are also being explored for BAT detection and quantification. The application of MRI in human subjects is promising due to the lack of ionizing radiation and its excellent soft tissue contrast. However, current MR sequences do not have high sensitivity and specificity in identifying and quantifying BAT regions. Among the few works considering MR as a potential imaging modality for studying BAT, the use of multi-point Dixon and multi-echo T2 spin MRI has been explored in mice [48]. Fuzzy c-means clustering was used for the initial segmentation of BAT, followed by a two-layer feed-forward neural network for the separation of BAT from WAT. However, high-field MRI is required for better separation of metabolically active fat regions from the rest, and no optimal sequence has yet been developed for this task. Precise evaluation of BAT with MRI is not feasible in clinical routines, and the current standards still favor PET/CT.

Another alternative imaging modality to PET/CT for detecting BAT activation is contrast-enhanced ultrasound (CEUS) [49], a non-invasive and non-ionizing imaging modality. Since BAT activation is associated with increased blood flow to the tissue, it can be measured by assessing BAT perfusion. CEUS was found to detect increased BAT blood flow during cold exposure relative to warmer conditions. Although the reported experiments were preliminary, with evaluations restricted to young and healthy males (mean age, 24.0 ± 2.4 years; mean body mass index, 23.4 ± 3.5 kg/m2), BAT assessment may potentially be performed using CEUS in the future.

It should also be noted that the respiratory motion can be a potential source of error in co-segmentation. It is well known that the respiratory motion can affect PET and CT scans differently due to the possible differences in scan duration. This may induce residual registration mismatch between the two systems and eventually can lead to errors in BAT delineation. In such cases, motion correction algorithms as well as additional deformable registration methods can be employed to minimize registration errors prior to BAT segmentation.

Our study has some limitations to be noted. First, when young(er) subjects are scanned with their arms down, muscle may be observed as fat tissue due to photon depletion caused by high bone density. Although we did not observe this issue in the data set presented in this study, it may be a pressing issue that must be addressed when generalizing the quantification software into a larger cohort of studies such as clinical trials. Second, the partial volume effect can degrade the detection of small BAT deposits such as para-spinal BAT, particularly when slice thickness in PET is large. Based on our recent findings in [50], [51], our future study will address these two limitations by integrating partial volume correction and denoising methods into the proposed system. Inspired by a recent study [52], another step will be to design a fuzzy object modeling approach for the correction of incorrectly separated muscle and fat tissues due to photon depletion.

Appendix

Abbreviations used in this paper in alphabetical order:

BAT: Brown Adipose Tissue, BMI: Body Mass Index, CAD: Computer Aided Detection, CNN: Convolutional Neural Network, CRF: Conditional Random Fields, CT: Computed Tomography, DSC: Dice Similarity Coefficient, FDG: Fluorodeoxyglucose, FP: False Positive, HOG: Histogram of Oriented Gradients, HU: Hounsfield unit, IoU: Intersection over Union, IRB: Institutional Review Board, LoOS: Local Outlier Scores, MAD: Median Absolute Deviation, MAE: Mean Absolute Error, MRI: Magnetic Resonance Imaging, PET: Positron Emission Tomography, RANSAC: Random Sample Consensus, ROI: Region of Interest, RW: Random Walk, SAT: Subcutaneous Adipose Tissue, SEM: Standard Error of the Mean, SIFT: Scale Invariant Feature Transform, SUV: Standardized Uptake Value, TAT: Total Adipose Tissue, TP: True Positive, VAT: Visceral Adipose Tissue, WAT: White Adipose Tissue

Contributor Information

Sarfaraz Hussein, Center for Research in Computer Vision (CRCV), University of Central Florida (UCF), Orlando, FL 32826 USA.

Aileen Green, Cardiology Clinic of Muskogee, Muskogee, OK 74401 USA.

Arjun Watane, Center for Research in Computer Vision (CRCV), University of Central Florida (UCF), Orlando, FL 32826 USA.

David Reiter, National Institutes of Health, Bethesda, MD 20892 USA.

Xinjian Chen, Soochow University, Suzhou 215006, China.

Medhat Osman, Nuclear Medicine Department, University of St. Louis, St. Louis, MO, USA.

Ulas Bagci, Center for Research in Computer Vision (CRCV), University of Central Florida (UCF), Orlando, FL 32826 USA (ulasbagci@gmail.com).

References

[1] Preston SH, “Deadweight?—The influence of obesity on longevity,” N. Eng. J. Med, vol. 352, no. 11, pp. 1135–1137, 2005.
[2] Cefalu WT et al., “Insulin resistance and fat patterning with aging: Relationship to metabolic risk factors for cardiovascular disease,” Metabolism, vol. 47, no. 4, pp. 401–408, April 1998.
[3] Gastaldelli A et al., “Metabolic effects of visceral fat accumulation in type 2 diabetes,” J. Clin. Endocrinol. Metabolism, vol. 87, no. 11, pp. 5098–5103, 2002.
[4] Ng M et al., “Global, regional, and national prevalence of overweight and obesity in children and adults during 1980–2013: A systematic analysis for the global burden of disease study 2013,” Lancet, vol. 384, no. 9945, pp. 766–781, 2014.
[5] National Center for Health Statistics and Centers for Disease Control and Prevention: Health, United States, With Special Feature on Racial and Ethnic Health Disparities, 2015.
[6] von Hafe P, Pina F, Pérez A, Tavares M, and Barros H, “Visceral fat accumulation as a risk factor for prostate cancer,” Obesity, vol. 12, no. 12, p. 1930, 2004.
[7] Moon H-G et al., “Visceral obesity may affect oncologic outcome in patients with colorectal cancer,” Ann. Surgical Oncol, vol. 15, no. 7, pp. 1918–1922, July 2008.
[8] Britton KA, Massaro JM, Murabito JM, Kreger BE, Hoffmann U, and Fox CS, “Body fat distribution, incident cardiovascular disease, cancer, and all-cause mortality,” J. Am. College Cardiol, vol. 62, no. 10, pp. 921–925, September 2013.
[9] Speliotes EK et al., “Fatty liver is associated with dyslipidemia and dysglycemia independent of visceral fat: The Framingham heart study,” Hepatology, vol. 51, no. 6, pp. 1979–1987, 2010.
[10] Kuk JL, Katzmarzyk PT, Nichaman MZ, Church TS, Blair SN, and Ross R, “Visceral fat is an independent predictor of all-cause mortality in men,” Obesity, vol. 14, no. 2, pp. 336–341, 2006.
[11] Tong Y, Udupa JK, and Torigian DA, “Optimization of abdominal fat quantification on CT imaging through use of standardized anatomic space: A novel approach,” Med. Phys, vol. 41, no. 6, p. 063501, 2014.
[12] Cypess AM et al., “Identification and importance of brown adipose tissue in adult humans,” N. Eng. J. Med, vol. 360, no. 15, pp. 1509–1517, 2009.
[13] Zhao B et al., “Automated quantification of body fat distribution on volumetric computed tomography,” J. Comput. Assist. Tomogr, vol. 30, no. 5, pp. 777–783, 2006.
[14] Romero D, Ramirez JC, and Marmol A, “Quantification of subcutaneous and visceral adipose tissue using CT,” in Proc. IEEE Int. Workshop Med. Meas. Appl. (MeMea), April 2006, pp. 128–133.
[15] Pednekar A, Bandekar AN, Kakadiaris IA, and Naghavi M, “Automatic segmentation of abdominal fat from CT data,” in Proc. 7th IEEE Workshops Appl. Comput. Vis. (WACV/MOTION), January 2005, pp. 308–315.
[16] Mensink SD, Spliethoff JW, Belder R, Klaase JM, Bezooijen R, and Slump CH, “Development of automated quantification of visceral and subcutaneous adipose tissue volumes from abdominal CT scans,” Proc. SPIE, vol. 7963, p. 79632Q, March 2011.
[17] Kim YJ, Lee SH, Kim TY, Park JY, Choi SH, and Kim KG, “Body fat assessment method using CT images with separation mask algorithm,” J. Digit. Imag, vol. 26, no. 2, pp. 155–162, 2013.
[18] Chung H, Cobzas D, Birdsell L, Lieffers J, and Baracos V, “Automated segmentation of muscle and adipose tissue on CT images for human body composition analysis,” Proc. SPIE, vol. 7261, p. 72610, March 2009.
[19] Kim YJ et al., “Computerized automated quantification of subcutaneous and visceral adipose tissue from computed tomography scans: Development and validation study,” J. Med. Internet Res., Med. Inf, vol. 4, no. 1, 2016.
[20] Muzik O, Mangner TJ, Leonard WR, Kumar A, Janisse J, and Granneman JG, “15O PET measurement of blood flow and oxygen consumption in cold-activated human brown fat,” J. Nucl. Med, vol. 54, no. 4, pp. 523–531, 2013.
[21] Cohade C, Osman M, Pannu H, and Wahl R, “Uptake in supraclavicular area fat (‘USA-fat’): Description on 18F-FDG PET/CT,” J. Nucl. Med, vol. 44, no. 2, pp. 170–176, 2003.
[22] Baba S, Jacene HA, Engles JM, Honda H, and Wahl RL, “CT Hounsfield units of brown adipose tissue increase with activation: Preclinical and clinical studies,” J. Nucl. Med, vol. 51, no. 2, pp. 246–250, 2010.
[23] Gilsanz V, Chung SA, Jackson H, Dorey FJ, and Hu HH, “Functional brown adipose tissue is related to muscle volume in children and adolescents,” J. Pediatrics, vol. 158, no. 5, pp. 722–726, 2011.
[24] Krizhevsky A, Sutskever I, and Hinton GE, “ImageNet classification with deep convolutional neural networks,” in Proc. Adv. Neural Inf. Process. Syst, 2012, pp. 1097–1105.
[25] Shin H-C et al., “Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics and transfer learning,” IEEE Trans. Med. Imag, vol. 35, no. 5, pp. 1285–1298, May 2016.
[26] Chatfield K, Simonyan K, Vedaldi A, and Zisserman A, “Return of the devil in the details: Delving deep into convolutional nets,” in Proc. Brit. Mach. Vis. Conf, July 2014.
[27] Hinton GE, “Products of experts,” in Proc. 9th Int. Conf. Artif. Neural Netw., vol. 1, 1999, pp. 1–6.
[28] Warfield SK, Zou KH, and Wells WM, “Simultaneous truth and performance level estimation (STAPLE): An algorithm for the validation of image segmentation,” IEEE Trans. Med. Imag, vol. 23, no. 7, pp. 903–921, July 2004.
[29] Sabuncu MR, Yeo BT, Leemput KV, Fischl B, and Golland P, “A generative model for image segmentation based on label fusion,” IEEE Trans. Med. Imag, vol. 29, no. 10, pp. 1714–1729, October 2010.
[30] Udupa JK et al., “A framework for evaluating image segmentation algorithms,” Comput. Med. Imag. Graph, vol. 30, no. 2, pp. 75–87, March 2006.
[31] Kohlberger T, Singh V, Alvino C, Bahlmann C, and Grady L, “Evaluating segmentation error without ground truth,” in Medical Image Computing and Computer-Assisted Intervention. New York, NY, USA: Springer, 2012, pp. 528–536.
[32] Yoshizumi T et al., “Abdominal fat: Standardized technique for measurement at CT,” Radiology, vol. 211, no. 1, pp. 283–286, 1999.
[33] Leys C, Ley C, Klein O, Bernard P, and Licata L, “Detecting outliers: Do not use standard deviation around the mean, use absolute deviation around the median,” J. Experim. Soc. Psychol, vol. 49, no. 4, pp. 764–766, 2013.
[34] Dalal N and Triggs B, “Histograms of oriented gradients for human detection,” in Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit, June 2005, vol. 1, no. 1, pp. 886–893.
[35] Kriegel H-P, Kröger P, Schubert E, and Zimek A, “LoOP: Local outlier probabilities,” in Proc. 18th ACM Conf. Inf. Knowl. Manage, 2009, pp. 1649–1652.
[36] Boykov Y, Veksler O, and Zabih R, “Fast approximate energy minimization via graph cuts,” IEEE Trans. Pattern Anal. Mach. Intell, vol. 23, no. 11, pp. 1222–1239, November 2001.
[37] Browne J and Pierro ABD, “A row-action alternative to the EM algorithm for maximizing likelihood in emission tomography,” IEEE Trans. Med. Imag, vol. 15, no. 5, pp. 687–699, October 1996.
[38] Bagci U et al., “Joint segmentation of anatomical and functional images: Applications in quantification of lesions from PET, PET-CT, MRI-PET, and MRI-PET-CT images,” Med. Image Anal, vol. 17, no. 8, pp. 929–945, 2013.
[39] Liu C, Yuen J, Torralba A, Sivic J, and Freeman WT, “SIFT flow: Dense correspondence across different scenes,” in Proc. Eur. Conf. Comput. Vis, 2008, pp. 28–42.
[40] Grady L, “Random walks for image segmentation,” IEEE Trans. Pattern Anal. Mach. Intell, vol. 28, no. 11, pp. 1768–1783, November 2006.
[41] Fischler MA and Bolles R, “Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography,” Commun. ACM, vol. 24, no. 6, pp. 381–395, 1981.
[42] Sugiyama M and Borgwardt K, “Rapid distance-based outlier detection via sampling,” in Proc. Adv. Neural Inf. Process. Syst, 2013, pp. 467–475.
[43] Jain M, van Gemert J, Jégou H, Bouthemy P, and Snoek CG, “Action localization with tubelets from motion,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit, June 2014, pp. 740–747.
[44] Udupa JK et al., “Body-wide hierarchical fuzzy modeling, recognition, and delineation of anatomy in medical images,” Med. Imag. Anal, vol. 18, no. 5, pp. 752–771, 2014.
[45] Ono K et al., “Low-dose CT scan screening for lung cancer: Comparison of images and radiation doses between low-dose CT and follow-up standard diagnostic CT,” SpringerPlus, vol. 2, no. 1, p. 393, 2013.
[46] Orenstein BW, “Reducing PET dose,” Radiol. Today, vol. 17, no. 1, p. 22, 2015.
[47] Akin EA and Torigian DA, “Considerations regarding radiation exposure in performing FDG-PET-CT,” Image Wisely, 2012.
[48] Prakash KB, Srour H, Velan SS, and Chuang K-H, “A method for the automatic segmentation of brown adipose tissue,” Magn. Reson. Mater. Phys., Biol. Med, vol. 29, no. 2, pp. 287–299, 2016.
[49] Flynn A et al., “Contrast-enhanced ultrasound: A novel noninvasive, nonionizing method for the detection of brown adipose tissue in humans,” J. Am. Soc. Echocardiogr, vol. 28, no. 10, pp. 1247–1254, 2015.
[50] Xu Z, Bagci U, Seidel J, Thomasson D, Solomon J, and Mollura DJ, “Segmentation based denoising of PET images: An iterative approach via regional means and affinity propagation,” in Proc. Int. Conf. Med. Image Comput. Comput.-Assist. Intervent, 2014, pp. 698–705.
[51] Xu Z, Bagci U, Gao M, and Mollura DJ, “Highly precise partial volume correction for PET images: An iterative approach via shape consistency,” in Proc. IEEE 12th Int. Symp. Biomed. Imag. (ISBI), October 2015, pp. 1196–1199.
[52] Wang H, Udupa JK, Odhner D, Tong Y, Zhao L, and Torigian DA, “Automatic anatomy recognition in whole-body PET/CT images,” Med. Phys, vol. 43, no. 1, pp. 613–629, 2016.
