Author manuscript; available in PMC: 2022 Nov 1.
Published in final edited form as: Acad Radiol. 2020 Aug 6;28(11):1481–1487. doi: 10.1016/j.acra.2020.07.010

Deep Learning-based Quantification of Abdominal Subcutaneous and Visceral Fat Volume on CT Images

Andrew T Grainger 1, Arun Krishnaraj 1, Michael H Quinones 1, Nicholas J Tustison 1, Samantha Epstein 1, Daniela Fuller 1, Aakash Jha 1, Kevin L Allman 1, Weibin Shi 1
PMCID: PMC7862413  NIHMSID: NIHMS1612473  PMID: 32771313

Abstract

Rationale and Objectives:

Develop a deep learning-based algorithm using the U-Net architecture to measure abdominal fat on computed tomography (CT) images.

Materials and Methods:

Sequential CT images spanning the abdominal region of seven subjects were manually segmented to calculate subcutaneous fat (SAT) and visceral fat (VAT). The resulting segmentation maps of SAT and VAT were augmented using a template-based data augmentation approach to create a large dataset for neural network training. Neural network performance was evaluated on both sequential CT slices from three subjects and randomly selected CT images from the upper, central, and lower abdominal regions of 100 subjects.

Results:

Both subcutaneous and abdominal cavity segmentation images created by the two methods were highly comparable, with an overall Dice similarity coefficient of 0.94. Pearson’s correlation coefficients between the subcutaneous and visceral fat volumes quantified using the two methods were 0.99 and 0.99, and the overall percent residual squared errors were 5.5% and 8.5%, respectively. Manual segmentation of SAT and VAT on the 555 CT slices used for testing took approximately 46 hours, while automated segmentation took approximately 1 minute.

Conclusion:

Our data demonstrate that deep learning methods utilizing a template-based data augmentation strategy can be employed to accurately and rapidly quantify total abdominal SAT and VAT with a small number of training images.

Keywords: Deep learning, artificial intelligence, visceral fat, obesity

INTRODUCTION

Obesity, defined as excessive fat accumulation in the body, is a growing global epidemic, with particular impact on the US population, which has experienced a marked increase in obesity levels over the last 50 years (1,2). Obesity is a public health problem due to its associated increased risk for a variety of chronic diseases including metabolic syndrome, type 2 diabetes, cardiovascular diseases, and cancer (3). Anthropometric measurements such as body mass index (BMI), waist circumference, and waist-to-hip ratio have historically been used to diagnose obesity. However, these indirect measurements do not account for weight from skeletal muscle, nor do they distinguish between differential fat distributions such as visceral and subcutaneous fat. Distribution of fat is a key variable when assessing obesity, as greater distribution of visceral fat has been linked to more deleterious cardiovascular outcomes than total body fat or BMI alone (4–6).

Computed tomography (CT) is an imaging modality that permits easy distinction between fat and other tissues and thus allows for accurate measurement of fat and non-fat tissue amounts in the body (6). Quantification of body fat volume using CT involves analysis of multiple slices across the region of interest, a laborious task if done manually, and hence not typically performed in routine CT interpretation.

Deep learning using convolutional neural networks has gained recent popularity in the literature for tackling problems in a multitude of areas, including image recognition, classification, and segmentation (7). Development of deep learning algorithms relies on large cohorts of training data to identify important features of targets for predictions in new data. To simultaneously expedite the process of CT segmentation and quantification while reducing subjective influences from observers, several semi- and fully-automated algorithms have been developed for quantifying body fat on CT (8–15). However, nearly all of the previously published algorithms depend on expert knowledge for tuning image features or focus on a single slice or a few slices within the abdominal scan.

ANTsRNet is a collection of deep learning network architectures ported to the R language and built on the Keras framework (16). We previously applied ANTsRNet to provide a comprehensive protocol for automatically segmenting total abdominal subcutaneous adipose tissue (SAT) and visceral adipose tissue (VAT) on mouse magnetic resonance (MR) images (17). This was accomplished through the use of a novel technique designed by the Advanced Normalization Tools team utilizing template-based data augmentation to create a large training set from a small number of images. Here, we have applied the same techniques to test the hypothesis that deep learning using template-based data augmentation could accurately and rapidly quantify total abdominal SAT and VAT on human CT images. We have also evaluated the relationship of SAT and VAT volumes with BMI in a cohort.

METHODS

CT Images. One hundred and ten unique abdominal CT scans of 110 patients (half men, half women), aged 60 ± 16 years (range: 19–93 years), were randomly selected and retrieved through the Picture Archiving and Communication System (PACS) at the University of Virginia. No exclusion or inclusion criteria were applied when selecting patients, as the goal was to develop an algorithm that could properly segment any abdominal CT image it encountered. Scans used in this study were acquired from 2008 to 2019. Patient characteristics are shown in Table 1. Scan parameters varied among patients, with a tube current of 36–249 mA, slice thicknesses of 2.82 ± 1.75 mm (range: 1.25–5 mm), and a tube voltage of 120 kV. Average BMI was 28.1 ± 0.8 kg/m2 (range: 17.2–52.9) for men and 28.8 ± 1.1 kg/m2 (range: 17.2–52.9) for women. This large variation in tube current, slice thickness, and BMI was chosen to ensure the algorithm could properly segment a diverse set of CT images. Additionally, a few studies with excessive artifacts were purposely included to test the ability to segment high-artifact images. All procedures were conducted in compliance with the Health Insurance Portability and Accountability Act and were included within an IRB-approved retrospective study protocol. CT images were deidentified to protect patient identity.

TABLE 1.

Patient Characteristics

Factor Number
Male 55
Female 55
Age 60 ± 16
BMI (Male) 28.1 ± 0.8
BMI (Female) 28.8 ± 1.1

Age and BMI are expressed as Mean ± SD.

Manual segmentation and quantification. Because no semi-automatic segmentation programs were available for our use, the areas corresponding to subcutaneous fat and the abdominal cavity on each of the training CT images were manually segmented by five coauthors of this article using ImageJ (18). The coauthors who performed the manual segmentation had no medical background, but their work was supervised by an experienced radiologist. Fat can be readily distinguished from non-fat tissues on CT images by density, shape, and location. CT images were adjusted through windowing on the PACS workstation to a gray scale at which fat was visually distinguishable from non-fat components (bone, air, background, soft tissue, and fluid). A bone window was found to be optimal for distinguishing fat from non-fat tissues.

For quantification of fat, we developed an Image J-based strategy using thresholding around a static intensity window corresponding to fat on bone window-adjusted images. A flowchart explaining the steps needed to quantify total fat can be seen in Figure S1.

For segmentation of subcutaneous fat (SAT), we manually outlined the area between the skin and the abdominal muscles. Thresholding the image on specified values (82–97) and quantifying the fat area within this selection yields the SAT volume. Visceral fat (VAT) is defined as the fat within the abdominal cavity. Due to its irregular shape and extensive distribution in the abdominal cavity, VAT was difficult to segment manually. Therefore, the area corresponding to the abdominal cavity was outlined, and the VAT volume was calculated through the quantification of fat within this selection. Non-fat area was determined through thresholding of the image to include all non-fat tissues (97–255).
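The thresholding step above can be sketched in code. This is a minimal Python/NumPy illustration (the study itself used ImageJ); the intensity window 82–97 comes from the text, while the toy image, mask, and voxel size are hypothetical:

```python
import numpy as np

# Fat intensity window on bone-window-adjusted, 8-bit grayscale images,
# taken from the thresholds quoted in the text (82-97).
FAT_LO, FAT_HI = 82, 97

def fat_volume_mm3(image, region_mask, voxel_mm3):
    """Sum the volume of voxels that fall in the fat window inside a region."""
    fat = (image >= FAT_LO) & (image <= FAT_HI) & region_mask
    return int(fat.sum()) * voxel_mm3

# Toy 4x4 "slice" and a hypothetical SAT outline (top-left 2x2 block).
img = np.array([[90, 10, 90, 200],
                [85, 95, 120, 30],
                [82, 97, 98, 81],
                [0, 255, 90, 90]])
sat_mask = np.zeros_like(img, dtype=bool)
sat_mask[:2, :2] = True
vol = fat_volume_mm3(img, sat_mask, voxel_mm3=2.0)  # 3 fat voxels * 2.0 mm^3
```

Summing this per-slice quantity over all slices of a scan, as described above, gives the total SAT (or, within the cavity outline, VAT) volume.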

Automatic Measurement. The creation of an automated method for measurement of abdominal fat volume consists of multiple steps, including training data preparation, template-based data augmentation, and fat quantification. We employed a strategy similar to the one previously used for segmenting and quantifying abdominal fat of mice on MR images (17). The complete flowchart of the process is shown in Figure S2.

Template-Based Data Augmentation and Training: The need for large training data sets is a known major limitation associated with development of deep learning algorithms (7). To achieve a training data set size that is sufficient for properly segmenting total and subcutaneous fat, we employed a template-based data augmentation strategy that we previously used for segmenting abdominal fat of mice on MR images (17). Multiple rounds of training were performed using an increasing number of patients until we were satisfied that the training weights could accurately segment the testing set. We aimed to include the smallest number of patients possible to highlight the power of the template-based data augmentation strategy that was used.

Six hundred and thirteen CT images covering the entire abdominal region of seven individuals were selected for training. Of the seven subjects, two females and three males had a normal BMI, and one female and one male met BMI criteria for obesity. Original CT images were adjusted at the PACS to bone windows and saved as “.tiff” files. While DICOM images work as well, “.tiff” files were chosen for training and subsequent validation because of their reduced file size, the deidentification of patient information, and similar ease of image saving. These images were converted to the NIfTI (.nii.gz) format using the ANTs toolkit (https://github.com/ANTsX/ANTs). Each converted image was segmented into two contoured areas, one for SAT and one for the abdominal muscle plus its encircled abdominal cavity, using the open-source segmentation tool ITK-SNAP, and saved as a separate segmentation image.
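The core idea of the augmentation, generating many training pairs from one manually segmented pair by applying the same spatial transform to both image and label, can be illustrated with a simplified sketch. The actual study used registration-derived transforms from the ANTs toolkit; the random sinusoidal displacement field below is a stand-in for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_warp(image, label, max_shift=2.0):
    """Resample an image and its segmentation with the SAME random smooth
    displacement field (nearest-neighbour), yielding a new training pair.
    This stands in for the registration-derived warps used by ANTs."""
    h, w = image.shape
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    ay, ax = rng.uniform(-max_shift, max_shift, size=2)
    dy = ay * np.sin(2 * np.pi * xx / w)  # low-frequency displacement
    dx = ax * np.sin(2 * np.pi * yy / h)
    ys = np.clip(np.round(yy + dy).astype(int), 0, h - 1)
    xs = np.clip(np.round(xx + dx).astype(int), 0, w - 1)
    return image[ys, xs], label[ys, xs]

# One manually segmented pair becomes many augmented training pairs.
img = rng.integers(0, 256, size=(32, 32))
seg = (img > 128).astype(np.uint8)
pairs = [random_warp(img, seg) for _ in range(10)]
```

Because the identical index fields are applied to image and label, anatomical correspondence between them is preserved in every synthesized pair.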

Training was performed using a U-net-based model with the ANTsRNet and Keras packages for R, using a TensorFlow backend, as done previously (17).

Validation dataset: The accuracy of the deep learning-based algorithm in segmenting subcutaneous fat was validated with a group of images combining CT images from three full abdominal scans of separate subjects and randomly selected CT images from the upper, central, and lower abdominal regions of 100 subjects (555 total images). The images from the three full scans were chosen to validate that the algorithm could properly segment images across entire individual scans. The images from the 100 subjects were included to validate that the algorithm could properly segment images from a diverse set of individuals. Manual measurement results were used as the ground truth for comparison with the automated measurement results. CT images were prepared as described above and then input into the trained U-net. Segmentation images generated by the network were evaluated for accuracy in quantification of SAT and VAT using a macro developed for the Fiji package for ImageJ (19). The steps for quantifying subcutaneous and visceral fat using the macro are depicted in Figure S3.

Computational Time. Manual segmentation of total and subcutaneous fat on the 555 images used in testing took approximately 46 hours compared to approximately 1 minute using the deep learning-based algorithm for the same CT images.

Statistical Analysis. Comparisons were made between the automated and manual methods in quantification of visceral and subcutaneous fat volumes. The Dice metric was used to determine the similarity between a manually generated segmentation image and an automatically generated one. If two segmentations completely overlap, the Dice score is 1; it is 0 if there is no overlap. This Dice score was determined using the “Label Overlap Measures” function of the ANTs toolkit. The residual was determined from the difference between manually measured and automatically measured fat volume for each slice. In addition, Pearson’s correlation analysis was done to determine correlations between the manually and automatically generated fat volumes and between fat volumes and BMI, as reported (17). These residual and Pearson’s correlation analyses were performed in R.
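As a rough illustration of the comparison metrics described above, the Dice coefficient, per-slice residual, and Pearson correlation can be computed as follows. This is a Python/NumPy sketch with made-up toy masks and volumes; the study itself used the ANTs "Label Overlap Measures" function and R:

```python
import numpy as np

def dice(a, b):
    """Dice similarity: 2|A∩B| / (|A| + |B|); 1 = complete overlap, 0 = none."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def percent_residual(manual_vol, auto_vol):
    """Per-slice residual as a percentage of the manual volume."""
    return 100.0 * abs(manual_vol - auto_vol) / manual_vol

# Toy masks: the automated mask misses one pixel of the manual one.
m = np.array([[1, 1, 0], [1, 1, 0], [0, 0, 0]])
a = np.array([[1, 1, 0], [1, 0, 0], [0, 0, 0]])
d = dice(m, a)  # 2*3 / (4+3)

# Toy per-scan volumes for the correlation step.
manual = np.array([100.0, 200.0, 300.0])
auto = np.array([103.0, 190.0, 312.0])
r = np.corrcoef(manual, auto)[0, 1]
```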

RESULTS

U-net-Based Deep Learning for Subcutaneous Fat Selection. The U-net-based algorithm successfully generated selections designating the subcutaneous fat region and the abdominal cavity across entire abdominal scans, and these were highly consistent with the manually generated selections created for the same input images (Fig 1).

Figure 1.


The accuracy of deep learning in delineating areas containing subcutaneous (SAT) and visceral fat (VAT) on CT images at multiple levels. Representative images taken every 20 slices from one of the three full scans included in the validation dataset show consistency between the manual and automatic methods in segmenting SAT and VAT on CT images. The red area denotes SAT and the green area denotes VAT. Predicted segmentation: segmentation made by deep learning.

Comparison of Fat Volume Measured by Manual and Deep Learning-Based Methods. Initial validation was done on SAT volume measurements from the consecutive CT slices of three patients to determine whether the algorithm could successfully generate segmentation images across an entire abdominal scan. As shown in Figure 2a, SAT volumes measured on sequential slices were comparable between the two methods in all three patients. For each of these scans, the total SAT volumes were also comparable between the two methods (Fig 2b). The difference for scan 1 was 59,155 mm3, or 0.97% of the total volume; the difference for scan 2 was 692,520 mm3, or 5.4%; and the difference for scan 3 was 214,197 mm3, or 2.1% of the total volume. The average difference was 28,252 mm3, or 3.0% of the total volume.

Figure 2.


Comparison between the automated and manual segmentation methods in quantification of SAT volumes. (a) Comparison of SAT volumes measured from sequential CT images from upper (slice 1) to lower abdominal region of three individual subjects (black = subject 1, red = subject 2, green = subject 3; solid = manual, hollow = automated). Each symbol represents an individual slice. (b) Comparison between the manual and automated methods in measurements of total SAT volumes from three individual scans. (c) Bland-Altman plot for all images in the validation set (n = 555 images: 3 full scans plus 3 images each from 100 individuals). Each dot represents a single image. The orange solid line identifies the mean difference between fat volumes calculated using the manually and automatically generated segmentation images (μ). The lower solid red line identifies the Bland-Altman lower limit of agreement (−2σ), and the upper solid red line identifies the Bland-Altman upper limit of agreement (+2σ).

Performance of the algorithm was further validated with CT images from an additional 100 subjects randomly selected from the University’s PACS database. One image was randomly chosen from each of the upper, middle, and lower abdominal regions of each individual. For 12 CT slices, quantification of adipose tissue compartments was impossible with the automated method because the subcutaneous fat area was discontinuous or the muscle layer was incomplete. The training set did not include images in which the SAT was discontinuous, and therefore the algorithm assumes a continuous SAT layer. When a continuous SAT layer was not present, the algorithm artificially created one and over-represented the area in which the SAT was predicted to reside. A Bland-Altman plot combining the three full scans and the remaining images from the 100 individuals shows a high degree of accuracy for the predicted SAT volumes, with only 4.8% of the images falling outside 2σ from the mean (Fig 2c).
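A Bland-Altman analysis like the one above reduces to computing the mean difference between methods and the ±2σ limits of agreement, then counting points outside those limits. A minimal sketch with synthetic volumes (not the study's data):

```python
import numpy as np

def bland_altman(manual, auto):
    """Mean difference (mu) and +/-2 sigma limits of agreement."""
    diff = auto - manual
    mu = diff.mean()
    sigma = diff.std(ddof=1)
    return mu, mu - 2 * sigma, mu + 2 * sigma

# Synthetic per-image fat volumes (mm^3), for illustration only.
manual = np.array([100.0, 150.0, 200.0, 250.0])
auto = np.array([102.0, 149.0, 205.0, 251.0])
mu, lower, upper = bland_altman(manual, auto)
# Fraction of images falling outside the limits of agreement.
outside = float(np.mean(((auto - manual) < lower) | ((auto - manual) > upper)))
```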

The volumes of VAT on sequential slices for the three full scans were also comparable between the two methods (Fig 3a). The difference between the total VAT volumes was also small (Fig 3b). The difference for scan 1 was 59,155 mm3, or 2.7%; the difference for scan 2 was 692,520 mm3, or 9.3%; and the difference for scan 3 was 214,197 mm3, or 5.3%. The average difference between fat volumes was 282,521 mm3, or 6.2%. A Bland-Altman plot combining these three scans with the additional 300 images from 100 individuals shows a high degree of accuracy for predicted VAT volumes, with 4.8% of the images falling outside 2σ from the mean (Fig 3c).

Figure 3.


Comparison between the automated and manual segmentation methods in quantification of VAT volumes. (a) Comparison of VAT volumes measured from sequential CT images from upper (slice 1) to lower abdominal region of three individual subjects (black = subject 1, red = subject 2, green = subject 3; solid = manual, hollow = automated). Each symbol represents an individual slice. (b) Comparison between the manual and automated methods in measurements of total VAT volumes from three individual scans. (c) Bland-Altman plot for all images in the validation set (n = 555 images: 3 full scans plus 3 images each from 100 individuals). Each dot represents a single image. The orange solid line identifies the mean difference between fat volumes calculated using the manually and automatically generated segmentation images (μ). The lower solid red line identifies the Bland-Altman lower limit of agreement (−2σ), and the upper solid red line identifies the Bland-Altman upper limit of agreement (+2σ).

Pearson’s correlation was performed and residual squared error was calculated on fat volume measurements for both SAT and VAT for additional validation (Table 2). There was a high degree of agreement between the SAT (R2 = 0.994, p = 2.49E-217) and VAT (R2 = 0.989, p = 8.85E-193) volumes quantified from predicted and manually generated segmentation images. The average percent residual squared error was 5.494% for SAT and 8.510% for VAT, further confirming the algorithm’s ability to accurately quantify fat from all images in an abdominal CT scan.

TABLE 2.

Validation of the Deep Learning-Based Algorithm for Accuracy in Quantitation of Subcutaneous (SAT) and Visceral Fat (VAT) on CT Images

Dataset | Average Dice Value | SAT Volume R2 | SAT Volume p-Value | VAT Volume R2 | VAT Volume p-Value | Percent RSE SAT Volume | Percent RSE VAT Volume
All 555 validation images | 0.944 ± 0.002 | 0.994 | 2.49E-217 | 0.989 | 8.85E-193 | 5.494 | 8.510

The Dice score (mean ± SE), correlation coefficients, and percent Residual Standard Error (RSE) are used for comparing the similarity of the values of subcutaneous (SAT) and visceral fat (VAT) volumes measured by the manual and deep learning methods on the same CT images.

In addition to fat volume comparisons, Dice coefficient values were calculated for the generated segmentation images to measure the level of similarity in the images themselves. The average Dice coefficient value was 0.94, suggesting a high degree of similarity in selection shape and area (min Dice = 0.80; max Dice = 0.98) (Table 2).

Correlations between Abdominal Fat Volumes and Body Mass Index (BMI). Correlations of total, SAT, and VAT volumes with BMI were calculated using data from the above validation cohort. Total fat volumes were measured with the automated method for the abdominal region from the base of the lung to the pelvic brim (T12–L5), where visceral fat is typically measured with CT (20,21). BMI was significantly correlated with total fat (R2 = 0.145; p < 0.001) and SAT volumes (R2 = 0.246; p < 0.001) (Figs 4a, b). There was no correlation between BMI and VAT volume (R2 = 0.0134; p = 0.144; Fig 4c). Because the amounts of abdominal fat vary between individuals, fat volume was normalized by non-fat mass for all subjects to account for the influence of abdominal dimensions on individual variation in abdominal fat. After normalization by non-fat mass, total fat showed an improved association with BMI and SAT showed a reduced association with BMI based on R2 and p values (Figs 4d, e). No correlation was found between VAT volume and BMI (Fig 4f).
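The normalize-then-correlate step described above can be sketched as follows. All data here are synthetic, generated purely for illustration, and the resulting R2 values bear no relation to the reported results:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic cohort: BMI, non-fat volume (mm^3), and a SAT volume loosely
# tied to BMI. These numbers are illustrative, not the study's data.
bmi = rng.uniform(18.0, 45.0, size=100)
nonfat = rng.uniform(5e6, 9e6, size=100)
sat = 2e5 * bmi + rng.normal(0.0, 2e6, size=100)

def r_squared(x, y):
    """Squared Pearson correlation, as reported in the paper's tables."""
    r = np.corrcoef(x, y)[0, 1]
    return r * r

raw = r_squared(bmi, sat)            # BMI vs. absolute SAT volume
norm = r_squared(bmi, sat / nonfat)  # BMI vs. SAT normalized by non-fat volume
```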

Figure 4.


Correlations of fat volumes quantified from all slices in the central abdominal region with BMI in a validation cohort of 100 patients. Each point represents values of an individual subject. The correlation coefficient (R) and significance (P) are shown. (a, b, c) Total, subcutaneous, and visceral fat volumes expressed as real values calculated using the automated method. (d, e, f) Total, subcutaneous, and visceral fat volumes normalized by dividing by the non-fat volume on the same CT slice.

DISCUSSION

Our study successfully applies a deep learning-based approach to the measurement of abdominal fat on CT scans that is both accurate and rapid. Volumes of visceral and subcutaneous fat measured with our algorithm have shown a high degree of consistency with those measured by the manual method, the current gold standard. By combining full abdominal scans with a template-based data augmentation strategy, we have successfully developed a unique method to quantify total SAT and VAT while training on only a small number of images. The ability to measure total body fat and fat distribution is superior to common anthropometric measurements such as BMI, waist circumference, and waist-hip ratio which cannot accurately discern adipose tissue distribution in the body. Analysis of 100 abdominal CT scans shows that BMI is moderately associated with subcutaneous fat volume but has no association with visceral fat volume.

A few deep learning-based methods to measure body fat on CT scans have been reported (12–14). Compared to these previous studies, our present study overcomes several limitations. First, we included all CT slices in the abdominal region, as compared to the one or few slices used in other studies. Fat distribution varies greatly in the abdominal region of obese subjects, so obesity and overall fat content may not be accurately reflected by one or a few slices. Thus, our method of including all CT slices should result in improved precision and sensitivity for detecting change over time in abdominal fat.

Second, our study employed a template-based data augmentation strategy whereby the sampled images were used to construct a representative template that was optimal in terms of both shape and intensity (22). This approach permits a substantial augmentation of the training dataset and overcomes deep learning's usual requirement for large-scale training datasets. Despite fewer CT slices being used for training, our algorithm has shown performance comparable to previous deep learning-based methods for quantification of body fat using CT images (12–14).

Manually segmenting visceral fat on numerous CT slices is challenging due to its irregular shape and extensive distribution within the abdominal cavity. However, separating fat from non-fat elements (air, background, fluid, and bone) is a more straightforward task. Thus, we chose to segment the abdominal cavity and use thresholding to isolate and quantify the VAT. The chance of overestimating visceral fat volume from other fat deposits, such as bone marrow fat and inter-muscular fat, was minimal because the lumbar vertebrae and pelvic bones lie within the abdominal and pelvic wall, whereas visceral fat is located within the abdominal cavity. Also, vertebrae and pelvic bones are cancellous bones containing red bone marrow, which has higher Hounsfield units on CT than fat.

BMI is the most widely used measure of body adiposity in clinical practice. The association of BMI with abdominal fat volumes directly measured by CT was tested in our cohort. Our results demonstrate that BMI is only moderately associated with total abdominal and subcutaneous fat and has no association with visceral fat. In studies of larger subjects, BMI has also been found to be more highly correlated with subcutaneous fat versus visceral fat (20,23,24). These results suggest that BMI is not a reliable marker of abdominal fat volumes and is a poor proxy of visceral fat.

In summary, we have demonstrated the accuracy of our deep learning-based algorithm for quantifying abdominal fat on CT scans. The algorithm has markedly expedited the process of measuring abdominal fat volume, allowing for potential routine reporting in the clinical setting. Moreover, we demonstrate the possibility of using a relatively small dataset to effectively train a neural network to segment body fat. This has important clinical implications, as machine learning can be readily applied to other regions or tissue types evaluated on medical imaging. Despite the aforementioned advances afforded by applying deep learning to this task, the biggest limitation is performance when an individual has a discontinuous subcutaneous fat layer. Another limitation is that we did not analyze inter-observer variation, due to the laborious nature of manual segmentation.


ACKNOWLEDGMENTS

Research reported in this publication was supported by the National Institute of Diabetes and Digestive and Kidney Diseases of the National Institutes of Health under Award Number R01DK116768 and the Commonwealth Health Research Board (CHRB) of Virginia. Andrew Grainger is a recipient of the Robert R. Wagner Fellowship from the University of Virginia School of Medicine. The work in this manuscript has not been published in part or in full. The authors declare no competing interest.

Abbreviations:

SAT: subcutaneous adipose tissue
VAT: visceral adipose tissue

Footnotes

IRB STATEMENT

All procedures were conducted in compliance with the Health Insurance Portability and Accountability Act and were included within an IRB-approved retrospective study protocol (protocol #17041).

COMPLIANCE WITH ETHICAL STANDARDS

The scientific guarantor of this publication is Weibin Shi at the University of Virginia.

SUPPLEMENTARY MATERIALS

Supplementary material associated with this article can be found in the online version at doi:10.1016/j.acra.2020.07.010.

REFERENCES

  • 1. Ogden C, Carroll M. Prevalence of overweight, obesity, and extreme obesity among adults: United States, trends 1960–1962 through 2007–2008. NCHS Data Brief. 201:1–6.
  • 2. Hales C, Carroll M, Fryar C, et al. Prevalence of obesity among adults and youth: United States, 2015–2016. NCHS Data Brief 2017; 288.
  • 3. Tremmel M, Gerdtham UG, Nilsson PM, et al. Economic burden of obesity: a systematic literature review. Int J Environ Res Public Health 2017; 14. doi:10.3390/ijerph14040435.
  • 4. Gruzdeva O, Borodkina D, Uchasova E, et al. Localization of fat depots and cardiovascular risk. Lipids Health Dis 2018; 17(1):218. doi:10.1186/s12944-018-0856-8.
  • 5. St-Pierre J, Lemieux I, Vohl MC, et al. Contribution of abdominal obesity and hypertriglyceridemia to impaired fasting glucose and coronary artery disease. Am J Cardiol 2002; 90:15–18.
  • 6. Chan JM, Rimm EB, Colditz GA, et al. Obesity, fat distribution, and weight gain as risk factors for clinical diabetes in men. Diabetes Care 1994; 17:961–969.
  • 7. McBee MP, Awan OA, Colucci AT, et al. Deep learning in radiology. Acad Radiol 2018. Published online.
  • 8. Seabolt LA, Welch EB, Silver HJ. Imaging methods for analyzing body composition in human obesity and cardiometabolic disease. Ann N Y Acad Sci 2015; 1353:41–59.
  • 9. Positano V, Gastaldelli A, Sironi AM, et al. An accurate and robust method for unsupervised assessment of abdominal fat by MRI. J Magn Reson Imaging 2004; 20:684–689.
  • 10. Demerath EW, Ritter KJ, Couch WA, et al. Validity of a new automated software program for visceral adipose tissue estimation. Int J Obes 2007; 31:285–291.
  • 11. Kullberg J, Angelhed JE, Lonn L, et al. Whole-body T1 mapping improves the definition of adipose tissue: consequences for automated image analysis. J Magn Reson Imaging 2006; 24:394–401.
  • 12. Weston AD, Korfiatis P, Kline TL, et al. Automated abdominal segmentation of CT scans for body composition analysis using deep learning. Radiology 2019; 290:669–679. doi:10.1148/radiol.2018181432.
  • 13. Commandeur F, Goeller M, Betancur J, et al. Deep learning for quantification of epicardial and thoracic adipose tissue from non-contrast CT. IEEE Trans Med Imaging 2018; 37:1835–1846. doi:10.1109/TMI.2018.2804799.
  • 14. Wang Y, Qiu Y, Thai T, et al. A two-step convolutional neural network based computer-aided detection scheme for automatically segmenting adipose tissue volume depicting on CT images. Comput Methods Programs Biomed 2017; 144:97–104. doi:10.1016/j.cmpb.2017.03.017.
  • 15. Park HJ, Shin Y, Park J, et al. Development and validation of a deep learning system for segmentation of abdominal muscle and fat on computed tomography. Korean J Radiol 2020; 21:88–100. doi:10.3348/kjr.2019.0470.
  • 16. Tustison NJ, Avants BB, Lin Z, et al. Convolutional neural networks with template-based data augmentation for functional lung image quantification. Acad Radiol 2019; 26:412–423. doi:10.1016/j.acra.2018.08.003.
  • 17. Grainger AT, Tustison NJ, Qing K, et al. Deep learning-based quantification of abdominal fat on magnetic resonance images. PLoS One 2018; 13:e0204071.
  • 18. Schneider CA, Rasband WS, Eliceiri KW. NIH Image to ImageJ: 25 years of image analysis. Nat Methods 2012; 9:671–675.
  • 19. Schindelin J, Arganda-Carreras I, Frise E, et al. Fiji: an open-source platform for biological-image analysis. Nat Methods 2012; 9:676–682.
  • 20. Snell-Bergeon JK, Hokanson JE, Kinney GL, et al. Measurement of abdominal fat by CT compared to waist circumference and BMI in explaining the presence of coronary calcium. Int J Obes Relat Metab Disord 2004; 28:1594–1599. doi:10.1038/sj.ijo.0802796.
  • 21. Ryo M, Kishida K, Nakamura T, et al. Clinical significance of visceral adiposity assessed by computed tomography: a Japanese perspective. World J Radiol 2014; 6:409–416. doi:10.4329/wjr.v6.i7.409.
  • 22. Avants BB, Yushkevich P, Pluta J, et al. The optimal template effect in hippocampus studies of diseased populations. NeuroImage 2010; 49:2457–2466. doi:10.1016/j.neuroimage.2009.09.062.
  • 23. Camhi SM, Bray GA, Bouchard C, et al. The relationship of waist circumference and BMI to visceral, subcutaneous, and total body fat: sex and race differences. Obesity (Silver Spring) 2011; 19:402–408. doi:10.1038/oby.2010.248.
  • 24. Nattenmueller J, Hoegenauer H, Boehm J, et al. CT-based compartmental quantification of adipose tissue versus body metrics in colorectal cancer patients. Eur Radiol 2016; 26:4131–4140. doi:10.1007/s00330-016-4231-8.
  • 25. Potretzke AM, Schmitz KH, Jensen MD. Preventing overestimation of pixels in computed tomography assessment of visceral fat. Obes Res 2004; 12:1698–1701. doi:10.1038/oby.2004.210.
