Radiology: Cardiothoracic Imaging
. 2019 Dec 19;1(5):e190057. doi: 10.1148/ryct.2019190057

Left Atrial Volume as a Biomarker of Atrial Fibrillation at Routine Chest CT: Deep Learning Approach

Alex Bratt 1,, Zachary Guenther 1, Lewis D Hahn 1, Michael Kadoch 1, Patrick L Adams 1, Ann NC Leung 1, Haiwei H Guo 1
PMCID: PMC7977801  PMID: 33778529

Abstract

Purpose

To test the performance of a deep learning (DL) model in predicting atrial fibrillation (AF) at routine nongated chest CT.

Materials and Methods

A retrospective derivation cohort (mean age, 64 years; 51% female) consisting of 500 consecutive patients who underwent routine chest CT served as the training set for a DL model that was used to measure left atrial volume. The model was then used to measure atrial size for a separate 500-patient validation cohort (mean age, 61 years; 46% female), in which the AF status was determined by performing a chart review. The performance of automated atrial size as a predictor of AF was evaluated by using a receiver operating characteristic analysis.

Results

There was good agreement between manual and model-generated segmentation maps by all measures of overlap and surface distance (mean Dice = 0.87, intersection over union = 0.77, Hausdorff distance = 4.36 mm, average symmetric surface distance = 0.96 mm), and agreement was slightly but significantly greater than that between human observers (mean Dice = 0.85 [automated] vs 0.84 [manual]; P = .004). Atrial volume was a good predictor of AF in the validation cohort (area under the receiver operating characteristic curve = 0.768) and was an independent predictor of AF, with an age-adjusted relative risk of 2.9.

Conclusion

Left atrial volume is an independent predictor of the AF status as measured at routine nongated chest CT. Deep learning is a suitable tool for automated measurement.

© RSNA, 2019

See also the commentary by de Roos and Tao in this issue.


Summary

Left atrial volume is an independent predictor of atrial fibrillation at routine nongated chest CT (with or without contrast material) and can be measured rapidly and reproducibly by a deep learning model.

Key Points

  ■ Left atrial enlargement as measured at routine nongated chest CT is an independent predictor of atrial fibrillation in a broad patient cohort, demonstrating an age-adjusted relative risk of 2.9 for atrial volume greater than 199 cm3.

  ■ Deep learning is a rapid, accurate, and consistent tool for volumetric measurement of the left atrium, performing left atrial segmentation in a mean of 1.39 seconds with an agreement between model- and manually generated segmentations similar to that between trained observers (Dice = 0.85 vs 0.84, respectively).

Introduction

Atrial fibrillation (AF) is a common arrhythmia that is estimated to affect more than 30 million people globally (1), and it carries a substantial risk of cerebral infarction and death (2–5). Although these risks can be reduced with appropriate therapy, AF is commonly occult, preventing timely diagnosis and intervention (6). Currently, detection of AF relies on symptomatic or incidentally identified cases, which make up only half of total patients with the disease (7). Asymptomatic individuals are not routinely screened for AF, although there are at least two ongoing large-scale clinical trials examining the appropriateness of screening in high-risk populations (6).

Standard chest CT imaging routinely captures information about the size and structure of the left atrium (LA), although this is rarely reported in the absence of a gross abnormality. However, it is known that LA enlargement as determined with echocardiography confers a fourfold increased risk of AF and an increased stroke risk (8–10). Thus, there are likely many missed opportunities to intervene in patients who have undergone chest CT for unrelated reasons, such as respiratory symptoms or lung cancer screening, and who could benefit from treatment of previously undiagnosed or undertreated AF.

A primary reason for this missed opportunity is that volumetric measurement, which is a better estimate of true chamber size than uni- or bidimensional measurements (11,12), is time-intensive and impractical in many busy radiology practices. This provides a rationale for an automated approach that reduces the time and effort required for quantitative LA volume assessment. In this study, we evaluated LA volume as a biomarker for AF as measured at routine chest CT, testing the hypothesis that enlargement is associated with clinically diagnosed AF. We demonstrate a deep learning (DL) model for automated assessment of LA volume from nongated chest CT examinations performed with or without intravenous contrast material and compare the performance of the DL model against manual measurement.

Materials and Methods

This research protocol was performed with the supervision of our institutional review board, which approved the retrospective analysis of pre-existing data sets and waived the requirement for informed consent.

Data

A retrospective derivation cohort was identified that consisted of 500 consecutive patients (inpatients and outpatients) undergoing routine nonangiographic standard-of-care chest CT examinations performed at our institution between July 27 and August 29, 2018. DICOM files for each scan were retrieved from PACS following institutional review board approval (31 393 unique images). The only inclusion criterion was availability of 5-mm axial reconstructions, which served as the basis for quantitative analysis. This was chosen to provide broad applicability, as 5-mm axial slices are the most commonly used imaging plane and slice thickness across institutions. Both contrast material–enhanced and non–contrast-enhanced studies were utilized.

Each CT scan in the derivation cohort was manually segmented by one of three board-eligible cardiothoracic radiology fellows (reader 1, A.B.; reader 2, Z.G.; and reader 3, L.D.H.); this entailed labeling pixels within each CT study as either nonbackground (ie, LA) or background using 3D Slicer (13,14), an established quantitative medical image postprocessing suite. LA volumes were calculated by multiplying the number of nonbackground voxels by the voxel dimensions. Interrater reproducibility between two randomly chosen observers (reader 1 and reader 3) was determined by analysis of the first 50 studies segmented by reader 3. Dice scores between readers and those between the automated model and manual segmentation were compared only over the 50-study subset. Per-case segmentation times were recorded for this subset to compare with those of automated segmentation.
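The volume computation described above — counting labeled voxels and scaling by the voxel dimensions — can be sketched as follows. This is an illustrative NumPy version, not the authors' code; the function name and the (z, y, x) spacing convention are assumptions.

```python
import numpy as np

def la_volume_cm3(mask, spacing_mm):
    """Volume of a binary segmentation mask in cm^3.

    mask: 3D array of 0/1 labels (1 = nonbackground, ie, LA).
    spacing_mm: (z, y, x) voxel dimensions in millimeters, eg, (5.0, 0.7, 0.7)
    for 5-mm axial reconstructions.
    """
    voxel_mm3 = spacing_mm[0] * spacing_mm[1] * spacing_mm[2]
    return float(mask.sum()) * voxel_mm3 / 1000.0  # 1 cm^3 = 1000 mm^3
```

For example, 1000 labeled voxels at 5 × 1 × 1 mm spacing yield 5000 mm3, or 5 cm3.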

A validation cohort was identified that consisted of 500 consecutive patients imaged between February 1 and March 12, 2012 (34 554 unique images). There was an intentional approximately 6-year gap between cohorts to demonstrate temporal generalizability of the model. Scanners and scanning parameters were qualitatively similar between the two cohorts. Patients in the validation cohort were labeled as having AF if they had a diagnosis at the time of scanning, as determined by an electronic medical chart review (EPIC, Verona, Wis). The following search terms were used to assess for clinically diagnosed AF within the medical chart: “atrial,” “fibrillation,” “atrial fibrillation,” and “afib.” AF status was subclassified into paroxysmal (ie, AF that terminates within 7 days of onset), persistent (AF that does not terminate within 7 days), and unspecified. The available electrocardiographic studies of 85% (336 of 393) of patients without a history of AF were reviewed to verify clinical diagnosis.

Automated Segmentation Model

A model was developed to automatically segment the LA on CT images (Fig 1) that consisted of a convolutional neural network based on the well-known U-Net architecture (15). Customizations were applied to improve performance, including the incorporation of residual modules (16), which facilitate gradient flow during training. Instance normalization was used instead of batch normalization because of a relatively small batch size and on the basis of empirical evidence from prior experience. A loss function was employed using weighted softmax and cross-entropy, with a class weight of 0.1 for background and 0.9 for nonbackground (ie, LA) pixels. RMSProp was used to apply incremental parameter updates to the network. Automated segmentations were generated for each case in the derivation cohort by eightfold cross-validation, using manual segmentations as ground truth. The derivation cohort in its entirety was used as a training set for testing on the validation cohort.
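The weighted softmax cross-entropy loss described above can be written out explicitly. The study used PyTorch; the framework-agnostic NumPy sketch below is an illustration of the same computation (normalizing by the sum of per-pixel weights, as PyTorch's weighted cross-entropy does), with the 0.1/0.9 background/LA weights from the text.

```python
import numpy as np

def weighted_softmax_ce(logits, labels, class_weights=(0.1, 0.9)):
    """Weighted softmax cross-entropy over pixels.

    logits: (N, 2) array, one row per pixel (background, LA).
    labels: (N,) integer array in {0, 1}.
    The 0.1/0.9 weights down-weight the dominant background class.
    """
    z = logits - logits.max(axis=1, keepdims=True)          # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    w = np.asarray(class_weights)[labels]                   # weight per pixel
    nll = -log_probs[np.arange(len(labels)), labels]
    return float((w * nll).sum() / w.sum())                 # weight-normalized mean
```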

Figure 1:

A, Network architecture schematic and, B, an exploded view of network modules. Design of the network was inspired by prior success of U-Net architectures in medical image segmentation tasks. Residual modules were used to facilitate gradient flow during training and to prevent exploding/vanishing gradients. Horizontal lines between the contracting and expanding pathways of the network in A represent concatenation. Conv = convolution, Txp Conv = transposed convolution, Instance Norm = instance normalization, ReLU = rectified linear unit.

The automated model accepts two-dimensional axial CT image sections as input and generates two-dimensional segmentation maps as output. All input images were rescaled to 256 × 256 pixels at batch time. Aggressive data augmentation in the form of random zoom, rotation, 224 × 224 crop, window and level, and addition of Gaussian noise was used. To generate a segmentation map for the three-dimensional image volume, each axial section of the volume was fed sequentially into the model, and the resultant two-dimensional segmentation maps were concatenated along the z axis.
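The slice-wise inference loop described above — feeding each axial section through the 2D model and concatenating the outputs along the z axis — can be sketched as follows. `model_2d` is a hypothetical stand-in for the trained network.

```python
import numpy as np

def segment_volume(volume, model_2d):
    """Run a 2D segmentation model slice by slice over a CT volume.

    volume: (Z, H, W) array of axial sections.
    model_2d: callable mapping one (H, W) section to an (H, W) label map
              (stand-in for the trained U-Net).
    Returns a (Z, H, W) label volume, sections concatenated along z.
    """
    maps_2d = [model_2d(volume[z]) for z in range(volume.shape[0])]
    return np.stack(maps_2d, axis=0)
```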

The model was built in Python by using the DL framework PyTorch (17). Training and testing were performed on a workstation with eight CPU cores, 64 GB of system memory, and a graphics processing unit with 24 GB of video memory (RTX Titan; NVIDIA, Santa Clara, Calif). Software code pertaining to both training and testing of the model can be found online at https://github.com/akbratt/AF.

Statistical Methods

All manual and model-generated segmentations were compared by using established measures of overlap and surface distance, including Dice score, intersection over union, Hausdorff distance, and average symmetric surface distance. Comparisons between groups were made by using the Mann-Whitney rank test for continuous variables, and a two-sided P value less than .05 was considered to indicate a statistically significant difference. Inter- and intraobserver agreement between methods was assessed by using the Bland-Altman method (18), which yielded a mean difference and limits of agreement between methods (mean ± 1.96 standard deviation). Bivariate correlation coefficients were used to evaluate associations between variables. Pooled risk ratios were calculated by using the Cochran-Mantel-Haenszel method to adjust for confounding variables. Diagnostic performance was evaluated by means of a receiver operating characteristic curve analysis, with area under the receiver operating characteristic curve as the primary metric. Statistical calculations were performed by using the Python packages NumPy (19), SciPy (20), Pandas (21), and StatsModels (22).
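Two of the agreement measures above have compact closed forms: the Dice score is 2|A∩B| / (|A| + |B|), and the Bland-Altman limits of agreement are the mean difference ± 1.96 standard deviations. A minimal NumPy sketch (illustrative, not the study's code):

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def bland_altman(x, y):
    """Mean difference and 95% limits of agreement (mean ± 1.96 SD)."""
    d = np.asarray(x, float) - np.asarray(y, float)
    bias, sd = d.mean(), d.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```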

Results

Derivation Cohort, Volume Overlap, and Surface Distance Analysis

Training was allowed to proceed for 200 000 images per fold during cross-validation. Mean segmentation time was 1.39 seconds per case for the automated model, as compared with 111 seconds for manual tracing (P < .001) (Fig 2). Figure 3, A and B show representative manual versus model-generated segmentations. There was good agreement between manual and model-generated segmentation maps within the derivation cohort by all measures of volume overlap and surface distance (mean Dice = 0.87, intersection over union = 0.77, Hausdorff distance = 4.36 mm, and average symmetric surface distance = 0.96 mm). Agreement between manual and model-generated maps was slightly but significantly greater than that between observers (mean Dice = 0.85 [automated] vs 0.84 [manual]; P = .004). Figure 4 shows a Bland-Altman analysis of the difference between automated and manual segmentation in terms of LA volume. As shown, in 98% (n = 491) of cases, differences were 40 cm3 or less. Of the nine cases with discrepancies greater than 40 cm3, large space-occupying lesions distorting normal anatomy (n = 5) and discrepant boundary choices between LA and adjacent structures (eg, pulmonary veins; n = 4) were responsible (Fig 3, C).

Figure 2:

Graph shows segmentation time between manual versus automated segmentation. As shown, per-case segmentation time was nearly 80-fold lower for the deep learning model as compared with manual tracing.

Figure 3:

Representative examples of manual versus model-generated (auto) segmentations for, A, contrast-enhanced and, B, non–contrast-enhanced CT studies. In the vast majority of cases, qualitative agreement was excellent, with minimal visual discrepancy. A rare discrepant example is shown in, C, in a case with a large mediastinal mass.

Figure 4:

Graph shows results of a Bland-Altman analysis. The y-axis represents the difference between automated and manual volumes, and the x-axis the manually generated volume for a given sample. Middle solid line = mean difference; flanking dashed lines = mean ± 1.96 standard deviations.

Validation Cohort and LA Size as a Predictor of AF

Prevalence of clinical features in validation cohort patients with and those without a history of AF is compared in Table 1. As can be seen, all clinical features were significantly associated with AF status except whether contrast material was administered for the CT chest examination. Age, sex, and prevalence of AF were compared between the validation and derivation cohorts, and of these, only age (mean age, 60.7 vs 64.0 years, respectively; P = .001) was significantly different, likely stemming from shifting local demographics between 2012 and 2018. Patient characteristics were otherwise similar. A different model instance was trained for 250 000 images on the entire derivation cohort before testing on the validation cohort. Predictive performance of the automated model on the validation cohort is demonstrated in Figure 5. As shown, area under the receiver operating characteristic curve for atrial volume as a predictor of AF status was 0.768 (95% confidence interval [CI]: 0.71, 0.82), as compared with an area under the receiver operating characteristic curve of 0.741 (95% CI: 0.68, 0.80) for manual volumetry on the derivation cohort. The optimal volume threshold to maximize the Youden index was 117 cm3, although we preferred the more specific (and therefore more clinically useful) volume threshold of 199 cm3 (mean + 2 standard deviations; n = 22). The only clinical factors (Table 1) that were significantly correlated with AF status according to multiple linear regression analysis were hypertension, valvular disease, and atrial volume (P = .003, P = .005, and P < .001, respectively). The age-adjusted relative risk of AF was 2.9 (95% CI: 2.6, 3.2) for patients with LA volume greater than 199 cm3. Relative risk was 3.8 (95% CI: 2.8, 5.3) without adjusting for age. Table 2 shows specificity and sensitivity over a range of volume thresholds. Of the patients with AF (n = 107), 67 were classified as having paroxysmal AF, 14 as having persistent AF, and 26 as having unspecified AF.
LA volume was significantly larger for patients with AF than for those without AF (mean volume, 145 vs 99 cm3; P < .001). LA volume was also larger for patients with persistent AF than for patients with paroxysmal AF (mean volume, 196 vs 138 cm3; P < .01). Among patients without a history of AF, those with available electrocardiographic studies demonstrated no significant difference in atrial size compared with those whose electrocardiographic studies were unavailable for review (P = .18).
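The area under the receiver operating characteristic curve reported above has a direct probabilistic reading: it equals the probability that a randomly chosen AF-positive patient has a larger atrial volume than a randomly chosen AF-negative patient (the Mann-Whitney U statistic divided by the number of pairs). A sketch with illustrative numbers, not the study data:

```python
import numpy as np

def roc_auc(pos_scores, neg_scores):
    """AUC as P(pos > neg), ties counted half (Mann-Whitney U / (n1 * n0))."""
    p = np.asarray(pos_scores, float)[:, None]
    n = np.asarray(neg_scores, float)[None, :]
    return float((p > n).mean() + 0.5 * (p == n).mean())
```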

Table 1:

Prevalence of Clinical Features versus AF Status in the Validation Cohort


Figure 5:

Graphs show performance of atrial size as a predictor of atrial fibrillation (afib). Left: Results of a receiver operating characteristic (ROC) analysis, with area under the ROC curve (AUC) = 0.768. Right: Histogram of left atrial size for patients with (blue) and without (red) clinically diagnosed atrial fibrillation shows a shift toward larger atria for patients with atrial fibrillation. Inset boxplots show atrial fibrillation status versus atrial size.

Table 2:

Specificity, Sensitivity, and Balanced Accuracy for Atrial Fibrillation over a Range of Volume Thresholds


Discussion

In this study, we demonstrated that LA enlargement at routine chest CT (performed with or without contrast material) is an independent predictor of AF in a broad, consecutive-patient cohort. Effect size was substantial, with an age-adjusted relative risk of 2.9, exceeding previously reported risks related to palpitations (relative risk = 2.8), hypertension (relative risk = 2.3), diabetes mellitus (relative risk = 1.8), and obesity (relative risk = 1.4) (23). In our data, a threshold volume of 200 cm3 was highly specific (specificity = 0.98) for AF.

There are echocardiographic and gated CT data to suggest that atrial enlargement precedes electrophysiologic and/or clinical manifestations (10,24). In a future study, we intend to confirm these findings at routine nongated chest CT, enabling early identification of high-risk patients in broader cohorts who may not have reason to be evaluated with echocardiography or gated cardiac CT, such as those undergoing lung cancer screening. To the extent that AF can be identified before it is detected clinically, appropriate treatment can be initiated, which may reduce the substantial morbidity and mortality associated with this condition. LA size is of importance to radiologists and ordering physicians because it is an important biomarker in many thoracic disease states, such as pulmonary hypertension (25), heart failure (26), and pulmonary embolism (27). Thus, an incidental finding of enlarged LA at chest CT could initiate several meaningful clinical algorithms, including monitoring for paroxysmal AF by using a traditional loop recorder or one of the many increasingly available wearable biometric devices, such as the Apple Watch (Apple, Cupertino, Calif) (28).

Recognizing that manual segmentation is time intensive and difficult to implement in many radiology settings, we demonstrated a fully automated LA segmentation model that performs volumetric assessment rapidly (80 times faster than manual tracing) and consistently, without the need for human intervention. The model is accurate in terms of volume overlap and surface distance with regard to manual segmentation, with an agreement similar to that between human raters. Moreover, the model can be trained and executed with computer hardware that is accessible to most medical practices and presents an opportunity for integration with modern PACS image interpretation software.

The segmentation model is based on DL, a technique that has gained considerable attention in recent years for its ability to train neural networks with unprecedented performance in perceptual and generative domains. DL has now been successfully demonstrated in many subspecialties within radiology (29), although to date it has been mostly used as a research tool. The thorax is an area of particular focus for those applying DL to medical imaging. For example, a 2017 online competition awarded $1 million in prizes to participants who could build the most accurate pulmonary nodule detection model for CT (30). Additionally, one of the largest and most well-known training data sets for DL in medical imaging consists of chest radiographs (31). Much attention has been paid to DL applications within cardiovascular imaging, likely because of the field’s reliance on painstaking quantitative measurements, which provide attractive targets for automation (32). However, to the best of our knowledge, prior work has not used DL to predict AF at routine chest CT.

One of the key advantages of our approach is that neural networks are deterministic, meaning that they always produce the same output for a given input. Contrast this with human judgment and perception, which are stochastic, being dependent on unpredictable factors such as attention, fatigue, and subjective interrater variation. We believe that this provides a strong rationale for an automated approach to LA segmentation in addition to the obvious benefits relating to speed and accuracy. The added consistency afforded by neural networks enables predictable performance that can be applied at scales inaccessible with manual segmentation.

Limitations of this study included the following: First, the training data set was relatively small, particularly compared with data sets available in other industries such as social media, internet search, and language translation; more data would thus likely improve performance of the automated model and increase agreement between manual and model-generated segmentations. That being said, agreement of the model with manual segmentations was similar to interrater agreement, providing evidence that the data set size was adequate for the task. Second, model architectural and hyperparameter choices were made rationally but not through exhaustive search. Performance would likely marginally improve with more rigorous optimization, although this would require considerable additional resources. Third, we used segmentations manually generated by a single radiologist as the ground truth, which relied on subjective judgment and were susceptible to intra- and interrater variability. We intend to study the impact of using consensus labeling and compare relative performance of a consensus model versus single-radiologist segmentation. Fourth, determination of AF status was done by chart review, which may still fail to capture occult disease and may reduce the number of positive AF cases. Finally, this study reports on a population of patients seeking treatment at a tertiary-care medical center, for whom demographics and population statistics do not necessarily reflect those of the broader community. As a result, predictive power may vary when applied in different care settings. Prospective validation in a diverse patient population would be necessary to demonstrate that performance of the model translates to broadly improved outcomes.

In conclusion, LA volume is an independent predictor of AF in patients undergoing routine chest CT. We demonstrated an automated approach to volumetric measurement that is comparable with human performance and manyfold superior in speed. This study motivates future research using automated DL approaches to extract additional useful information from medical images that would be infeasible to measure manually.

Disclosures of Conflicts of Interest: A.B. disclosed no relevant relationships. Z.G. disclosed no relevant relationships. L.D.H. Activities related to the present article: disclosed no relevant relationships. Activities not related to the present article: is a consultant for Arterys, has received an RSNA Fellow grant. Other relationships: disclosed no relevant relationships. M.K. Activities related to the present article: disclosed no relevant relationships. Activities not related to the present article: is on the speakers bureau of Boehringer Ingelheim Pharmaceuticals for idiopathic pulmonary fibrosis. Other relationships: disclosed no relevant relationships. P.L.A. disclosed no relevant relationships. A.N.C.L. disclosed no relevant relationships. H.H.G. disclosed no relevant relationships.

Abbreviations:

AF
atrial fibrillation
CI
confidence interval
DL
deep learning
LA
left atrium

References

  • 1.Chugh SS, Havmoeller R, Narayanan K, et al. Worldwide epidemiology of atrial fibrillation: a global burden of disease 2010 study. Circulation 2014;129(8):837–847. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 2.Benjamin EJ, Wolf PA, D’Agostino RB, Silbershatz H, Kannel WB, Levy D. Impact of atrial fibrillation on the risk of death: the Framingham Heart Study. Circulation 1998;98(10):946–952. [DOI] [PubMed] [Google Scholar]
  • 3.Christiansen CB, Gerds TA, Olesen JB, et al. Atrial fibrillation and risk of stroke: a nationwide cohort study. Europace 2016;18(11):1689–1697. [DOI] [PubMed] [Google Scholar]
  • 4.Freedman B, Potpara TS, Lip GYH. Stroke prevention in atrial fibrillation. Lancet 2016;388(10046):806–817. [DOI] [PubMed] [Google Scholar]
  • 5.Gladstone DJ, Spring M, Dorian P, et al. Atrial fibrillation in patients with cryptogenic stroke. N Engl J Med 2014;370(26):2467–2477. [DOI] [PubMed] [Google Scholar]
  • 6.Keach JW, Bradley SM, Turakhia MP, Maddox TM. Early detection of occult atrial fibrillation and stroke prevention. Heart 2015;101(14):1097–1102. [DOI] [PubMed] [Google Scholar]
  • 7.Hindricks G, Piorkowski C, Tanner H, et al. Perception of atrial fibrillation before and after radiofrequency catheter ablation: relevance of asymptomatic arrhythmia recurrence. Circulation 2005;112(3):307–313. [DOI] [PubMed] [Google Scholar]
  • 8.Tiwari S, Schirmer H, Jacobsen BK, et al. Association between diastolic dysfunction and future atrial fibrillation in the Tromsø Study from 1994 to 2010. Heart 2015;101(16):1302–1308. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 9.Tiwari S, Løchen M-L, Jacobsen BK, et al. CHA2DS2-VASc score, left atrial size and atrial fibrillation as stroke risk factors in the Tromsø Study. Open Heart 2016;3(2):e000439. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10.Tsang TSM, Barnes ME, Bailey KR, et al. Left atrial volume: important risk marker of incident atrial fibrillation in 1655 older men and women. Mayo Clin Proc 2001;76(5):467–475. [DOI] [PubMed] [Google Scholar]
  • 11.Aviram G, Rozenbaum Z, Ziv-Baran T, et al. Identification of pulmonary hypertension caused by left-sided heart disease (World Health Organization Group 2) based on cardiac chamber volumes derived from chest CT imaging. Chest 2017;152(4):792–799. [DOI] [PubMed] [Google Scholar]
  • 12.Fu M, Zhou D, Tang S, Zhou Y, Feng Y, Geng Q. Left atrial volume index is superior to left atrial diameter index in relation to coronary heart disease in hypertension patients with preserved left ventricular ejection fraction. Clin Exp Hypertens 2019;0(0):1–7. [DOI] [PubMed] [Google Scholar]
  • 13.Kikinis R, Pieper SD, Vosburgh KG. 3D Slicer: a platform for subject-specific image analysis, visualization, and clinical support. In: Jolesz F, ed. Intraoperative imaging and image-guided therapy. New York, NY: Springer, 2014; 277–289. [Google Scholar]
  • 14.3D Slicer. https://www.slicer.org/. Accessed January 13, 2018.
  • 15.Ronneberger O, Fischer P, Brox T. U-Net: convolutional networks for biomedical image segmentation. In: Navab N, Hornegger J, Wells WM, Frangi AF, eds. Medical image computing and computer-assisted intervention – MICCAI 2015. MICCAI 2015. Lecture notes in computer science, vol 9351. Cham, Switzerland: Springer, 2015; 234–241. [Google Scholar]
  • 16.He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. arXiv:1512.03385 [cs] [preprint]. http://arxiv.org/abs/1512.03385. Posted 2015. Accessed January 13, 2018.
  • 17.Paszke A, Gross S, Chintala S, et al. Automatic differentiation in PyTorch. NIPS-W. 2017. [Google Scholar]
  • 18.Bland JM, Altman DG. Statistical methods for assessing agreement between two methods of clinical measurement. Lancet 1986;1(8476):307–310. [PubMed] [Google Scholar]
  • 19.Oliphant TE. Guide to NumPy. 2nd ed. USA: CreateSpace Independent Publishing Platform, 2015. [Google Scholar]
  • 20.Jones E, Oliphant T, Peterson P, et al. SciPy: Open source scientific tools for Python. http://www.scipy.org/. Published 2001. Accessed January 1, 2019.
  • 21.McKinney W. Data structures for statistical computing in Python. In: Walt S van der, Millman J, eds. Proceedings of the 9th Python in Science Conference, 2010; 51–56. [Google Scholar]
  • 22.Seabold S, Perktold J. Statsmodels: econometric and statistical modeling with python. 9th Python in Science Conference, 2010. [Google Scholar]
  • 23.Krahn AD, Manfreda J, Tate RB, Mathewson FAL, Cuddy TE. The natural history of atrial fibrillation: incidence, risk factors, and prognosis in the Manitoba Follow-Up Study. Am J Med 1995;98(5):476–484. [DOI] [PubMed] [Google Scholar]
  • 24.Mahabadi AA, Lehmann N, Kälsch H, et al. Association of epicardial adipose tissue and left atrial size on non-contrast CT with atrial fibrillation: the Heinz Nixdorf Recall Study. Eur Heart J Cardiovasc Imaging 2014;15(8):863–869. [DOI] [PubMed] [Google Scholar]
  • 25.Jivraj K, Bedayat A, Sung YK, et al. Left atrium maximal axial cross-sectional area is a specific computed tomographic imaging biomarker of World Health Organization Group 2 pulmonary hypertension. J Thorac Imaging 2017;32(2):121–126. [DOI] [PubMed] [Google Scholar]
  • 26.Kalra A, Samim A, Kalra A, et al. A CT based measurement of left atrial area predicts an echocardiographic assessment of left atrial volume and left ventricular function in heart failure. Chest 2013;144(4 Supplement):168A. [Google Scholar]
  • 27.Aviram G, Soikher E, Bendet A, et al. Prediction of mortality in pulmonary embolism based on left atrial volume measured on CT pulmonary angiography. Chest 2016;149(3):667–675. [DOI] [PubMed] [Google Scholar]
  • 28.Turakhia MP, Desai M, Hedlin H, et al. Rationale and design of a large-scale, app-based study to identify cardiac arrhythmias using a smartwatch: the Apple Heart Study. Am Heart J 2019;207:66–75. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 29.Litjens G, Kooi T, Bejnordi BE, et al. A survey on deep learning in medical image analysis. Med Image Anal 2017;42:60–88. [DOI] [PubMed] [Google Scholar]
  • 30.Data Science Bowl. https://kaggle.com/c/data-science-bowl-2017. Published 2017. Accessed January 15, 2019.
  • 31.Wang X, Peng Y, Lu L, Lu Z, Bagheri M, Summers RM. ChestX-Ray8: hospital-scale chest x-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017; 3462–3471.
  • 32.Bernard O, Lalande A, Zotti C, et al. Deep learning techniques for automatic MRI cardiac multi-structures segmentation and diagnosis: is the problem solved? IEEE Trans Med Imaging 2018;37(11):2514–2525. [DOI] [PubMed] [Google Scholar]
