Author manuscript; available in PMC: 2021 Mar 16.
Published in final edited form as: J Thorac Imaging. 2019 May;34(3):192–201. doi: 10.1097/RTI.0000000000000385

Machine Learning and Deep Neural Networks in Thoracic and Cardiovascular Imaging

Tara A Retson 1, Alexandra H Besser 1, Sean Sall 2, Daniel Golden 2, Albert Hsiao 1
PMCID: PMC7962152  NIHMSID: NIHMS1669386  PMID: 31009397

Summary

Advances in technology have always had the potential to shape the practice of medicine, and in no medical specialty has technology been more rapidly embraced than radiology. Machine learning and deep neural networks promise to transform the practice of medicine, and in particular the practice of diagnostic radiology. These technologies are evolving at a rapid pace due to innovations in computational hardware and novel neural network architectures. Several cutting-edge post-processing analysis applications are actively being developed in the fields of thoracic and cardiovascular imaging, including applications for lesion detection and characterization, lung parenchymal characterization, coronary artery assessment, cardiac volumetry and function, and anatomic localization.

Cardiothoracic and cardiovascular imaging lies at the technological forefront of radiology due to a confluence of technical advances. Enhanced equipment has enabled CT and MRI scanners that can safely capture images that freeze the motion of the heart to exquisitely delineate fine anatomical structures. Computing hardware developments have enabled an explosion in computational capabilities and in data storage. Progress in software and fluid mechanical models is enabling complex 3D and 4D reconstructions to not only visualize and assess the dynamic motion of the heart, but also quantify its blood flow and hemodynamics. And now, innovations in machine learning, particularly in the form of deep neural networks, are enabling us to leverage the increasingly massive data repositories that are prevalent in the field.

Here, we discuss developments in machine learning techniques and deep neural networks to highlight their likely role in future radiological practice, both in and outside of image interpretation and analysis. We discuss the concepts of validation, generalizability and clinical utility as they pertain to this and other new technologies, and we reflect upon the opportunities and challenges of bringing these into daily use.

Cardiothoracic Imaging at the Forefront of Technological Advances

From its inception, radiology has been a discipline filled with innovators and pioneers, who have ambitiously sought to solve challenging clinical problems with new technology. From the earliest application of x-rays to diagnosis and intervention, to the advent of computed tomography (CT) and magnetic resonance imaging (MRI), to 3D post-processing and computer-aided detection (CAD), the specialty has often been the first to embrace innovations that allow efficient diagnosis of disease to improve the care of patients. Increasingly advanced CT and MRI scanners are capable of capturing images that freeze heart motion and exquisitely delineate fine anatomical structures at a level of detail never before seen outside of the operating room. In addition, advances in software and computational fluid mechanical models are enabling complex 3D and 4D reconstructions to visualize and assess the dynamic motion of the heart and quantify blood flow and hemodynamics (Figure 1). Due to its intrinsic dynamic and anatomical complexities, cardiothoracic imaging is poised at the forefront of transformative technologies as machine learning and deep neural networks begin to revolutionize medicine and image analysis.

Figure 1.

Static images from a cardiac-gated 4D MRA (left) and 4D Flow (right), showing congenital branch pulmonary artery stenosis and enlarged intercostal collateral arteries supplying the left lower lobe. New technologies such as this provide increasingly detailed understanding of pathophysiology, for greater patient-specific insights.

Machine Learning and the Advancements of Deep Neural Networks

Since the development of the earliest computers, there has been interest in using these systems to automate tasks. Starting with simple numerical calculation, computer programs quickly progressed in complexity. Modern high-level languages such as Python abstract away computational intricacies and have made it simpler to develop algorithms that ingest data and subsequently use them to make predictions. Commonly used supervised (using input and output pairs) and unsupervised (using only inputs) machine learning methods were developed and quickly expanded to include linear and logistic regression, tree-based methods such as random forests and gradient boosted trees, kernel-based methods such as support vector machines (SVM), and clustering methods like k-means and expectation-maximization (E-M) clustering1 (see Figure 2). Traditional machine learning methods share a common limitation: they require features to be specified in advance. The need for computational features meant that highly specialized computer scientists were required to conceptualize relevant features and manually translate them into code to be used as input for the machine learning algorithms, a process sometimes referred to as “feature engineering.” These earlier methods were therefore somewhat inaccessible, as they required a considerable amount of time and expertise to train and fine-tune. In contrast, neural networks can automatically learn input features from the data, alleviating much of the time and expertise necessary for training and fine-tuning.
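To make the unsupervised category concrete, the sketch below implements k-means clustering in a few lines of NumPy. The data, cluster locations, and initialization strategy are all synthetic illustrative choices, not drawn from any of the cited studies:

```python
import numpy as np

def kmeans(points, init_centroids, n_iter=25):
    """Minimal k-means: alternate nearest-centroid assignment and centroid update."""
    centroids = np.array(init_centroids, dtype=float)
    labels = np.zeros(len(points), dtype=int)
    for _ in range(n_iter):
        # Assign each point to its nearest centroid (Euclidean distance).
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centroid to the mean of the points assigned to it.
        for j in range(len(centroids)):
            if np.any(labels == j):
                centroids[j] = points[labels == j].mean(axis=0)
    return labels, centroids

# Two synthetic clusters of 2D feature vectors (e.g., two imaging phenotypes).
rng = np.random.default_rng(1)
a = rng.normal([0.0, 0.0], 0.3, size=(50, 2))
b = rng.normal([5.0, 5.0], 0.3, size=(50, 2))
points = np.vstack([a, b])
# Seed one centroid in each group so the toy example converges deterministically.
labels, centroids = kmeans(points, init_centroids=[points[0], points[-1]])
```

Note that the algorithm receives only inputs, never labels: the grouping emerges from the data, which is the defining property of the unsupervised methods listed above.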

Figure 2.

Traditional machine learning algorithms are categorized into “unsupervised” learning, requiring input data only, and “supervised” learning, requiring input and output data. Deep neural networks, a form of supervised learning, comprise the most recent advances in machine learning.

The field of artificial neural networks developed in parallel to traditional machine learning methods, albeit with a rockier history. The first precursor to current neural networks was the perceptron, a “brain model” for supervised learning of binary classifiers, developed in 19582. Despite the initial excitement this generated, the perceptron was not capable of solving some basic mathematical problems such as the exclusive-or (XOR), thus leading users to realize that it was not a universal solution. This led to several decades of an “AI winter” in which neural network development effectively came to a halt3. Neural network research remained relatively dormant until the work of Rumelhart et al. in 1986, which demonstrated that networks of neuron-like units could effectively be trained using gradient descent and back-propagation techniques4. This discovery, along with a demonstrated ability for these networks to learn any function, revived the field of neural network research5.
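The XOR limitation is easy to reproduce. The sketch below implements the perceptron learning rule in NumPy (an illustrative reconstruction, not code from the cited papers): it learns the linearly separable AND function exactly, while no linear decision boundary it can ever find reproduces XOR:

```python
import numpy as np

def train_perceptron(X, y, epochs=20, lr=0.1):
    """Rosenblatt-style learning rule: nudge weights on each misclassification."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            w += lr * (yi - pred) * xi
            b += lr * (yi - pred)
    return w, b

def predict(X, w, b):
    return (X @ w + b > 0).astype(int)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y_and = np.array([0, 0, 0, 1])   # linearly separable: learnable
y_xor = np.array([0, 1, 1, 0])   # not linearly separable: unlearnable

w_and, b_and = train_perceptron(X, y_and)
w_xor, b_xor = train_perceptron(X, y_xor)
```

After training, the AND perceptron classifies all four inputs correctly, while the XOR perceptron necessarily misclassifies at least one, regardless of how long it trains.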

Through the 1990s and early 2000s, several breakthroughs were made. Neural networks were used to recognize handwritten digits, and unsupervised layer-wise pre-training enabled deep neural networks with multiple layers6,7. Progress was, however, intermittent and hindered by the large amount of data and computing resources needed to train these networks. A considerable breakthrough came in 2012 when Krizhevsky et al. trained a large, deep convolutional neural network to classify objects in the ImageNet LSVRC contest and won by a significant margin8,9. The victory was attributed to a number of network architectural components, but also to the use of GPUs to accelerate network training on the large ImageNet dataset (1.2 million images). Since then, there has been an explosion of neural network research (examples of simple architectures are shown in Figure 3). Much attention has been directed to building larger network architectures, up to hundreds of layers deep, and to novel ways of training them successfully10-12. In parallel, convolutional network architectures have also been created for tasks including object detection, localization, and segmentation13,14.
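The gradient descent and back-propagation training that revived the field can itself be sketched compactly: a tiny two-layer network trained on the XOR problem that defeats any single-layer perceptron. The hidden-layer size, learning rate, and iteration count below are arbitrary illustrative choices:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

# One hidden layer of 4 units gives the network the capacity to fit XOR.
W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)

def forward(X):
    h = sigmoid(X @ W1 + b1)
    return h, sigmoid(h @ W2 + b2)

losses = []
lr = 1.0
for _ in range(5000):
    h, out = forward(X)
    losses.append(float(np.mean((out - y) ** 2)))
    # Back-propagate the squared-error gradient layer by layer (chain rule).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;  b1 -= lr * d_h.sum(axis=0)
```

The loss falls steadily with training: each update moves every weight a small step down the error gradient, which is exactly the mechanism described by Rumelhart et al.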

Figure 3.

Layout of an example (A) multilayer neural network and (B) deep convolutional neural network, for which many architectures are possible. Convolutional neural networks are capable of learning and extracting combinations of image features to render a result, such as the localization of the left ventricular apex (bottom right) without being explicitly programmed to look for specific features.

As neural network research has grown, high level software frameworks like Keras were developed, allowing increased accessibility of image analysis and enabling its use in multiple other fields, including medicine15. Medical imaging has presented its own challenges, and network architectures originally built to operate on natural scene images needed to be optimized for medical image analysis16. These obstacles have sparked inventive solutions, including novel architectures such as UNet, which was designed for semantic segmentation in the context of challenges that are commonly encountered in biomedical images (e.g., small sample sizes and high image resolution)17. Numerous networks have since been developed to solve a diverse range of problems in medicine, such as organ volumetry and lesion identification (Figures 4 and 5).
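The core operation these architectures share, the convolution of a small filter over an image, can be shown directly. The sketch below applies a hand-chosen vertical-edge kernel to a synthetic image; in a trained convolutional network the kernel weights would be learned from data rather than specified by hand:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation: the core op of a convolutional layer."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Each output pixel is a weighted sum of the local image patch.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A synthetic "image": dark left half, bright right half (one vertical edge).
img = np.zeros((6, 6)); img[:, 3:] = 1.0
# A Sobel-style vertical-edge kernel; a CNN would learn such weights.
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
resp = conv2d(img, sobel_x)
```

The filter responds strongly only in the columns that straddle the edge and is silent over the flat regions, which is exactly the kind of local feature a convolutional layer extracts and deeper layers then combine.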

Figure 4.

Architecture of a cascaded system of multiple neural networks, each building upon the outputs of a preceding network. In this example, the (a) proposal network identifies candidate pulmonary nodules, the (b) classification network distinguishes “true” pulmonary nodules from false-positives, and the (c) segmentation network delineates the boundaries of each nodule. Together, the proposal and classification networks may be used as a method for computer-aided detection (CAD).
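The cascade described above can be sketched as three composable stages. In the toy sketch below each neural network is replaced by a trivial stand-in heuristic purely to show how one stage's output feeds the next; all function names, thresholds, and the image are hypothetical:

```python
import numpy as np

def propose_candidates(img, thresh=0.5):
    """Stage (a): flag bright pixels as candidates (stand-in for a proposal network)."""
    ys, xs = np.where(img > thresh)
    return list(zip(ys.tolist(), xs.tolist()))

def classify_candidate(img, yx, min_neighbors=2):
    """Stage (b): keep candidates with enough bright neighbors (stand-in for a classifier)."""
    y, x = yx
    patch = img[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2]
    return (patch > 0.5).sum() - 1 >= min_neighbors

def segment(img, seeds):
    """Stage (c): mark accepted candidate pixels (stand-in for a segmentation network)."""
    mask = np.zeros_like(img, dtype=bool)
    for y, x in seeds:
        mask[y, x] = True
    return mask

img = np.zeros((8, 8)); img[3:5, 3:5] = 1.0   # one toy "nodule"
img[0, 7] = 1.0                                # an isolated false positive
cands = propose_candidates(img)                # (a) propose everything bright
kept = [c for c in cands if classify_candidate(img, c)]  # (b) reject the isolated pixel
mask = segment(img, kept)                      # (c) delineate what survives
```

The pipeline structure is the point: the classification stage prunes the false positive that the permissive proposal stage admitted, mirroring how the cascaded networks in Figure 4 divide the problem.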

Figure 5.

Results of a cascaded convolutional neural network designed for pulmonary nodule detection and segmentation. The neural networks appear to overcome limitations of earlier algorithms in identifying boundaries of nodules near pleural surfaces.

Realizing clinical usage of this technology in medicine presents a range of unique complexities that may be new to the machine learning community. Challenges that must be considered include integration into the healthcare system while maintaining patient security and privacy, wide-ranging normal and abnormal human morphology, and liability and accountability concerns. Artificial intelligence heralds a sea change in medical imaging, offering an opportunity for radiologists to lead and shape the field of computer-aided diagnostics. Early adopters of technologies have the opportunity to set precedents and standards for their use and validation. As such, radiologists may play a critical part in shaping the role of machine learning-based diagnostics in clinical practice.

Applications in Cardiovascular and Thoracic Imaging

One of the earliest introductions of machine learning into cardiovascular medicine involved reading and interpreting ECG tracings, a technology that has now been incorporated into everyday clinical practice18, and has benefited from the development of convolutional neural networks that allow detection and classification of arrhythmias19-21. Machine learning has since been applied to various tasks in cardiovascular and thoracic imaging including segmentation, characterization, quantification, lung nodule detection and measurement, and lung cancer prognosis and treatment. Many of these applications have been consolidated in prior reviews22-24, and we provide a survey of those relevant to cardiovascular imaging in Table 1 and those relevant to thoracic imaging in Table 2.

Table 1.

Applications of Machine Learning in Cardiovascular Imaging

Modality Ref Author Description ML Technique
MRI 25 Avendi 2017 RV segmentation CNN
27 Winther 2017 RV, LV endocardium and epicardium CNN
28 Tan 2018 LV segmentation ANN
33 Baessler 2018 Myocardial scar detection Random forests
34 Dawes 2017 Pulmonary hypertension prognosis PCA
ECHO 35 Ortiz 1995 HF prognosis ANN
36 Narula 2016 HCM vs athlete's heart SVM, Random forests, ANN
37 Sengupta 2016 Constrictive pericarditis vs restrictive cardiomyopathy AMC, random forest, k-NN, SVM
38 Sengur 2012 Valvular disease SVM
39 Moghaddasi 2016 MR severity SVM
41 Sudarshan 2015 MI detection SVM
CT 43 Wolterink 2016 CAC scoring CNN
45 Isgum 2012 CAC scoring k-NN, SVM
52 Itu 2016 FFR estimation Deep neural network
53 Motwani 2017 Prognosis Logistic regression
54 Mannil 2018 MI detection Decision tree, k-NN, random forest, ANN

Table 2.

Applications of Machine Learning in Thoracic Imaging

Topic Ref Author Description ML Technique
Lung Nodules 61 Lo 2018 Pulmonary nodule detection CADe
Radiomics 63 Li 2018 NSCLC prognosis Unsupervised 2-way clustering
65 Song 2016 NSCLC prognosis SVM
Radiogenomics 71 Yamamoto 2014 Genetic classification Random Forest
COPD 73 Ying 2016 COPD classification Deep neural network
74 Gonzalez 2012 COPD staging CNN
Abdominal aortic thrombus 75 Lopez-Linares 2018 Thrombus detection CNN

Applications to Cardiovascular Disease

One major emphasis of machine learning applications in cardiac MRI is ventricular segmentation for quantification of volumetry and function. This is a particularly attractive problem for machine learning as it is typically a time-consuming aspect of these exams, often requiring manual outlining by a skilled operator. Deep learning algorithms have been developed that approximate the cardiac measurements of expert readers25-29. One such algorithm proposed by Avendi et al. correlated well with ground-truth measurements (correlation coefficients of 0.99 for end-systolic and 0.98 for end-diastolic volumes)25. Most recently, several commercial vendors have begun to take an interest in this technology and to integrate these algorithms into their software30-32. Outside of cardiac segmentation on MRI, algorithms have also been developed to detect subacute or chronic myocardial scar33 and to predict patient survival and mechanisms of right heart failure in pulmonary hypertension34.

Machine learning techniques have also been applied to characterizing cardiac disease on echocardiography (ECHO). Ortiz et al. used neural networks to analyze cardiac contractility to predict one-year mortality in patients with heart failure35. Since this early work, supervised machine learning techniques have used ECHO to differentiate hypertrophic cardiomyopathy from athlete’s heart36, classify and differentiate constrictive pericarditis from restrictive cardiomyopathy37, diagnose valvular heart disease38, grade severity of mitral valve regurgitation39, automate ejection fraction measurement40, and detect the presence of myocardial infarction41,42.

Several machine learning applications have also been developed to assist in the interpretation of CT. For example, algorithms have been developed for automation of coronary artery calcium scoring43-46 and assessment of functional significance of coronary lesions. More recently, machine learning techniques have been leveraged to better assess the hemodynamic significance of coronary stenosis. It has recently become possible to simulate the results obtained from an intracoronary pressure wire at the time of coronary catheterization with computational fluid simulations based on coronary CTA, the so-called fractional flow reserve (FFR)47-49. However, the time and expertise needed for coronary segmentation and flow simulation may delay availability of results and limit its use. With this concept in mind, several groups have attempted to use machine learning approaches to accomplish a similar task50-52. Other applications of machine learning on CT include prediction of 5-year all-cause mortality53, and detecting the presence of myocardial infarct using texture analysis methods54.
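As a concrete example of the kind of measurement being automated, coronary artery calcium is conventionally quantified with the Agatston score, which weights each lesion's area by its peak attenuation in Hounsfield units. The sketch below is simplified: real implementations operate on connected lesions across slices and apply minimum-size rules, which are omitted here:

```python
import numpy as np

def agatston_weight(peak_hu):
    """Density weight used in Agatston scoring, from the lesion's peak attenuation."""
    if peak_hu >= 400: return 4
    if peak_hu >= 300: return 3
    if peak_hu >= 200: return 2
    if peak_hu >= 130: return 1
    return 0

def lesion_score(hu_values, pixel_area_mm2):
    """Score one calcified lesion on one slice: calcified area times density weight."""
    hu = np.asarray(hu_values)
    calcified = hu >= 130               # conventional calcium threshold
    if not calcified.any():
        return 0.0
    area = calcified.sum() * pixel_area_mm2
    return area * agatston_weight(hu[calcified].max())

# Toy lesion: 10 calcified pixels of 0.5 mm^2 each, peak attenuation 350 HU.
score = lesion_score([350] * 10, pixel_area_mm2=0.5)
```

Automating this pipeline end-to-end, from finding the calcified lesions to summing their scores, is precisely the task the cited calcium-scoring algorithms address.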

Applications to Thoracic Imaging

Outside of the heart, another area of recent rapid development has been detection and measurement of lung nodules. Lung cancer has long been the leading cause of cancer-related mortality in the United States, and significant attention has been focused on establishing screening guidelines for the purpose of earlier detection. A landmark study published in 2011 compared low-dose CT with chest radiography in screening high-risk patients for pulmonary nodules55. Despite the low incidence of detected cancer in the patient group and high false-positive rates, the study was terminated early because screening with low-dose CT reduced mortality from lung cancer. In response, the United States Preventive Services Task Force recommended routine screening for this high-risk group, which is predicted to increase the number of pulmonary CTs performed annually56. The volume of studies, combined with the time-consuming nature of detection, measurement, and comparison of pulmonary nodules, has prompted a boom in thoracic machine learning research. To address the need for improved accuracy and efficiency, multiple research groups and public challenges have focused on developing machine learning-based computer-aided detection (CAD) algorithms for pulmonary nodules57,58.

In the late 2000s, there was considerable activity and interest in developing CAD algorithms to address this need using traditional machine learning techniques59. These programs were shown to slightly improve radiologist detection of pulmonary nodules when used concurrently during interpretation. However, the sensitivity of these algorithms by themselves was relatively low compared with that of experienced radiologists. They were also often associated with a high false-positive rate, leading to increased time spent examining false nodules detected by the software. In addition, nodules adjacent to structures such as vessels or pleura tended to be missed by the software60. This is an issue for both CAD programs and radiologists, as difficulty in confidently identifying the borders of a nodule can decrease the ability to assess growth and change over time, a characteristic imperative for lung cancer treatment planning and follow-up. More recently, a CAD nodule detection system evaluated by Lo et al. incorporated a pulmonary vessel image suppression function61. This improved detection of nodules that were initially missed due to their close relationship to vessels, and decreased interpretation time. Lung nodule detection increased from 64% to 80%, though at the cost of a slight reduction in specificity61. While traditional machine learning approaches like this still yield modest nodule sensitivity in this range of performance59, early data from public challenges suggest that deep learning approaches may improve upon this considerably62.

The explosion of available data in conjunction with advances in machine learning has opened the field of image analysis to an exciting direction with potential to predict patient prognosis and even response to treatment. Components of a software package developed at UPenn were used to stratify patients with similar treatments for early non-small cell lung cancer (NSCLC) into two distinct survival groups. This was achieved using an unsupervised clustering analysis method based on distinctive radiographic imaging features63. This field of image analysis, termed “radiomics”, aims to characterize image features and correlate them with tumor phenotype, with the intent of classifying and staging tumors noninvasively. Extracted features convert radiographic images into mineable data and can be employed to build predictive and prognostic models24. Subsequently, several groups have used machine learning to associate phenotypic descriptors on CT with overall survival64,65 or disease-free survival66 in NSCLC. A review by Kolossvary et al. also describes how radiomic techniques may be implemented to assist with coronary artery calcium scoring67.
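In spirit, a radiomic pipeline begins by converting an image region into a mineable feature vector on which models can then be built. The toy sketch below computes a few first-order features; the regions of interest, bin count, and intensity ranges are arbitrary illustrative choices:

```python
import numpy as np

def first_order_features(roi):
    """A few first-order radiomic features from a tumor ROI's intensity values."""
    roi = np.asarray(roi, dtype=float).ravel()
    hist, _ = np.histogram(roi, bins=16)
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = -np.sum(p * np.log2(p))   # a simple measure of intensity heterogeneity
    return {"mean": roi.mean(), "std": roi.std(), "entropy": entropy}

# Two synthetic "tumors": one uniform, one textured.
homogeneous = np.full((8, 8), 100.0)
rng = np.random.default_rng(0)
heterogeneous = rng.uniform(0, 200, size=(8, 8))

f_homo = first_order_features(homogeneous)
f_het = first_order_features(heterogeneous)
```

Feature vectors like these (real pipelines extract hundreds, including shape and texture descriptors) become the inputs to the clustering and survival models described above.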

The related concept of “radiogenomics” encompasses the effort to combine imaging phenotypes and tumor genetic data to perform better prognostication and target therapeutic decisions. Vardhanabhuti et al. comprehensively review the development of this concept in lung cancer68. Of note, a recent publication by Zhou et al. found that certain nodule imaging characteristics correlated with specific metagene groups, speculating that noninvasive imaging may help direct targeted therapeutic treatment through inference of genetic or cell surface markers69. Significant interest has been placed on attempting to define a radiomic signature for individual gene mutations (e.g., EGFR, ALK, K-ras), and to correlate this with treatment response to targeted inhibitory agents70-72. While machine learning was not utilized in most of these studies to connect gene expression data with individual imaging characteristics of a patient’s tumor, it is possible that deep learning may influence and expedite the next generation of radiomic studies. Specifically, the use of deep learning and its ability to automate the process of “feature engineering” across all scales of imaging phenotypes may provide an opportunity to bridge the gap between genetic, histologic, and imaging data.

The opportunities for machine learning to assist in diagnosis, prognostication, and treatment are certainly not limited to cardiac disease and oncologic applications. Other promising areas of thoracic research include the use of machine learning for aortic segmentation, thrombus detection, and classification of fibrotic lung disease and COPD73-76.

Developing Communities, Competitions, and the Challenges of Public Data Sets

The rapid growth of machine learning technologies and their potential applications has prompted the formation and growth of multiple research conferences, including the Conference on Machine Intelligence in Medical Imaging (C-MIMI), and has encouraged the formation of machine learning interest groups within existing radiological societies, such as the International Society for Magnetic Resonance in Medicine Machine Learning (ISMRM ML) group. A number of these conferences and groups have created open challenges aimed at promoting the development of algorithms for specific tasks or to solve clinical problems. The International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) is an annual conference that hosts multiple challenges across many fields, and is notable for highlighting cardiac segmentation, coronary artery reconstruction, and left ventricle shape modeling77. Several challenges have also been devoted to thoracic imaging, specifically lung nodule/lesion detection. The International Symposium on Biomedical Imaging (ISBI) offered a challenge called LUNA (LUng Nodule Analysis) in 2016, which called for algorithms that could automatically detect nodules on chest CT. Similar challenges were announced for the SPIE MI 2015 conference, the Data Science Bowl in 2017, and this year’s RSNA conference.

With these challenges, the organizers have offered datasets on which the algorithms can be trained and tested. Large imaging datasets are currently a bottleneck for many areas of research and can be difficult to obtain outside of a major hospital-affiliated research group. Multiple factors contribute to this, including IRB approvals for data gathering, the need to assure patient privacy and anonymity, the complexities of working with DICOM files and their size, and the cost of assembling and maintaining a database. Several public repositories of thoracic CT images are available, however, and have fueled machine learning growth and algorithm development (reviewed in Morris et al.78). The National Lung Screening Trial (NLST) enabled the collection of a large volume of patient data for former smokers in an effort to determine the utility of low-dose CT screening, a recommendation later made by the US Preventive Services Task Force56,79. This imaging databank, in addition to blood, sputum, and urine data, has been made available to researchers. Similarly, the UK Biobank study collected genetic data and imaging, and has made this available for research80, and the publicly available Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) contains over seven thousand annotated nodules for the purposes of CAD development81.

While machine learning scientists continue to innovate with the available data and the community works to address the issue of data availability, researchers have been able to use these challenge datasets to achieve excellent performance25,29,82. For example, the highly utilized UNet architecture was developed for, and won, a segmentation challenge for electron microscopy at ISBI 201217. Though primarily designed to segment neuronal structures in electron microscopy stacks, it has since been successfully applied to a variety of biomedical image segmentation tasks26,83. While these early studies show great promise, the most common training and testing sets from challenges are still limited in scope compared with the range of diseases seen in clinical practice. Thus, these algorithms have yet to prove themselves in the real world as larger and more routine clinical datasets become available.

With the rapidly expanding availability of imaging data and the desire to develop clinically useful technology, there has been increasing effort by larger radiological societies to establish guidelines and offer guidance. The American College of Radiology (ACR) recently established a Data Science Institute (DSI) to steer the introduction of AI into the practice of radiology. The ACR has begun defining standards for training, testing, validating, integrating, and monitoring AI algorithms in clinical practice. To do this, the Data Science Institute has defined “use cases” pinpointing precise scenarios within radiology workflows where automation could enhance patient care. These use cases are organized by organ system and include twenty-five cardiac and one thoracic application offered in an open-source directory labeled TOUCH-AI84. The ACR will also evaluate algorithm performance by providing ongoing post-market assessment85. A goal of developing standardized pathways for algorithm validation is to expedite the FDA regulatory review process, allowing these tools to have a clinical impact as soon as possible while maintaining a high level of quality.

FDA Approval Process for New Technologies and Software

The Food and Drug Administration (FDA) currently has two approval processes for new medical devices relevant to imaging analysis software: 510(k) clearance and pre-market approval (PMA). PMA is the more stringent application path and typically involves clinical trials to demonstrate safety and efficacy. 510(k) clearance does not require the same rigorous clinical trials necessary for a new class of drugs or devices, but demands that equivalence to existing products or technologies be shown86. In December 2016, the 21st Century Cures Act introduced a Breakthrough Devices provision that expedited the regulatory process for certain medical devices. The goal was to “help patients have more timely access to devices and breakthrough technologies”, with submissions ranging from dental implants to deep learning-based computer-aided diagnostic programs87. Through this, artificial intelligence in some forms can be classified as a medical device and gain clearance through a separate “de novo” pathway without a predicate device for comparison87.

Many applications may fall under the umbrella of computer-aided detection (CAD). The FDA notes two major CAD categories: CADe, which includes tools aimed at automating detection and focusing the attention of a clinician onto an area of an image, and CADx, which includes tools aimed at assessing the likelihood of a disease and automating diagnosis88. While CADe products fall under the 510(k) application path, CADx products have typically fallen into the more stringent PMA application path. As CADx products have the potential for more significant patient impact, the FDA has historically been more cautious. Recently, however, the requirements for CADx tools have been reduced89. Certain cancer detection tools will now fall under a new category of device and can be approved under the less stringent 510(k) pathway. The growing number of applications integrating neural networks has likely contributed to this change. Even with more lenient approval processes, CAD products will continue to require regulatory attention going forward.

Validation, Utility and Generalizability

As the population ages and becomes more medically complex, the need for imaging will continue to increase. Deep learning-based diagnostic tools offer many potential benefits, and may increase physician efficiency, improve accuracy of diagnosis, and enhance consistency between different sites and institutions. As early adopters, radiologists have the opportunity to set the standards for this technology and direct evaluation of programs to ensure their accuracy and safety for patients. Bringing a technology to the clinic will involve several key elements: peer review, analytical validity, clinical validity, clinical utility, and generalizability. Within these, the overall performance compared with a ground truth, performance on individual components of the task, and mechanisms of failure should be examined.

While the medical literature has historically undergone a process of peer-review, it is important to recognize that pre-print websites are increasingly used as a form of communication, especially in the machine learning community. In this rapidly changing and evolving field, a repository for the pre-release of manuscripts such as arXiv is valuable for sharing information and distributing cutting edge findings. However, it is important to recognize the limitations and pitfalls of this method of communication. Manuscripts on pre-release repositories are moderated upon submission, but are not required to undergo the peer review process, and new versions may be uploaded or updated at any time. Therefore, readers must use caution when making clinical decisions based on manuscripts from preprint websites until peer-review scrutiny is applied.

Analytical validity is typically assessed during the testing and development of new technologies, and tests the performance relative to pre-existing technologies. For example, an algorithm might measure the caliber of the aorta, and comparison of the measurements between the algorithm and a human observer may be sufficient. It is harder to assess analytical validity when the outcome metric is subjective. In such situations, a consensus among experts may be necessary. For example, when contouring cardiac chambers, the apical and basal heart boundaries are a source of disagreement between readers, and protocols vary between institutions. However, a consensus average of segmentations from a panel of experts can be helpful for training algorithms90. Analytical validity can also be established by analyzing algorithm performance on individual components of a task. Understanding where an algorithm fails can provide insight into areas that require human supervision. If an algorithm has particular difficulty segmenting the cardiac apex, this might not be readily apparent if only the overall similarity scores are considered (Figure 6). Further, these insights may direct research to improve the performance of the algorithms.

Figure 6.

Consideration of the components of the overall task (e.g., Dice score per slice when segmenting the ventricles of the heart) can help identify sources of error and improve understanding of areas where oversight may be necessary. Without considering the individual components (e.g., considering only overall Dice score per volume), systematic errors may be overlooked.
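This effect is easy to reproduce numerically: in the toy sketch below, a segmentation that misses a small apical slice entirely still posts a near-perfect volume-level Dice score, while the per-slice scores expose the failure (the masks and slice sizes are synthetic illustrative choices):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy 4-slice "ventricle": three large basal/mid slices, one tiny apical slice.
gt = [np.ones((10, 10)), np.ones((10, 10)), np.ones((10, 10)), np.zeros((10, 10))]
gt[3][:2, :2] = 1                      # apex: only 4 voxels of the 304 total
pred = [g.copy() for g in gt]
pred[3][:] = 0                         # the algorithm misses the apex entirely

per_volume = dice(np.stack(gt), np.stack(pred))       # ~0.99: looks excellent
per_slice = [dice(g, p) for g, p in zip(gt, pred)]    # [1.0, 1.0, 1.0, 0.0]
```

The apical slice contributes so little volume that the systematic error is invisible at the volume level, which is exactly why component-wise evaluation matters.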

Clinical validity is a concept distinct from analytical validity, and addresses whether a technology works within the range of performance needed for clinical decision-making. In the example of an algorithm that measures the caliber of the ascending aorta, it is important to assess whether any differences in measurement between the algorithm and the current standard would impact clinical management. In other words, it may be possible to demonstrate differences between an algorithm and a human observer, but these may not be clinically relevant. In the example of ventricular segmentation, the contours of the left ventricular apex may differ considerably between an algorithm and human observers, but the apex may contribute very little to the total volume, and therefore have little impact on the overall volumes or ejection fraction.
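As a back-of-the-envelope illustration of this point, with slice volumes invented for the sketch, the effect of an apical contouring disagreement on the ejection fraction can be checked directly:

```python
# Invented slice volumes (mL), base to apex, for end-diastole and end-systole.
edv_slices = [20.0, 18.0, 15.0, 10.0, 2.0]
esv_slices = [9.0, 8.0, 6.0, 4.0, 1.0]

def ef(edv, esv):
    """Ejection fraction (%) from end-diastolic and end-systolic volumes."""
    return 100.0 * (edv - esv) / edv

edv, esv = sum(edv_slices), sum(esv_slices)
baseline = ef(edv, esv)

# Suppose the algorithm's apical contour differs from the reader's by 50%.
edv_alt = edv - 0.5 * edv_slices[-1]
esv_alt = esv - 0.5 * esv_slices[-1]
after = ef(edv_alt, esv_alt)

print(f"EF baseline: {baseline:.1f}%  with apical disagreement: {after:.1f}%")
```

Even a 50% disagreement on the small apical slice shifts the ejection fraction by only about a tenth of a percentage point here, far below any clinically meaningful threshold.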

Clinical utility is a related concept for determining the potential value of a new tool, and this is where radiologists may play a key role in the development of technology. In daily clinical practice we may routinely provide only linear or bilinear measurements of pulmonary nodules and lymph nodes due to efficiency and time constraints, but it may also be clinically useful to provide volumetric measurements. We may choose not to provide such measurements if the measurement tools are too cumbersome, inaccurate, or inconsistent to be useful. Deep convolutional neural networks have the potential to address this need if well implemented into the radiologists’ workflow, and clinical domain knowledge is critical for directing the development of new applications. With such collaboration, new technologies will be more likely to be adopted into practice.
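The gap between diameter-based reporting and true volumetry can be illustrated with a synthetic nodule; everything here (a perfect 6-mm-radius sphere on a 1-mm voxel grid) is invented for the sketch, and real nodules are irregular, which is precisely when voxel counting and diameter-based estimates diverge.

```python
import numpy as np

# Hypothetical nodule: a sphere of radius 6 mm rendered on a 1 mm voxel grid.
r = 6.0
grid = np.arange(-10, 11)  # 1 mm spacing
z, y, x = np.meshgrid(grid, grid, grid, indexing="ij")
mask = x**2 + y**2 + z**2 <= r**2

# Volumetric measurement: count segmented voxels (mm^3 -> mL).
voxel_volume_ml = mask.sum() / 1000.0

# Bidimensional practice: report the longest diameter and its perpendicular
# (12 mm each here); an ellipsoid formula converts diameters to a volume,
# but only under an assumption of regular shape.
d1 = d2 = d3 = 2 * r
ellipsoid_ml = (np.pi / 6.0) * d1 * d2 * d3 / 1000.0

print(f"voxel-count volume: {voxel_volume_ml:.2f} mL, "
      f"ellipsoid estimate: {ellipsoid_ml:.2f} mL")
```

For this idealized sphere the two agree closely; the clinical argument for automated volumetry is that voxel counting needs no shape assumption, while the diameter-based estimate degrades for irregular lesions.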

One major consideration for algorithms that are to be used in clinical practice is the extent to which they will generalize to clinical populations. The recent tragic accidents involving Uber and Tesla vehicles have shown that overestimating the robustness and generalizability of this technology can have fatal results91,92. Many algorithms are trained on a limited patient population, or on public data sets of predominantly normal patients, and may not necessarily work on the typical population seen in clinical practice. Another pitfall is that algorithms may be developed on a single scanner type or an institution-specific imaging technique. Such algorithms may not generalize to other scanners or protocols. For example, an algorithm designed to detect pulmonary nodules on 1.25-mm hard kernel reconstructions may not work as well on thicker-slice reconstructions. As such, the developers of these tools must work to ensure that the patient populations and imaging protocols on which models are developed are representative of clinical patient populations, and users must appreciate the limitations of models as they deploy them in clinical practice. One of the major advantages of modern deep learning models is their ability to learn and improve by being “fine-tuned” on new training data. This allows refinement of models, and retraining over time on broader populations.
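The spirit of fine-tuning can be shown with a deliberately minimal stand-in: a linear model fit at one site whose bias alone is re-estimated from a little data at a second site, analogous to freezing the early layers of a pretrained network and retraining only the final layers. All the data, the offset, and the two "sites" below are invented for the sketch.

```python
import numpy as np

# "Pretrained" model: a linear fit standing in for a network trained at
# the development institution (site A).
xa = np.array([0.0, 1.0, 2.0, 3.0])
ya = 2.0 * xa + 1.0
w, b = np.polyfit(xa, ya, 1)  # slope first, then intercept: w = 2, b = 1

# Site B uses a different protocol: same underlying relationship but a
# systematic offset (e.g., a calibration or acquisition shift).
xb = np.array([0.0, 1.0, 2.0])
yb = 2.0 * xb + 4.0

err_before = np.mean(np.abs((w * xb + b) - yb))

# "Fine-tune" only the bias on the small site-B sample, keeping w frozen.
b_ft = np.mean(yb - w * xb)
err_after = np.mean(np.abs((w * xb + b_ft) - yb))

print(f"site-B error before fine-tuning: {err_before:.1f}, after: {err_after:.1f}")
```

The frozen component carries over what was learned at site A, while a small amount of local data corrects the site-specific shift, which is the same trade-off fine-tuning a deep network makes at scale.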

Conclusion

The applications of deep neural networks and other machine learning technologies to long-standing problems in radiology are rapidly advancing and promise to shape the future of the specialty. Both supervised and unsupervised techniques have been applied in current technologies (see Table 1 and Table 2). They will likely become fundamental to practice and, soon, as pervasive and unnoticeable as other technologies we have integrated into daily use. In many ways these technologies are already a large part of our lives. Phones recognize us, speak to us, manage our schedules, and reroute our wrong turns. Entertainment systems and online shopping trackers recommend things we would enjoy based on our previous selections and on others with similar behaviors. These machine learning processes are so widely used that they have become transparent in our daily activities. Outside of digital transcription, machine learning algorithms are only beginning to emerge in the daily practice of radiology. The breadth of problems that machine learning can help address is immense, and applications will likely mature rapidly in the detection, characterization, and prognostication of disease and in individualized treatment decisions. As early adopters of this new technology, radiologists should be cautious consumers and think critically about new advancements to ensure that they are safe and effective tools in clinical practice. Integration of machine learning into the daily workflow has the potential to augment our capabilities, making radiologists more efficient, more focused on diagnosis and higher-order tasks, and better able to address the needs of referring physicians and patients.

REFERENCES

  • 1. Hastie T, Tibshirani R, Friedman J. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer Series in Statistics.
  • 2. Rosenblatt F. The perceptron: a probabilistic model for information storage and organization in the brain. Psychol Rev. 1958;65(6):386–408.
  • 3. Minsky M, Papert S. Perceptrons: An Introduction to Computational Geometry. MIT Press; 1969.
  • 4. Rumelhart DE, Hinton GE, Williams RJ. Learning representations by back-propagating errors. Nature. 1986;323(6088):533–536. doi:10.1038/323533a0
  • 5. Cybenko G. Approximation by superpositions of a sigmoidal function. Math Control Signals Syst. 1989;2(4):303–314. doi:10.1007/BF02551274
  • 6. LeCun Y, Boser B, Denker JS, et al. Handwritten digit recognition with a back-propagation network. 1990.
  • 7. Hinton GE, Osindero S, Teh Y-W. A fast learning algorithm for deep belief nets. Neural Comput. 2006;18(7):1527–1554. doi:10.1162/neco.2006.18.7.1527
  • 8. Russakovsky O, Deng J, Su H, et al. ImageNet Large Scale Visual Recognition Challenge.
  • 9. Krizhevsky A, Sutskever I, Hinton GE. ImageNet classification with deep convolutional neural networks. Proc 25th Int Conf Neural Inf Process Syst. 2012;1:1097–1105.
  • 10. Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition. September 2014.
  • 11. Szegedy C, Liu W, Jia Y, et al. Going deeper with convolutions.
  • 12. He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. December 2015.
  • 13. Girshick R, Donahue J, Darrell T, Malik J. Rich feature hierarchies for accurate object detection and semantic segmentation. November 2013.
  • 14. Long J, Shelhamer E, Darrell T. Fully convolutional networks for semantic segmentation. November 2014.
  • 15. Chollet F. Keras. 2018.
  • 16. Ding J, Li A, Hu Z, Wang L. Accurate pulmonary nodule detection in computed tomography images using deep convolutional neural networks. June 2017.
  • 17. Ronneberger O, Fischer P, Brox T. U-Net: convolutional networks for biomedical image segmentation. Springer, Cham; 2015:234–241. doi:10.1007/978-3-319-24574-4_28
  • 18. Maglaveras N, Stamkopoulos T, Diamantaras K, Pappas C, Strintzis M. ECG pattern recognition and classification using non-linear transformations and neural networks: a review. Int J Med Inform. 1998;52(1-3):191–208.
  • 19. Kiranyaz S, Ince T, Gabbouj M. Real-time patient-specific ECG classification by 1-D convolutional neural networks. IEEE Trans Biomed Eng. 2016;63(3):664–675. doi:10.1109/TBME.2015.2468589
  • 20. Afsar FA. Detection of ST segment deviation episodes in ECG using KLT with an ensemble neural classifier. Physiol Meas. 2008. doi:10.1088/0967-3334/29/7/004
  • 21. Rajpurkar P, Hannun AY, Haghpanahi M, Bourn C, Ng AY. Cardiologist-level arrhythmia detection with convolutional neural networks.
  • 22. Al'Aref SJ, Anchouche K, Singh G, et al. Clinical applications of machine learning in cardiovascular disease and its relevance to cardiac imaging. Eur Heart J. doi:10.1093/eurheartj/ehy404
  • 23. Singh G, Al'Aref SJ, van Assen M, et al. Machine learning in cardiac CT: basic concepts and contemporary data. J Cardiovasc Comput Tomogr. 2018;12(3):192–201. doi:10.1016/j.jcct.2018.04.010
  • 24. Thawani R, McLane M, Beig N, et al. Radiomics and radiogenomics in lung cancer: a review for the clinician. Lung Cancer. 2018;115:34–41. doi:10.1016/j.lungcan.2017.10.015
  • 25. Avendi MR, Kheradvar A, Jafarkhani H. Automatic segmentation of the right ventricle from cardiac MRI using a learning-based approach. Magn Reson Med. 2017. doi:10.1002/mrm.26631
  • 26. Lieman-Sifry J, Le M, Lau F, Sall S, Golden D. FastVentricle: cartesian neural network ventricular segmentation with sub-pixel accuracy. April 2017.
  • 27. Winther HB, Schmidt B, Wacker FK, Vogel-Claussen J. ν-net: deep learning for generalized biventricular cardiac mass and function parameters. June 2017.
  • 28. Tan LK, McLaughlin RA, Lim E, Abdul Aziz YF, Liew YM. Fully automated segmentation of the left ventricle in cine cardiac MRI using neural network regression. J Magn Reson Imaging. 2018;48(1):140–152. doi:10.1002/jmri.25932
  • 29. Petitjean C, Zuluaga MA, Bai W, et al. Right ventricle segmentation from cardiac MRI: a collation study. Med Image Anal. 2015. doi:10.1016/j.media.2014.10.004
  • 30. Cardio - Arterys.
  • 31. Cardiac MRI and CT Software - Circle Cardiovascular Imaging.
  • 32. HeartVista Cardiac Package | HeartVista.
  • 33. Baessler B, Mannil M, Oebel S, Maintz D, Alkadhi H, Manka R. Subacute and chronic left ventricular myocardial scar: accuracy of texture analysis on nonenhanced cine MR images. Radiology. 2018. doi:10.1148/radiol.2017170213
  • 34. Dawes TJW, de Marvao A, Shi W, et al. Machine learning of three-dimensional right ventricular motion enables outcome prediction in pulmonary hypertension: a cardiac MR imaging study. Radiology. 2017;283(2). doi:10.1148/radiol.2016161315
  • 35. Ortiz J, Ghefter CG, Silva CE, Sabbatini RM. One-year mortality prognosis in heart failure: a neural network approach based on echocardiographic data. J Am Coll Cardiol. 1995;26(7):1586–1593. doi:10.1016/0735-1097(95)00385-1
  • 36. Narula S, Shameer K, Salem Omar AM, Dudley JT, Sengupta PP. Machine-learning algorithms to automate morphological and functional assessments in 2D echocardiography. J Am Coll Cardiol. 2016. doi:10.1016/j.jacc.2016.08.062
  • 37. Sengupta PP, Huang Y-M, Bansal M, et al. A cognitive machine learning algorithm for cardiac imaging: a pilot study for differentiating constrictive pericarditis from restrictive cardiomyopathy. Circ Cardiovasc Imaging. 2016;9(6). doi:10.1161/CIRCIMAGING.115.004330
  • 38. Sengur A. Support vector machine ensembles for intelligent diagnosis of valvular heart disease. J Med Syst. 2012;36(4):2649–2655. doi:10.1007/s10916-011-9740-z
  • 39. Moghaddasi H, Nourian S. Automatic assessment of mitral regurgitation severity based on extensive textural features on 2D echocardiography videos. Comput Biol Med. 2016. doi:10.1016/j.compbiomed.2016.03.026
  • 40. Knackstedt C, Bekkers SCAM, Schummers G, et al. Fully automated versus standard tracking of left ventricular ejection fraction and longitudinal strain: the FAST-EFs multicenter study. J Am Coll Cardiol. 2015;66. doi:10.1016/j.jacc.2015.07.052
  • 41. Vidya KS, Ng EYK, Acharya UR, Chou SM, Tan RS, Ghista DN. Computer-aided diagnosis of myocardial infarction using ultrasound images with DWT, GLCM and HOS methods: a comparative study. Comput Biol Med. 2015;62:86–93. doi:10.1016/j.compbiomed.2015.03.033
  • 42. Sudarshan VK, Rajendra Acharya U, Ng EYK, San Tan R, Chou SM, Ghista DN. Data mining framework for identification of myocardial infarction stages in ultrasound: a hybrid feature extraction paradigm (part 2). Comput Biol Med. 2016. doi:10.1016/j.compbiomed.2016.01.029
  • 43. Wolterink JM, Leiner T, de Vos BD, van Hamersvelt RW, Viergever MA, Išgum I. Automatic coronary artery calcium scoring in cardiac CT angiography using paired convolutional neural networks. Med Image Anal. 2016;34:123–136. doi:10.1016/j.media.2016.04.004
  • 44. Takx RAP, de Jong PA, Leiner T, et al. Automated coronary artery calcification scoring in non-gated chest CT: agreement and reliability. PLoS One. 2014. doi:10.1371/journal.pone.0091239
  • 45. Isgum I, Prokop M, Niemeijer M, Viergever MA, van Ginneken B. Automatic coronary calcium scoring in low-dose chest computed tomography. IEEE Trans Med Imaging. 2012;31(12):2322–2334. doi:10.1109/TMI.2012.2216889
  • 46. Išgum I, de Vos BD, Wolterink JM, et al. Automatic determination of cardiovascular risk by CT attenuation correction maps in Rb-82 PET/CT. J Nucl Cardiol. April 2017. doi:10.1007/s12350-017-0866-3
  • 47. Min JK, Taylor CA, Achenbach S, et al. Noninvasive fractional flow reserve derived from coronary CT angiography. JACC Cardiovasc Imaging. 2015;8. doi:10.1016/j.jcmg.2015.08.006
  • 48. Taylor CA, Gaur S, Leipsic J, et al. Effect of the ratio of coronary arterial lumen volume to left ventricle myocardial mass derived from coronary CT angiography on fractional flow reserve. J Cardiovasc Comput Tomogr. 2017;11(6):429–436. doi:10.1016/j.jcct.2017.08.001
  • 49. Coenen A, Lubbers MM, Kurata A, et al. Fractional flow reserve computed from noninvasive CT angiography data: diagnostic performance of an on-site clinician-operated computational fluid dynamics algorithm. Radiology. 2015;274(3). doi:10.1148/radiol.14140992
  • 50. Coenen A, Kim Y-H, Kruk M, et al. Diagnostic accuracy of a machine-learning approach to coronary computed tomographic angiography-based fractional flow reserve. Circ Cardiovasc Imaging. 2018;11(6):e007217. doi:10.1161/CIRCIMAGING.117.007217
  • 51. Duguay TM, Tesche C, Vliegenthart R, et al. Coronary computed tomographic angiography-derived fractional flow reserve based on machine learning for risk stratification of non-culprit coronary narrowings in patients with acute coronary syndrome. Am J Cardiol. 2017;120(8):1260–1266. doi:10.1016/j.amjcard.2017.07.008
  • 52. Itu L, Rapaka S, Passerini T, et al. A machine-learning approach for computation of fractional flow reserve from coronary computed tomography. J Appl Physiol. 2016;121:42–52. doi:10.1152/japplphysiol.00752.2015
  • 53. Motwani M, Dey D, Berman DS, et al. Machine learning for prediction of all-cause mortality in patients with suspected coronary artery disease: a 5-year multicentre prospective registry analysis. Eur Heart J. 2017. doi:10.1093/eurheartj/ehw188
  • 54. Mannil M, von Spiczak J, Manka R, Alkadhi H. Texture analysis and machine learning for detecting myocardial infarction in noncontrast low-dose computed tomography. Invest Radiol. 2018;53(6):338–343. doi:10.1097/RLI.0000000000000448
  • 55. National Lung Screening Trial Research Team. Reduced lung-cancer mortality with low-dose computed tomographic screening. N Engl J Med. 2011. doi:10.1056/NEJMoa1102873
  • 56. Final Update Summary: Lung Cancer: Screening - US Preventive Services Task Force.
  • 57. Valente IRS, Cortez PC, Neto EC, Soares JM, de Albuquerque VHC, Tavares JMRS. Automatic 3D pulmonary nodule detection in CT images: a survey. Comput Methods Programs Biomed. 2016;124:91–107. doi:10.1016/j.cmpb.2015.10.006
  • 58. LUNA16 Grand Challenge.
  • 59. Goo JM. A computer-aided diagnosis for evaluating lung nodules on chest CT: the current status and perspective. Korean J Radiol. 2011;12(2):145–155. doi:10.3348/kjr.2011.12.2.145
  • 60. Das M, Ley-Zaporozhan J, Gietema HA, et al. Accuracy of automated volumetry of pulmonary nodules across different multislice CT scanners. Eur Radiol. 2007;17(8):1979–1984. doi:10.1007/s00330-006-0562-1
  • 61. Lo SB, Freedman MT, Gillis LB, White CS, Mun SK. Computer-aided detection of lung nodules on CT with a computerized pulmonary vessel suppressed function. 2018;(March):480–488.
  • 62. Setio AAA, Traverso A, de Bel T, et al. Validation, comparison, and combination of algorithms for automatic detection of pulmonary nodules in computed tomography images: the LUNA16 challenge. Med Image Anal. 2017;42:1–13. doi:10.1016/j.media.2017.06.015
  • 63. Li H, Galperin-Aizenberg M, Pryma D, Simone CB, Fan Y. Unsupervised machine learning of radiomic features for predicting treatment response and overall survival of early stage non-small cell lung cancer patients treated with stereotactic body radiation therapy. Radiother Oncol. 2018.
  • 64. Aerts HJWL, Rios Velazquez E, Leijenaar RTH, et al. Decoding tumour phenotype by noninvasive imaging using a quantitative radiomics approach. Nat Commun. 2014;5. doi:10.1038/ncomms5006
  • 65. Song J, Liu Z, Zhong W, et al. Non-small cell lung cancer: quantitative phenotypic analysis of CT images as a potential marker of prognosis. Sci Rep. 2016. doi:10.1038/srep38282
  • 66. Huang Y, Liu Z, He L, et al. Radiomics signature: a potential biomarker for the prediction of disease-free survival in early-stage (I or II) non-small cell lung cancer. Radiology. 2016;281(3):947–957. doi:10.1148/radiol.2016152234
  • 67. Kolossváry M, Kellermayer M, Merkely B, Maurovich-Horvat P. Cardiac computed tomography radiomics. J Thorac Imaging. 2018;33(1):26–34. doi:10.1097/RTI.0000000000000268
  • 68. Vardhanabhuti V, Kuo MD. Lung cancer radiogenomics. J Thorac Imaging. 2018;33(1):17–25. doi:10.1097/RTI.0000000000000312
  • 69. Zhou M, Leung A, Echegaray S, et al. Non-small cell lung cancer radiogenomics map identifies relationships between molecular and imaging phenotypes with prognostic implications. Radiology. 2017;286(1). doi:10.1148/radiol.2017161845
  • 70. Aerts HJWL, Grossmann P, Tan Y, et al. Defining a radiomic response phenotype: a pilot study using targeted therapy in NSCLC. Sci Rep. doi:10.1038/srep33860
  • 71. Yamamoto S, Korn RL, Oklu R, et al. ALK molecular phenotype in non-small cell lung cancer: CT radiogenomic characterization. Radiology. 2014;272(2):568. doi:10.1148/radiol.14140789
  • 72. Rizzo S, Petrella F, Buscarino V, et al. CT radiogenomic characterization of EGFR, K-RAS, and ALK mutations in non-small cell lung cancer. Eur Radiol. doi:10.1007/s00330-015-3814-0
  • 73. Ying J, Dutta J, Guo N, et al. Classification of exacerbation frequency in the COPDGene cohort using deep learning with deep belief networks. IEEE J Biomed Health Inform. December 2016. doi:10.1109/JBHI.2016.2642944
  • 74. González G, Ash SY, Vegas Sanchez-Ferrero G, et al. Disease staging and prognosis in smokers using deep learning in chest computed tomography. Am J Respir Crit Care Med. September 2017. doi:10.1164/rccm.201705-0860OC
  • 75. López-Linares K, Aranjuelo N, Kabongo L, et al. Fully automatic detection and segmentation of abdominal aortic thrombus in post-operative CTA images using deep convolutional neural networks. Med Image Anal. 2018;46:202–214. doi:10.1016/j.media.2018.03.010
  • 76. Kim SY, Diggans J, Pankratz D, et al. Classification of usual interstitial pneumonia in patients with interstitial lung disease: assessment of a machine learning approach using high-dimensional transcriptional data. Lancet Respir Med. 2015;3(6):473–482. doi:10.1016/S2213-2600(15)00140-X
  • 77. MICCAI 2018. Workshops, challenges & tutorials.
  • 78. Morris MA, Saboury B, Burkett B, Gao J, Siegel EL. Reinventing radiology: big data and the future of medical imaging. J Thorac Imaging. 2018;33(1):4–16. doi:10.1097/RTI.0000000000000311
  • 79. NLST. Datasets - NLST - The Cancer Data Access System.
  • 80. UK Biobank. Resources | UK Biobank.
  • 81. Armato SG, McLennan G, Bidaut L, et al. The Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI): a completed reference database of lung nodules on CT scans. 2011.
  • 82. Ringenberg J, Deo M, Devabhaktuni V, Berenfeld O, Boyers P, Gold J. Fast, accurate, and fully automatic segmentation of the right ventricle in short-axis cardiac MRI. Comput Med Imaging Graph. 2014. doi:10.1016/j.compmedimag.2013.12.011
  • 83. Le BM, Lieman-Sifry J, Lau F, Sall S, Hsiao A, Golden D. Computationally efficient cardiac views projection using 3D convolutional neural networks. 2017:109–116. doi:10.1007/978-3-319-67558-9
  • 84. TOUCH-AI Directory | American College of Radiology.
  • 85. ACR Data Science Institute Structures Artificial Intelligence Development to Optimize Radiology Care | American College of Radiology.
  • 86. Center for Devices and Radiological Health. Overview of Device Regulation.
  • 87. Center for Devices and Radiological Health. How to Study and Market Your Device - Expedited Access Pathway Program.
  • 88. Center for Devices and Radiological Health. Clinical Performance Assessment: Considerations for Computer-Assisted Detection Devices Applied to Radiology Images and Radiology Device Data - Premarket Approval (PMA) and Premarket N.
  • 89. Ridley E. C-MIMI: FDA decision paves the way for imaging AI.
  • 90. Suinesiaputra A, Cowan BR, Al-Agamy AO, et al. A collaborative resource to build consensus for automated left ventricular segmentation of cardiac MR images. Med Image Anal. 2014. doi:10.1016/j.media.2013.09.001
  • 91. Self-driving Uber kills Arizona woman in first fatal crash involving pedestrian. The Guardian. https://www.theguardian.com/technology/2018/mar/19/uber-self-driving-car-kills-woman-arizona-tempe. Accessed September 30, 2018.
  • 92. Tesla autopilot is not having a great day. Slate. https://slate.com/technology/2018/09/tesla-autopilot-problems-elon-musk.html. Accessed September 30, 2018.
