Seminars in Interventional Radiology. 2022 Aug 31;39(3):341–347. doi: 10.1055/s-0042-1753524

Artificial Intelligence in Interventional Radiology

Joseph R Kallini 1, John M Moriarty 1
PMCID: PMC9433147  PMID: 36062217

The concept of artificial intelligence (AI) elicits futuristic images in the minds of many, perhaps reminding them of films about dystopian worlds. References to AI go back to the mid-20th century and the work of British computer scientist Alan Turing, whose Church-Turing thesis holds that a function on the natural numbers can be computed by a human following an algorithm if and only if it can be computed by a Turing machine (computer). Because computers of the era were not powerful, applications were limited. 1

John McCarthy, esteemed mathematician and computer scientist, coined the term “artificial intelligence” in 1956 during a summer workshop at Dartmouth College. He later complained that, “as soon as it works, no one calls it AI anymore.” 2 By 1959, machines began playing checkers better than humans. In 1965, McCarthy founded the Stanford Artificial Intelligence Laboratory, where groundbreaking research was conducted in computing and automation. 3

AI took a back seat during the 1970s and 1980s (the so-called AI winters), but major breakthroughs occurred soon afterward. The 1990s were the dawn of faster computers and big data. In 1997, Deep Blue—a chess-playing supercomputer developed by International Business Machines (IBM) Corporation—defeated reigning world champion Garry Kasparov in a six-game match. 1 In 2011, IBM's Watson supercomputer defeated two reigning champions at Jeopardy!; that same year, McCarthy passed away.

AI in radiology, though still in its infancy, has made tremendous strides. AI has been investigated in interventional radiology (IR) as well, though to a lesser extent. This article will summarize the role of AI in radiology and IR thus far and delve into its future applications.

Understanding the Vocabulary

A large part of understanding the current state of AI is understanding the vocabulary. “AI,” “machine learning (ML),” and “deep learning (DL)” are commonly used terms that are, unfortunately, not synonymous. The following is a broad overview of the definitions relevant to radiology and IR.

Artificial Intelligence

Artificial intelligence, also known as machine intelligence, is a broad umbrella term used to denote a machine's cognitive abilities that are akin to those of the human mind. 1 Lynne Parker, director of the Division of Information and Intelligent Systems at the National Science Foundation, defines AI as “a broad set of methods, algorithms, and technologies that make a software ‘smart’ in a way that may seem human-like to an outside observer.” 2

AI covers a wide variety of applications and techniques ( Fig. 1 ). In this sense, the term “AI” resembles terms like “science,” “biology,” and “mathematics,” in that multiple largely unrelated branches fall under one broad category. For example, AI is used in commercial software in the form of speech processing, navigation systems, video games, and meeting scheduling. AI also involves hardware in the form of robotics. In radiology, the most commonly cited form of AI is ML. 4

Fig. 1 Branches of artificial intelligence. (Derived from Andriole. 4 )

The current era of AI has been referred to as “narrow AI” or “weak AI,” in which the AI specializes in one limited area or task. In that regard, one form of AI can be powerful enough to beat a reigning world chess champion but would not know how to beat a 5-year-old at tic-tac-toe. 2 We are far from the kind of AI that Hollywood constantly describes: artificial general intelligence, also known as “strong AI” or “human level AI,” where the machine can reason or be “self-aware.”

Machine Learning

Machine learning, a subset of AI, is the process in which a machine learns from its environment and uses that knowledge for future applications. 1 Though this sounds abstract, ML is best understood as a class of advanced statistical techniques in which the program develops algorithms based on input datasets and then applies those algorithms to unseen datasets. 2 A simplified analogy is the way statistical programs construct a best-fit line through multiple provided data points. Machine “learning” simply uses iterative techniques to make predictions from available data and then apply the trends to new data. The ideal ML model should include inputs that are relevant to the outcome and should be generalizable enough to be applied to the given population.
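
The best-fit line analogy can be made concrete in a few lines of code: fit a simple model to training data, then apply the learned trend to unseen inputs. This is an illustrative sketch with made-up numbers, not a clinical model.

```python
import numpy as np

# "Learning" as curve fitting: estimate a trend from training data,
# then apply it to data the model has never seen.
rng = np.random.default_rng(0)

# Training data: a noisy linear relationship (e.g., input feature vs. outcome)
x_train = np.linspace(0, 10, 50)
y_train = 2.0 * x_train + 1.0 + rng.normal(0, 0.5, size=50)

# "Training": estimate slope and intercept by least squares
slope, intercept = np.polyfit(x_train, y_train, deg=1)

# "Prediction": apply the learned trend to unseen inputs
x_new = np.array([12.0, 15.0])
y_pred = slope * x_new + intercept
print(y_pred)  # close to [25, 31]
```

The same train-then-predict pattern underlies far more elaborate models; only the model's capacity and the size of the dataset change.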

Within medicine, the ML process is far more complex and variable. Most ML applications focus on predicting outcomes, patient distribution, and detection based on epidemiological and trial-driven input. In radiology, ML requires a great deal of computing power, datasets, and materials. ML has gained considerable press, especially at recent Radiological Society of North America (RSNA) annual meetings. 4 5 6 Typical ML tasks in radiology are the identification of specific patterns/conditions and image segmentation, the partitioning of digital images into meaningful parts (i.e., pixels or segments) for interpretation. Applications of ML include, but are not limited to, fatty liver detection on ultrasound, carotid plaque characterization on computed tomography (CT), and CT coronary angiography lesion detection. 7 Due to the growing influence of ML on vital decision making in medicine, many have advocated for methods to evaluate the accuracy of ML algorithms, such as cross-validation. 5
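
Cross-validation can be sketched as follows: the data are split into k folds, and the model is repeatedly trained on k-1 folds and tested on the held-out fold, so every observation is used for testing exactly once. The "model" below is a deliberately simple line fit on synthetic data, chosen only to keep the sketch short.

```python
import numpy as np

# k-fold cross-validation sketch: rotate a held-out test fold through the
# data; the average held-out error estimates how the model will perform
# on unseen patients.
def k_fold_scores(x, y, k=5):
    folds = np.array_split(np.arange(len(x)), k)
    scores = []
    for i in range(k):
        test_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        # "Train" on k-1 folds...
        slope, intercept = np.polyfit(x[train_idx], y[train_idx], deg=1)
        # ...then "test" on the held-out fold (mean squared error)
        scores.append(np.mean((slope * x[test_idx] + intercept - y[test_idx]) ** 2))
    return scores

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 100)
y = 3.0 * x + rng.normal(0, 1.0, size=100)
print(np.mean(k_fold_scores(x, y)))  # near the noise variance of 1.0
```

A model that fits the training folds well but scores poorly on held-out folds is overfitting, which is precisely the failure mode cross-validation is designed to expose.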

Artificial Neural Network

Machine learning itself has many subfields, one of which is neural networks, also known as “artificial neural networks” (ANNs). 2 An ANN approximates the complex relationship between paired inputs and outputs using a simple set of building blocks loosely inspired by biological neurons. Each neuron consists of a set of weights and a bias: the inputs to a neuron are multiplied by their corresponding weights and summed, and the resulting scalar value is then passed through a nonlinear activation function. An ANN is trained using a technique called “backpropagation,” which iteratively modifies the weights given a set of input and output training pairs. As with all ML, ANNs are a form of complex math and statistics; actual biological brain connections are far more complex. 2
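
The neuron described above (inputs multiplied by weights, summed with a bias, passed through a nonlinear activation) can be sketched in a few lines. All values here are arbitrary illustrations.

```python
import numpy as np

# A single artificial neuron: weighted sum of inputs plus a bias,
# squashed by a nonlinear activation function (here, the sigmoid).
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def neuron(inputs, weights, bias):
    # dot product = multiply each input by its weight and sum
    return sigmoid(np.dot(inputs, weights) + bias)

# Example: a neuron with three inputs (values are arbitrary)
x = np.array([0.5, -1.0, 2.0])
w = np.array([0.8, 0.2, -0.5])
b = 0.1
print(neuron(x, w, b))  # ≈ 0.33, always between 0 and 1
```

An ANN is simply many such neurons arranged in layers, with backpropagation nudging the weights and biases after each training example.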

Convolutional neural networks (CNNs) are a special type of ANN adapted to regular grid-like signals such as radiologic images. A simple CNN contains five layers: input, convolution, pooling, a fully connected layer, and an output layer 8 ( Fig. 2 ). A “convolution” slides a small matrix of weights (the kernel) across the image; at each position, the overlapping pixel values are multiplied elementwise by the kernel weights and summed into a single value of a new matrix. Applied to imaging, convolution underlies edge detection and sharpening, which accentuate the data found at edges/interfaces. Pooling, by contrast, bins or compresses large data (at the cost of information/resolution) by reducing multiple pixels to one representative pixel. 9
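
A minimal sketch of the convolution and pooling operations described above, applied to a toy image containing a single vertical edge. (Strictly speaking, CNNs compute a cross-correlation, i.e., the kernel is not flipped; the code follows that convention.)

```python
import numpy as np

# Slide a small kernel over the image; at each position, multiply the
# overlapping values elementwise and sum them into one output value.
def convolve2d(image, kernel):
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Max pooling: compress each non-overlapping block to its largest value.
def max_pool2d(image, size=2):
    h, w = image.shape
    trimmed = image[:h - h % size, :w - w % size]
    return trimmed.reshape(h // size, size, w // size, size).max(axis=(1, 3))

# Toy "image": dark left half, bright right half (a vertical edge)
image = np.zeros((5, 6))
image[:, 3:] = 1.0

# Edge-detection kernel: responds where intensity changes left to right
kernel = np.array([[-1.0, 1.0],
                   [-1.0, 1.0]])

edges = convolve2d(image, kernel)
print(edges)              # nonzero only at the edge column
print(max_pool2d(edges))  # smaller matrix, edge response preserved
```

Note how pooling halves each dimension while keeping the edge response, exactly the compression-at-the-cost-of-resolution trade-off described above.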

Fig. 2 A pictorial representation of the codependence and interconnectivity of neurons in a convolutional neural network. (Adapted from Phung. 8 )

Deep Learning

Deep learning, also a subset of ML, utilizes many stacked ANN layers, allowing the more sophisticated performance desirable for imaging. 2 In radiology, DL surpasses traditional ML in that the user does not have to hand-craft the features going into the models; the machine extracts features autonomously and classifies them as it goes. Not only is this easier, but the machine may be better at extracting features that are not humanly intuitive. For example, the human eye distinguishes only approximately 2^7 to 2^8 gray levels, whereas a CT scan registers up to 2^16 levels of information (which is why the radiologist must window the image to see all of the data). Contrary to humans, the machine does not have this visual deficit. 9
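
The windowing point can be illustrated directly: a chosen range of Hounsfield units is remapped onto the 2^8 = 256 gray levels a display (or the eye) can handle, with values outside the window clipping to black or white. The level/width values below are typical soft-tissue settings chosen for illustration.

```python
import numpy as np

# CT windowing: map the Hounsfield range [level - width/2, level + width/2]
# onto 0..255 for display; values outside the window clip to black or white.
def window_image(hu, level, width):
    lo, hi = level - width / 2.0, level + width / 2.0
    clipped = np.clip(hu, lo, hi)
    return ((clipped - lo) / (hi - lo) * 255.0).astype(np.uint8)

# Toy Hounsfield values: air, fat, water, soft tissue, bone
hu = np.array([-1000, -100, 0, 40, 1000])

# A soft-tissue window (level 40, width 400)
print(window_image(hu, level=40, width=400))  # [  0  38 102 127 255]
```

With this window, air and dense bone both clip (0 and 255), while fat, water, and soft tissue spread across the visible gray scale; a bone or lung window would make the opposite trade-off. A DL model working on the raw 16-bit data never has to make this choice.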

A recent advance in DL is the generative adversarial network (GAN), in which a computer is trained to create images that look “real” even though they are completely synthetic ( Fig. 3 ). 10 One can observe GANs' potential at the Web site https://thispersondoesnotexist.com , which generates convincing artificial faces of people who do not exist. GANs utilize a “generator” and a “discriminator.” The generator creates fake samples, and the discriminator decides whether a given sample came from the generator or from the real data. This constant “battle” between the generator and discriminator is the reason these networks are termed “adversarial.” The system improves over many iterations. A special type of GAN (the conditional GAN) is used to transform magnetic resonance (MR) sequences into artificially rendered CT, PET into attenuation-correction CT, and one MR sequence into another, all of which can be useful in improving diagnoses and avoiding extra imaging/radiation. 11
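
The generator/discriminator battle can be sketched on one-dimensional data instead of images. Here a generator g(z) = a·z + b tries to mimic samples from a fixed "real" distribution, while a logistic discriminator scores how real each sample looks; gradients are taken by finite differences purely to keep the sketch short (real GANs use backpropagation), and all distributions and learning rates are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

g_params = np.array([1.0, 0.0])   # a, b: generator starts producing ~N(0, 1)
d_params = np.array([0.0, 0.0])   # w, c: discriminator d(x) = sigmoid(w*x + c)

def d_loss(dp, gp, real, z):
    fake = gp[0] * z + gp[1]
    p_real = sigmoid(dp[0] * real + dp[1])
    p_fake = sigmoid(dp[0] * fake + dp[1])
    # discriminator wants p_real -> 1 and p_fake -> 0
    return -np.mean(np.log(p_real + 1e-9) + np.log(1 - p_fake + 1e-9))

def g_loss(gp, dp, z):
    fake = gp[0] * z + gp[1]
    # generator wants the discriminator to call its samples real
    return -np.mean(np.log(sigmoid(dp[0] * fake + dp[1]) + 1e-9))

def grad(f, params, *args, eps=1e-4):
    g = np.zeros_like(params)
    for i in range(len(params)):
        step = np.zeros_like(params)
        step[i] = eps
        g[i] = (f(params + step, *args) - f(params - step, *args)) / (2 * eps)
    return g

for _ in range(2000):
    real = rng.normal(4.0, 0.5, size=64)   # "real" data: N(4, 0.5)
    z = rng.normal(0.0, 1.0, size=64)      # noise fed to the generator
    d_params -= 0.05 * grad(d_loss, d_params, g_params, real, z)
    g_params -= 0.05 * grad(g_loss, g_params, d_params, z)

print(g_params[1])  # the generator's offset b drifts toward the real mean (4.0)
```

As the generator's output distribution approaches the real one, the discriminator's scores collapse toward 0.5 and neither side can improve, which is the adversarial equilibrium the paragraph above describes.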

Fig. 3 Pictorial representations of how generative adversarial networks work. (Adapted from Erickson. 10 )

Indeed, DL is an enormous industry that has been strongly driven by the corporate sector: Apple, Google, Microsoft, Facebook. 4 The most pressing issue with DL and GANs is the massive amount of data needed to develop trustworthy results. Because GANs have many parameters, they need millions of examples. For instance, there are plenty of photos of human faces in a consistent frontal pose for GANs to learn from, resulting in the convincing artificial photos seen online. This is not the case for cats: cat photos come in widely varying poses, which limits GANs' ability to construct artificial cats accurately, as seen in many of the reconstructions at a similar Web site: https://thiscatdoesnotexist.com .

Applications to Interventional Radiology

AI has been fleshed out to some extent in the diagnostic arena. However, its applications in IR are in their infancy and vastly variable. The IR community is actively seeking answers for how to apply AI to IR, so much so that the Society of Interventional Oncology held its first Artificial Intelligence Hackathon, a one-hour open session in which teams pitched AI-based research proposals to panelists and industry representatives for a grand prize of $7,000. 12

Although there are limitless possibilities in IR, AI has potential to improve pre- and periprocedural imaging, clinical predictive models, robotics, automation of repetitive tasks, workflow, and teaching. 13

Imaging

Many of the AI applications in diagnostics are directly translatable to preinterventional planning. Volumetric analysis has been used for many years to determine liver segment size in preparation for locoregional therapy. 14 AI is already being used in periprocedural angiography to assess vessel stenosis, flow dynamics, and stent size planning. 15 Vessel tracking software is available for hepatic angiography and has even been extended beyond the liver. 16 Multimodality imaging fusion has also been studied and has become commercially available: ultrasound/CT for tumor ablation, 17 ultrasound/MRI, and ultrasound/positron emission tomography/CT. 18

As previously mentioned, GANs have been used to artificially transform preoperative MR into artificially rendered CT, which can be helpful to the interventional radiologist in studying vascular anatomy. One can imagine the role of GANs applied to preprocedural arterial-phase CT or MR to construct artificial digital subtraction angiography in various projections, which can be made available to the interventionalist even before the procedure takes place. This can save interventionalists a great deal of time and fluoroscopy during complex procedures like prostatic artery embolization, where the target arteries may be difficult or nearly impossible to identify. This can also be extremely helpful in planning studies for hepatic radioembolization, which often requires a mapping visceral angiogram to be performed 2 to 3 weeks prior to treatment. Once GANs technology matures, it may even be possible to do away with mapping angiography altogether. However, it is important to keep in mind that GANs produce images that are “like the real images,” not the real images themselves. This can be dangerous, as our reconstructed images would essentially be hallucinated data from the input imaging.

Clinical Decision Making

IR, as with all clinical specialties, takes great interest in patient prognostic indicators and predictors of treatment response. In the IR arena, there are a multitude of studies investigating such predictors. 19 20 21 22 23 24 Some investigators have studied AI for these purposes. Abajian et al used ML on pre-locoregional therapy MR to predict hepatocellular carcinoma treatment response. 25 Another group utilized an ANN to compare the prognostic performance of the albumin–bilirubin grade and the Child–Turcotte–Pugh score for hepatocellular carcinoma status post transarterial chemoembolization. 26 The greatest limitation in applying AI to such prognosticators is the paucity of data; studies with far larger patient cohorts must be conducted to provide reliable results.

Procedural Accuracy

As patient demand grows in IR, so does the need for periprocedural quality. It is unfortunate when a complex, invasive, and potentially life-threatening biopsy procedure yields an indeterminate result. Furthermore, the advent of targeted patient therapy has necessitated more tissue than was previously required. ANNs and DL have been employed in tissue sample analysis, which not only mitigates the amount of tissue needed but can also be used perioperatively by the interventionalist to confirm that the biopsy is diagnostic. 27 One can even imagine AI's role in doing away with direct tissue sampling altogether.

Robotics and Navigation

Robotics and automation have played major roles in the surgical subspecialties and have begun to do so in IR as well. A robot is “a reprogrammable, multifunctional manipulator designed to move materials, parts, tools, or other specialized devices through various programmed motions for the performance of a variety of tasks.” 28 As such, robots are a subset of AI, albeit much different from the forms prevalent in diagnostic radiology. Medical robots include ROBODOC (the first medical robot, introduced in 1992 for orthopaedic surgery) and the da Vinci system for urology. In IR, the Light Puncture Robot has been utilized in France for CT/MR-guided thoracic and abdominal punctures. 29 In ultrasound, robot-assisted needle steering for percutaneous access has been investigated. 30 The AcuBot is a robot used in fluoroscopic interventions for perispinal nerve and facet joint injections. 31 Robots have also been used for perioperative endovascular steering in cardiology and aortic grafting. 32 33 34

Image-guided navigation tools, a form of AI, have also been studied. 18 One type is electromagnetic navigation (a technology similar to the satellite-based Global Positioning System), which is used in intraprocedural CT biopsy tracking. 35 Another tool used with CT and MR is optical tracking (e.g., the Activiews CT-Guide needle guidance system; Stryker Corporation, Kalamazoo, MI), which uses a video camera with skin markers for localization. 18 36 Laser guidance with conventional and cone-beam CT is also commercially available. 37 38

Recently, automation in the form of touchless interaction software has allowed operators to perform procedures without placing their hands on equipment. 39 Voice recognition software has also come into play in this arena. 40 Virtual reality is on the horizon, now being used for navigation and to render blood vessels as three-dimensional reconstructions available to the interventionalist in real time. 41

Resident Training

Hospital demand for fast, accurate imaging turnaround and reliable procedural technical success has increased the involvement of attending physicians in tasks previously delegated entirely to resident physicians. This will inevitably take its toll on resident education. AI has thus been investigated to provide simulated learning alternatives for residents. 42 The efficacy of robotic guidance in virtual surgical simulators has been studied. 43 Video games (which utilize AI) have been implemented in diagnostic radiology education. 44 Others are currently utilizing ML algorithms for resident performance evaluation. 45 AI has even been investigated at the medical student level for scoring written tests and automating the assessment of procedures, communication (via facial recognition or linguistic analysis), and history-taking skills (speech to text). 46 As with other applications of AI, meticulous cross-validation is required to ensure fairness.

The Pitfalls of AI

AI, in all its facets, is a powerful tool. Interventionalists, diagnosticians, and specialists now have far greater potential to provide excellent patient care owing to advances in robotics, ML, and DL. However, no tool is perfect. The major pitfalls of AI are lack of data and lack of direction. AI is useful only when it answers a clinically relevant question and is built on robust, reliable data, strict definitions, meticulous cross-validation, and appropriate training. 4 In sum, garbage in equals garbage out.

Second, as AI grows and enters the corporate space, patient privacy must remain of utmost importance. Interestingly, the advent of big data (e.g., Google in 1998) took place only a few years after the Health Insurance Portability and Accountability Act (HIPAA) of 1996. As big data matured, patient privacy was clearly not at the forefront of these companies' agendas. There are no technical standards for de-identifying medical images. Are data really de-identifiable? How would de-identified labels be generated? Because AI today requires volumes of well-partitioned data, vast monetary value has been placed on these data, and this is where the corporate sector has taken the lead. Mishandling those data can be ethically detrimental. 47 One of our greatest challenges is preventing the unethical use of data for monetary profit.

Third, it is important to note that all data are inherently biased. For example, say the question arises of who should be the next chairman of a company. Deciding to use AI to make an unbiased selection may seem appealing, but imagine if the machine was trained on data from prior selections, and the prior selections all happened to be males. Inevitably, the machine would develop an algorithm biased toward males. Similar issues arise in disease diagnosis and treatment algorithms. 48 An important example occurred when AI software was trained on multiple chest radiographs to recognize pulmonary tuberculosis. Little did the investigators know, all of the positive radiographs came from a clinic whose images were labeled “TB clinic”; any radiograph bearing that label therefore biased the model toward a positive result. 49 We must be wary of the data we provide to machines and vigilant of the limitations of AI.

In addition, it is important for the clinician to be aware of automation bias: the tendency for humans to favor machine-generated decisions over human ones. This can lead to inadequate monitoring of AI and blind agreement with its decisions. In IR, this may be seen when the interventionalist trusts an erroneously favorable pre-radioembolization lung shunt fraction despite clear evidence of arteriovenous shunting during the procedure. Additionally, vessel tracking software is prone to errors; an operator must not trust a falsely identified target vessel over his or her own angiographic experience.

Finally, although there is no magic to AI (just a lot of math), it is difficult to discern exactly how the machine derives its patterns, especially with proprietary DL and GANs that autonomously extract and categorize information. This is the so-called black box of AI. 5 This no-man's land (what ML experts refer to as “lack of explainability”) lends itself to uncertainty. We have a moral duty to understand both the data put into AI and how those data are processed. 50

Conclusion: What Does the Future Hold?

One important take-home point is that AI, much like the field of IR itself, is a discipline with a multitude of diverse branches. All subsets of AI strive to develop machines with (seemingly) cognitive capabilities. As time passes and technology evolves, so will machines. The primary determinant of that evolution is what we put into the system. Machines must be cautiously equipped with reliable, unbiased data and clear tasks. In healthcare, we must respect our patients' privacy when implementing the technology and always place patients' well-being over profit. We must not be overtaken by the “black box” of AI. We must demand transparency in our algorithms or risk losing privacy, security, and impartiality.

IR was born from innovation, and our specialty thrives on the same principles that drive AI. Whether in imaging, robotics, or clinical decision making, the common principle of thinking outside the box comes into play. It is the duty of the interventional radiologist to understand the available technology and collaborate with experts in the multifaceted branches of AI. It is difficult to predict what is next for AI: more robust ML algorithms, more powerful software, more capable robotics, further commercialization, or the Hollywoodesque artificial general intelligence (unlikely). The evolution of AI will depend on unmet needs, changing times, and human pioneers. Interventional radiologists must identify these unmet needs, take action to address them, and ultimately improve the delivery of patient care.

Role of Funding

There was no funding provided for this study.

Footnotes

Conflict of Interest None declared.

References

  • 1.Charles R, ed. Artificial intelligence and IR. 5th Annual Symposium on Academic Interventional Radiology; 2018 September 7–9; Washington, DC
  • 2.Mendel J VS, ed. Learning AI from the experts: becoming an AI leader in global radiology (without needing a computer science degree) (Sponsored by the RSNA Committee of International Radiology Education). Radiological Society of North America 2019 Scientific Assembly and Annual Meeting; December 1–December 6, 2019; Chicago IL
  • 3.Computer History Museum. John McCarthy. Accessed June 16, 2022 at:https://computerhistory.org/profile/john-mccarthy/
  • 4.Andriole K. Nuts and Bolts of Machine Learning and Artificial Intelligence. Chicago, IL: RSNA; 2019 December 1 [Google Scholar]
  • 5.Handelman G S, Kok H K, Chandra R V. Peering into the black box of artificial intelligence: evaluation metrics of machine learning methods. AJR Am J Roentgenol. 2019;212(1):38–43. [DOI] [PubMed] [Google Scholar]
  • 6.Kitamura F C, ed. AI in Healthcare: Advanced Topics. RSNA; Chicago, IL [Google Scholar]
  • 7.Iezzi R, Goldberg S N, Merlino B, Posa A, Valentini V, Manfredi R. Artificial intelligence in interventional radiology: a literature review and future perspectives. J Oncol. 2019;2019:6153041. doi: 10.1155/2019/6153041. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 8.Phung R. A high-accuracy model average ensemble of convolutional neural networks for classification of cloud image patches on small datasets. Appl Sci (Basel) 2019;9:4500. [Google Scholar]
  • 9.Erickson B , ed. RSNA AI Deep Learning Lab: Beginner Class: Classification Task (Intro). Radiological Society of North America's 2019 Scientific Assembly and Annual Meeting; December 1–December 6, 2019; Chicago, IL
  • 10.Erickson B J, ed. RSNA AI Deep Learning Lab: Generative Adversarial Networks (GANs). Radiological Society of North America's 2019 Scientific Assembly and Annual Meeting; December 1–December 6, 2019; Chicago, IL
  • 11.Shih G, ed. RSNA AI Deep Learning Lab: Segmentation. Radiological Society of North America's 2019 Scientific Assembly and Annual Meeting; December 1–December 6, 2019; Chicago, IL
  • 12.Society of Interventional Oncology . Artificial Intelligence Hackathon at SIO2020
  • 13.Meek R D, Lungren M P, Gichoya J W. Machine learning for the interventional radiologist. AJR Am J Roentgenol. 2019;213(04):782–784. doi: 10.2214/AJR.19.21527. [DOI] [PubMed] [Google Scholar]
  • 14.Monsky W L, Garza A S, Kim I. Treatment planning and volumetric response assessment for yttrium-90 radioembolization: semiautomated determination of liver volume and volume of tumor necrosis in patients with hepatic malignancy. Cardiovasc Intervent Radiol. 2011;34(02):306–318. doi: 10.1007/s00270-010-9938-3. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 15.Cho H, Lee J G, Kang S J. Angiography-based machine learning for predicting fractional flow reserve in intermediate coronary artery lesions. J Am Heart Assoc. 2019;8(04):e011685. doi: 10.1161/JAHA.118.011685. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 16.Sundararajan S H, McClure T D, Winokur R S, Kishore S A, Madoff D C. Extrahepatic clinical application of vessel tracking software and 3D roadmapping tools: preliminary experience. J Vasc Interv Radiol. 2019;30(07):1021–1026. doi: 10.1016/j.jvir.2018.11.039. [DOI] [PubMed] [Google Scholar]
  • 17.Monfardini L, Orsi F, Caserta R, Sallemi C, Della Vigna P, Bonomo G. Ultrasound and cone beam CT fusion for liver ablation: technical note. Int J Hyperthermia. 2018;35(01):500–504. doi: 10.1080/02656736.2018.1509237. [DOI] [PubMed] [Google Scholar]
  • 18.Chehab M A, Brinjikji W, Copelan A, Venkatesan A M. Navigational tools for interventional radiology and interventional oncology applications. Semin Intervent Radiol. 2015;32(04):416–427. doi: 10.1055/s-0035-1564705. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 19.Wijnands T F, Ronot M, Gevers T J. Predictors of treatment response following aspiration sclerotherapy of hepatic cysts: an international pooled analysis of individual patient data. Eur Radiol. 2017;27(02):741–748. doi: 10.1007/s00330-016-4363-x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 20.Kontogianni K, Russell K, Eberhardt R. Clinical and quantitative computed tomography predictors of response to endobronchial lung volume reduction therapy using coils. Int J Chron Obstruct Pulmon Dis. 2018;13:2215–2223. doi: 10.2147/COPD.S159355. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 21.Gordic S, Corcuera-Solano I, Stueck A. Evaluation of HCC response to locoregional therapy: validation of MRI-based response criteria versus explant pathology. J Hepatol. 2017;67(06):1213–1221. doi: 10.1016/j.jhep.2017.07.030. [DOI] [PubMed] [Google Scholar]
  • 22.Lencioni R, Montal R, Torres F. Objective response by mRECIST as a predictor and potential surrogate end-point of overall survival in advanced HCC. J Hepatol. 2017;66(06):1166–1172. doi: 10.1016/j.jhep.2017.01.012. [DOI] [PubMed] [Google Scholar]
  • 23.Galastri F L, Nasser F, Affonso B B. Imaging response predictors following drug eluting beads chemoembolization in the neoadjuvant liver transplant treatment of hepatocellular carcinoma. World J Hepatol. 2020;12(01):21–33. doi: 10.4254/wjh.v12.i1.21. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 24.Kallini J R, Gabr A, Hickey R. Indicators of lung shunt fraction determined by technetium-99 m macroaggregated albumin in patients with hepatocellular carcinoma. Cardiovasc Intervent Radiol. 2017;40(08):1213–1222. doi: 10.1007/s00270-017-1619-z. [DOI] [PubMed] [Google Scholar]
  • 25.Abajian A, Murali N, Savic L J. Predicting treatment response to intra-arterial therapies for hepatocellular carcinoma with the use of supervised machine learning - an artificial intelligence concept. J Vasc Interv Radiol. 2018;29(06):850–8570. doi: 10.1016/j.jvir.2018.01.769. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 26.Zhong B Y, Ni C F, Ji J S. Nomogram and artificial neural network for prognostic performance on the albumin-bilirubin grade for hepatocellular carcinoma undergoing transarterial chemoembolization. J Vasc Interv Radiol. 2019;30(03):330–338. doi: 10.1016/j.jvir.2018.08.026. [DOI] [PubMed] [Google Scholar]
  • 27.Hricak H. 2016 new horizons lecture: beyond imaging-radiology of tomorrow. Radiology. 2018;286(03):764–775. doi: 10.1148/radiol.2017171503. [DOI] [PubMed] [Google Scholar]
  • 28.Kassamali R H, Ladak B. The role of robotics in interventional radiology: current status. Quant Imaging Med Surg. 2015;5(03):340–343. doi: 10.3978/j.issn.2223-4292.2015.03.15. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 29.Bricault I, Zemiti N, Jouniaux E. Light puncture robot for CT and MRI interventions: designing a new robotic architecture to perform abdominal and thoracic punctures. IEEE Eng Med Biol Mag. 2008;27(03):42–50. doi: 10.1109/EMB.2007.910262. [DOI] [PubMed] [Google Scholar]
  • 30.Neubach Z, Shoham M. Ultrasound-guided robot for flexible needle steering. IEEE Trans Biomed Eng. 2010;57(04):799–805. doi: 10.1109/TBME.2009.2030169. [DOI] [PubMed] [Google Scholar]
  • 31.Stoianovici D, Cleary K, Patriciu A. AcuBot: a robot for radiological interventions. IEEE Trans Robot Autom. 2003;19(05):927–930. [Google Scholar]
  • 32.Smilowitz N R, Weisz G. Robotic-assisted angioplasty: current status and future possibilities. Curr Cardiol Rep. 2012;14(05):642–646. doi: 10.1007/s11886-012-0300-z. [DOI] [PubMed] [Google Scholar]
  • 33.Riga C V, Bicknell C D, Wallace D, Hamady M, Cheshire N. Robot-assisted antegrade in-situ fenestrated stent grafting. Cardiovasc Intervent Radiol. 2009;32(03):522–524. doi: 10.1007/s00270-008-9459-5. [DOI] [PubMed] [Google Scholar]
  • 34.Riga C V, Cheshire N J, Hamady M S, Bicknell C D. The role of robotic endovascular catheters in fenestrated stent grafting. J Vasc Surg. 2010;51(04):810–819, discussion 819–820. [DOI] [PubMed] [Google Scholar]
  • 35.Hiraki T, Kamegawa T, Matsuno T, Komaki T, Sakurai J, Kanazawa S. Zerobot®: a remote-controlled robot for needle insertion in CT-guided interventional radiology developed at Okayama University. Acta Med Okayama. 2018;72(06):539–546. doi: 10.18926/AMO/56370. [DOI] [PubMed] [Google Scholar]
  • 36.Appelbaum L, Mahgerefteh S Y, Sosna J, Goldberg S N. Image-guided fusion and navigation: applications in tumor ablation. Tech Vasc Interv Radiol. 2013;16(04):287–295. doi: 10.1053/j.tvir.2013.08.011. [DOI] [PubMed] [Google Scholar]
  • 37.Moser C, Becker J, Deli M, Busch M, Boehme M, Groenemeyer D H. A novel Laser Navigation System reduces radiation exposure and improves accuracy and workflow of CT-guided spinal interventions: a prospective, randomized, controlled, clinical trial in comparison to conventional freehand puncture. Eur J Radiol. 2013;82(04):627–632. doi: 10.1016/j.ejrad.2012.10.028. [DOI] [PubMed] [Google Scholar]
  • 38.Ritter M, Rassweiler M C, Häcker A, Michel M S. Laser-guided percutaneous kidney access with the Uro Dyna-CT: first experience of three-dimensional puncture planning with an ex vivo model. World J Urol. 2013;31(05):1147–1151. doi: 10.1007/s00345-012-0847-8. [DOI] [PubMed] [Google Scholar]
  • 39.Mewes A, Hensen B, Wacker F, Hansen C. Touchless interaction with software in interventional radiology and surgery: a systematic literature review. Int J CARS. 2017;12(02):291–305. doi: 10.1007/s11548-016-1480-6. [DOI] [PubMed] [Google Scholar]
  • 40.El-Shallaly G EH, Mohammed B, Muhtaseb M S, Hamouda A H, Nassar A HM. Voice recognition interfaces (VRI) optimize the utilization of theatre staff and time during laparoscopic cholecystectomy. Minim Invasive Ther Allied Technol. 2005;14(06):369–371. doi: 10.1080/13645700500381685. [DOI] [PubMed] [Google Scholar]
  • 41.MedImaging.net . New VR Technology Puts Interventional Radiologists Inside 3D Blood Vessels
  • 42.Chokshi FH , ed. Artificial Intelligence and Precision Education: How AI Can Revolutionize Training in Radiology. Radiological Society of North America's 2019 Scientific Assembly and Annual Meeting; December 1–December 6, 2019; Chicago, IL
  • 43.Lau J W, Yang T, Toe K K, Huang W, Chang S K. Can robots accelerate the learning curve for surgical training? An analysis of residents and medical students. Ann Acad Med Singap. 2018;47(01):29–35. [PubMed] [Google Scholar]
  • 44.Awan O, Dey C, Salts H. Making learning fun: gaming in radiology education. Acad Radiol. 2019;26(08):1127–1136. doi: 10.1016/j.acra.2019.02.020. [DOI] [PubMed] [Google Scholar]
  • 45.Bissonnette V, Mirchi N, Ledwos N, Alsidieri G, Winkler-Schwartz A, Del Maestro R F; Neurosurgical Simulation & Artificial Intelligence Learning Centre. Artificial intelligence distinguishes surgical training levels in a virtual reality spinal task. J Bone Joint Surg Am. 2019;101(23):e127. doi: 10.2106/JBJS.18.01197. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 46.Williamson D M, Xi X, Breyer F J. A framework for evaluation and use of automated scoring. Educ Meas. 2012;31(01):2–13. [Google Scholar]
  • 47.Geis J R, Brady A P, Wu C C. Ethics of artificial intelligence in radiology: summary of the Joint European and North American Multisociety Statement. Radiology. 2019;293(02):436–440. doi: 10.1148/radiol.2019191586. [DOI] [PubMed] [Google Scholar]
  • 48.Geis J R, Brady A P, Wu C C. Ethics of artificial intelligence in radiology: summary of the Joint European and North American Multisociety Statement. J Am Coll Radiol. 2019;16(11):1516–1521. doi: 10.1016/j.jacr.2019.07.028. [DOI] [PubMed] [Google Scholar]
  • 49.Kahn C E. Welcome to the Radiology: AI podcast [Internet]. RSNA; 2020 April 3. Podcast
  • 50.Gichoya J W, Kahn C E. Ethics of AI in Radiology: Summary of the European and North American Multisociety Statement. Chicago, IL: RSNA; 2019 December 1 [Google Scholar]

Articles from Seminars in Interventional Radiology are provided here courtesy of Thieme Medical Publishers
