Radiology: Artificial Intelligence. 2020 Jul 1;2(4):e190223. doi: 10.1148/ryai.2020190223

AI Hype and Radiology: A Plea for Realism and Accuracy

John Banja
PMCID: PMC8082301  PMID: 33937835

This opinion piece is inspired by the old Danish proverb: “Making predictions is hard, especially about the future” (1). As every reader knows, the momentum of artificial intelligence (AI) and the eventual implementation of deep learning models seem assured. Some pundits have gone considerably further, however, and predicted a sweeping AI takeover of radiology. Although many radiologists support AI and believe it will enable greater efficiency, a recent study of medical students found very different reactions (2). Although the sample size was small, a sizable proportion of respondents dismissed a potential career in radiology, perhaps because of warnings like these:

[It’s] quite obvious that we should stop training radiologists … [who are like] the coyote already over the edge of the cliff who hasn’t yet looked down (3).
 
[A] highly-trained and specialized radiologist may now be in greater danger of being replaced by a machine than his own executive assistant (4).
 
[T]he ultimate threat to radiology—the one that could actually end radiology as a thriving specialty—is machine learning (5).

While such doomsday predictions are understandably attention-grabbing, they are highly unlikely, at least in the short term. More concerning, such remarks may be frankly irresponsible, at least to the extent that they discourage promising medical students from careers in radiology. That radiology will be impacted by AI, especially by its machine and deep learning models, is beyond doubt. But the best-informed opinions suggest that AI might evolve into a radiologist’s “amiable apprentice” rather than an “awful adversary” (6). Here are some reasons.

Research and Development in AI

The recent pace of radiology AI research and development has been breathtaking. Anyone with even a passing acquaintance with the recent literature is keenly aware of the exponentially increasing volume of research trumpeting the successes of various diagnostic and prognostic models. For example, Kim notes that in 2008, radiology-specific AI articles numbered around 100; in 2018, they numbered around 700 (7). In their recent white paper on what radiologists should know about AI, the European Society of Radiology describes the “explosion in studies … for image interpretation that embrace disease detection and classification, organ and lesion segmentation … and assessment of response to treatment” (4).

Global investment in AI research and development has skyrocketed. Since 2010, 154 000 patents have been filed worldwide, with Microsoft filing 700 in 2018 alone (8). In 2013, venture capitalists staked 291 AI-related startups. In 2018, they staked 1028 (9). Between 2018 and 2019, the share of organizations that had incorporated AI technologies grew from 4% to 14%, although the majority of those have been in merchandising, shopping advisories, and automated customer service (10,11). Nevertheless, according to the Worldwide Semiannual Artificial Intelligence Systems Spending Guide, health care operations will witness the fastest growth in AI spending over the 2018–2022 cycle (11).

Image recognition technologies are among the most familiar AI models in health care research. But it is a long way from observing the success of an AI model in a research setting to implementing it in routine clinical practice. And it is a much longer way still to replacing human radiologic expertise.

Why AI Will Augment but Not Replace Radiologic Services Anytime Soon

We are currently in the age of “narrow” AI, with highly specialized applications (12). In single, well-circumscribed tasks, many models perform astonishingly well. Examples include Deep Blue in chess, Watson on Jeopardy!, and the IDx-DR system (IDx Technologies, Coralville, Iowa) for diagnosing diabetic retinopathy, recently approved by the Food and Drug Administration (FDA) (13). But to the extent that AI applications throughout health care remain narrowly focused, wholesale replacement of radiologists would require “models for thousands of potential findings across multiple modalities” (7). For example, well before the FDA’s approval of IDx-DR, ophthalmic commentators were quick to acknowledge that “this algorithm is not a replacement for a comprehensive eye examination, which has many components such as visual acuity, refraction, slitlamp examinations, and eye pressure measurements” (14). That point applies equally well to radiology: Just because a system excels in a very narrowly defined performance domain hardly means that its practitioners face extinction. Indeed, since its Stone Age inception, the essential purpose of technology has been to improve performance outcomes. Historically, such innovation has resulted in improved productivity and more rather than fewer jobs.

And that invites a second, related point. Because they are discretely focused on a unique clinical phenomenon, today’s narrow AI models cannot begin to navigate the breadth of clinical tasks that are part and parcel of routine radiologic care (3). The purveyors of hype seem entirely oblivious to this, likely because they aren’t radiologists and don’t really understand radiology practice. Even if a plethora of models appeared that could detect and evaluate subtle lesions as well as an experienced radiologist, those applications wouldn’t be able to determine whether to image at all, protocol an examination appropriately, teach, or communicate findings to clinicians and patients (3,15). Doomsday forecasters err in believing that radiologists are like medieval scribes, hunched over their scriptorium tables all day long staring at images. Furthermore, interventional radiologists are virtually absent from these AI-takeover conversations.

And this leads to a third, admittedly speculative but provocative point. Wim Naude, a professor of business and entrepreneurship at Maastricht University in the Netherlands, recently claimed that “true” innovation in machine learning is slowing (16). One reason may be that Moore’s law, the 1960s observation that computing power would double every 18–24 months, is coming to an end (17). Thus, creating AI technologies that could assume the broad multitasking of today’s average clinician is presently impossible, because it would require replacing silicon-based transistors with technologies still in an embryonic stage of development (eg, spintronics, gallium nitride, organic biochips, carbon nanotubes) (18).

However, perhaps a better explanation as to why innovation in AI may be slowing is that much of the private sector seems frankly uninterested. Today’s deep learning models appear increasingly focused on merchandising applications that forecast product demand and facilitate sales rather than on humanitarian welfare concerns (11,19). Indeed, there is an ethical parallel here with at least 2 decades of criticism leveled at the pharmaceutical industry, whose “research” has often consisted of tweaking compounds already on the market to extend existing patents (20).

Tellingly, a National Security Commission panel recently chided representatives of Amazon, Google, Microsoft, and Oracle for their failure to aggressively pursue research that would advance the security and national defense interests of the United States (21). Underlining our earlier point on the relative immaturity of AI models to date, the panel noted that AI must transition from “a promising technological novelty to a mature technology.”

Conclusion

Some historical evidence for the observation that “Making predictions is hard, especially about the future” comes from the evolution of gene therapy. In 1996, the “father of gene therapy,” William French Anderson, predicted that “within 20 years … gene therapy will be used regularly to ameliorate—and even cure—many ailments” (22). By 2016, however, few gene therapies had even reached phase III clinical trials. And as of this writing, only a handful have been FDA approved (23).

It’s hard to predict the future, and what immensely complicates predictions over seemingly promising technologies like gene therapy or AI is how their complex construction will interface with other equally complex and dynamic technologies, all of which operate in an environment of unceasing economic and institutional flux (24). It remains anyone’s guess as to how AI applications will be affected by their integration with PACS, how liability trends or regulatory efforts will affect AI, whether reimbursement for AI will justify its use, how mergers and acquisitions will affect AI implementation, and how well AI models will accommodate ethical requirements related to informed consent, privacy, and patient access (25).

What seems ethically imperative at present, though, is a steady and informed rebuttal of AI hype, especially as it is aimed at image-dependent specialties like radiology. Today’s hospitals simply cannot function without radiologists, who are core to their diagnostic functions. To allow a deterioration in the quality of radiology services because of the promulgation of false narratives imperils the public welfare. Rather than being caricatured as on the verge of extinction, radiology might well advance to a new era of excellence, perhaps, as Curtis Langlotz recently put it, “elevating the cognitive universe of radiologists to the top of their license” (6). To us, that is the more likely prediction, and one that the next generation of prospective radiologists needs to hear.

Acknowledgments

The author is grateful to Richard Duszak, MD, and Rolf Dieter Hollstein, MD, for valuable editorial suggestions.

Footnotes

Support for this article was provided by an unrestricted grant by the Advanced Radiology Services Foundation in Grand Rapids, Michigan, for research on understanding the ethics of AI and radiology.

Disclosures of Conflicts of Interest: J.B. Activities related to the present article: author supported by Advanced Radiology Services Foundation grant (unrestricted grant that supports author’s publishing articles on ethics in artificial intelligence especially involving radiology practice). Activities not related to the present article: author occasionally consults for law offices on medical malpractice cases, typically involving physician and hospital defendants; author has consulted for the plaintiffs bar as well as for defendants; institution receives grant from National Center for Advancing Translational Science (covers a modest amount of author’s time to perform ethics consultations relevant to the NCATS grant); author occasionally invited to give presentations on forensic ethics at medical conferences, frequently receives an honorarium for these speaking opportunities; author received royalties from his book Medical Errors and Medical Narcissism from Jones and Bartlett Publishers; author may receive royalties from Johns Hopkins University Press for Patient Safety Ethics in the future. Other relationships: disclosed no relevant relationships.

References

