Communications Medicine. 2021 Jun 30;1:8. doi: 10.1038/s43856-021-00003-5

How will artificial intelligence change medical training?

Shinjini Kundu

PMCID: PMC9053201; PMID: 35602202

Artificial intelligence is changing medicine, and it will relieve physicians of the burden of rote knowledge. Here, I discuss how this might affect medical training, drawing on the example of how automation in aviation redefined the role of the pilot.

Subject terms: Preventive medicine, Diagnostic markers


Kundu discusses how artificial intelligence will transform medical practice and doctors’ training. The author explores the changing role of the clinician in the doctor-patient relationship, drawing parallels with the role of the pilot in light of increased automation in aviation.


Technology inevitably shapes human behavior, and artificial intelligence (AI) is on the cusp of transforming medical practice. Continuous advances in monitoring health and disease have made medicine more precise, but they have also left doctors befuddled by mountains of healthcare data and ever-expanding medical knowledge that is becoming increasingly difficult to master and interpret. Medical experts sift through this vast amount of healthcare data to make diagnostic and treatment choices based on the most recent medical knowledge. In AI, these diagnoses and treatment decisions are referred to as data labels. From healthcare data labeled in this way, AI can learn the underlying decision patterns and apply them to future cases, that is, to new data unseen in the training set. AI is capable of generalizing and making decisions in unpredictable circumstances, in contrast to simple automation, which is good at the exact routine tasks for which it was designed but lacks the capacity to make decisions in unforeseen contexts. Since the first medical device using AI was approved by the Food and Drug Administration (FDA) in 2016 [1], a rapid surge in the number of medical devices using AI has ensued; the number of publications using AI, including machine learning, in the life sciences increased >20-fold from 2010 to 2019 [1]. AI can be applied to a wide range of healthcare activities, including treatment recommendations, patient monitoring, adherence checking, and medical record keeping. This is propelling clinical practice toward an age in which information and data are handled by machines, freeing "post-knowledge" physicians to focus on the patient rather than on recall. Now that routine cognitive tasks can be assigned to machines, there is an opportunity to reconsider how medical schools should train doctors.
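To make the labeling idea concrete, the sketch below trains a classifier on synthetic "labeled" encounters and then applies the learned decision pattern to held-out cases it never saw during training. This is a minimal illustration of the supervised-learning workflow described above, not a clinical model; the features, the decision rule, and all of the data are hypothetical.

```python
# Minimal sketch of supervised learning from labeled healthcare data.
# All features, labels, and data are synthetic and hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Each row stands in for one patient encounter; columns for vitals/labs.
X = rng.normal(size=(500, 4))
# The "labels" are the expert's decision (e.g., a diagnosis) for each case,
# here simulated with a simple hidden rule.
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# Hold out cases the model never sees during training.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)  # learn the decision patterns behind the labels

# Generalization: apply the learned patterns to new, unseen cases.
print(f"Accuracy on unseen cases: {model.score(X_test, y_test):.2f}")
```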


In medicine, the need for "machine assistance" has undoubtedly never been greater. First, the doubling time of medical knowledge is merely 73 days today, compared with 50 years in 1950 [2]. Medical students would need to study >29 h every weekday just to keep up with the primary care literature [3]. In fact, this overload of information drives clinical specialization: today, 88% of internal medicine residents go on to specialize, up from 7% in 1951–1960 [4]. Second, as people live longer, they are more likely to develop multiple comorbidities, necessitating a tailored approach to treatment. Lastly, medical data in electronic medical records are growing at an unprecedented rate, in part owing to the adoption of high-resolution imaging modalities, next-generation sequencing, and other technologies, as well as the wider array of tests and medications at clinicians' disposal. No wonder newly arrived interns are seen with white coat pockets overflowing with flashcards, rounding notes, fishbone lab diagrams, and drug-dosing guides; the first year of residency is largely a training in the art of culling data.

One way to deal with this explosion of medical data and information is to use AI to help make sense of it. There is already evidence that AI can assimilate the medical knowledge required for clinical reasoning [5]. In 2019, a company developed an AI system that outperformed doctors on a mock test of clinical reasoning [6]. The FDA's new Digital Health Software Precertification Program is also likely to speed the transition of software as a medical device to the bedside [7]. In addition, tomorrow's doctors may have access to a wider portfolio of assistive devices. Speech recognition is already automating the transcription of clinical interviews. A smart electronic medical record might prompt a doctor to ask specific questions based on symptoms and might even suggest tests and diagnoses. Whereas current prognostic calculators use only 5–10 variables, AI-based calculators could include substantially more, improving accuracy. Given a more accurate assessment of disease, smart tools could then recommend a menu of treatments that accounts for a patient's allergies, current medications, and comorbidities. Dosage guidance could automatically account for patient weight, sex, and drug metabolism and excretion as relevant. AI could also help refine best practices by optimizing the scope and setting of newer treatment modalities, such as cancer immunotherapies. As AI becomes more widely used, knowledge and data may no longer differentiate the skill levels of physicians. As AI becomes part of the team and extends physicians' skill sets, medical schools will need to emphasize a new set of competencies to keep up with how medical practice will evolve in the post-knowledge age.
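As a rough illustration of the contrast between a hand-built score with a handful of variables and an AI-based calculator that ingests many more, the sketch below compares the two on synthetic data. Everything here is hypothetical: the number of variables, the outcome model, and the simulated gain in accuracy.

```python
# Sketch: a conventional prognostic score versus a many-variable model.
# Synthetic data only; no claim about real clinical performance.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_patients, n_features = 1000, 200  # e.g., labs, vitals, history, genomics
X = rng.normal(size=(n_patients, n_features))
signal = X[:, :30] @ rng.normal(size=30)  # outcome driven by 30 variables
y = (signal + rng.normal(size=n_patients) > 0).astype(int)

# Conventional calculator: only the first five variables are available.
acc_few = cross_val_score(LogisticRegression(max_iter=1000), X[:, :5], y).mean()

# AI-based calculator: the full feature set is available.
acc_many = cross_val_score(GradientBoostingClassifier(random_state=0), X, y).mean()

print(f"5-variable score accuracy:   {acc_few:.2f}")
print(f"200-variable model accuracy: {acc_many:.2f}")
```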

In some ways, AI could alter the physician's role much as automation altered the pilot's role several decades ago. Initially, an aircraft was controlled by a stick and rudder, and the pilot's intuition about how best to handle them was essential. Pilots learned the basic psychomotor skills of keeping an aircraft aloft, gauging the effect of multiple forces on the plane by gestalt [8], so-called stick-and-rudder flying. However, as flying grew more complex, with ever more components to control, automation enabled aircraft to adjust parameters such as fuel efficiency continuously and to handle navigation and communication with air traffic controllers in real time through interacting automated systems [8]. Today, being a pilot entails seamlessly switching between stick-and-rudder skills and flight deck management informed by system feedback. An effective mission relies on the interaction between the human and the automated parts, not on either alone. Such automation brought clear advantages: simpler interfaces for humans, standardization, enhanced operational efficiency, and increased safety, given that nearly 80% of accidents were attributable to human error [8]. However, both too little and too much system feedback proved dangerous: too much could confuse the pilot and cause a crash, while too little might deter the pilot from acting on a system error. For example, the crashes of two Boeing 737 MAX aircraft just 4 months apart were triggered by a single malfunctioning sensor. Normally, planes rely on redundant systems to eliminate single points of failure, but the 737 MAX's software relied on one sensor despite the aircraft having two. Pilots struggled to maneuver the plane in the absence of meaningful feedback, resulting in crashes [9]. This example should serve as a warning about the risks of using AI to make healthcare decisions, risks that should be factored in when building such systems.

The application of AI in healthcare comes with new risks. Overreliance on AI may reduce physicians' situational awareness and create a significant risk of being blindsided. Another risk of depending on AI is that, if it ceases to operate or can no longer deliver the required services, there must be a safety valve, which means human experts will still be needed. Furthermore, AI systems add a dimension beyond automation because they continuously learn from data. While AI programs could offer one of the biggest advantages by curating the best information through data-driven practices, they can also make errors at scale. Therefore, AI must always be a feedback-driven system, in which users can flag incorrect AI-driven decisions so that the model learns from its mistakes in subsequent iterations of training. There are also ethical considerations in the use of AI in healthcare. Physicians and patients often make trade-offs when deciding on treatments, for example between quality of life and length of life. As a result, there is no such thing as a one-size-fits-all approach to treatment. AI systems must capture the complexity of scenarios with multiple reasonable choices, and when a medical decision necessitates a trade-off, that decision must still rest with the stakeholders.
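The feedback loop described above can be sketched as follows: the model is deployed, clinicians flag decisions they disagree with, and the flagged cases with corrected labels feed the next round of training. The names, the review function, and the data are hypothetical illustrations of the pattern, not a production design.

```python
# Sketch of a feedback-driven AI system: clinicians flag incorrect
# decisions and the corrections enter the next training iteration.
# All names and data here are hypothetical.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(2)
model = SGDClassifier(loss="log_loss", random_state=0)

# Initial training on historical labeled cases.
X0 = rng.normal(size=(200, 6))
y0 = (X0[:, 0] > 0).astype(int)
model.partial_fit(X0, y0, classes=np.array([0, 1]))

corrections_X, corrections_y = [], []

def review(case, ai_label, clinician_label):
    """Clinician reviews the AI's decision; disagreements are logged."""
    if ai_label != clinician_label:
        corrections_X.append(case)
        corrections_y.append(clinician_label)

# Simulated deployment: every AI decision is checked by a human expert.
for _ in range(50):
    case = rng.normal(size=6)
    ai_label = int(model.predict(case.reshape(1, -1))[0])
    clinician_label = int(case[0] > 0)  # stand-in for expert judgment
    review(case, ai_label, clinician_label)

# The next training iteration incorporates the flagged corrections.
if corrections_X:
    model.partial_fit(np.array(corrections_X), np.array(corrections_y))
    print(f"Retrained on {len(corrections_X)} flagged cases.")
```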

Medical students play a pivotal role as they train with a plethora of new devices. Keeping patients at the center of the mission, doctors-in-training could learn to manage patient data much as a pilot manages the instruments on a flight deck, exploring the impact of multiple influences on patient health, such as social determinants, clinical diagnosis and care, timely decisions, and teamwork with other health professionals [8]. In post-knowledge medicine, the focus of training might shift from biology toward psychology and sociology, with an emphasis on empathy and a deeper understanding of socioeconomic structures. Knowledge-centered, and even therapeutic, interaction has a smaller impact on wellbeing than is commonly assumed: according to some studies, symptom- and knowledge-centered treatments account for just 10–15% of health outcomes [10], while the combination of social determinants makes the biggest difference [11]. Only when physicians are better equipped to consider the various data components of a patient's health and to resolve hidden barriers to health, such as lack of access to medicine, transportation, or adequate nutrition, and undiagnosed complex diseases, will they be able to concentrate on long-term patient wellbeing. This is only possible if a physician can work interactively with AI to develop a treatment plan tailored to a patient's specific needs. Further, medical education must teach students how to scrutinize and crosscheck knowledge and data, how to recognize when they must fall back on stick-and-rudder medical skills, where to find the experts, and when to seek collaboration with other members of the healthcare team to address hidden barriers to health.

Finally, some could argue that, as the physician's role shifts toward that of a supervisor of patient healing, doctors will become less happy in their jobs. In aviation, pilots sometimes became bored and disengaged while serving as flight deck supervisors [8]. "Human-centered automation," which compensates for human operators' shortcomings via visual assistance and alerts [8], provided a solution that effectively engaged pilots and enriched their skill set. Remembering the human at the center is a keystone opportunity to return medicine to its original ethos. Today, intern physicians strike a frenzied balance among administrative work, education, charting, and other activities, leaving as little as 12% of their time for direct patient care [12]. Yet, each year, thousands of newly minted medical students recite the Hippocratic Oath, one of the oldest texts in history. The hallowed bond between the ill and the healing practitioner has remained steadfast, defining the heart of the profession. When physicians give time to the patient–doctor relationship, outcomes improve [13]. If AI can relieve physicians of duties that compete for their time, then post-knowledge medicine will return physicians to the bedside, where the sacred relationship between an ill patient and a compassionate physician began.

Acknowledgements

I thank Richard Steinman, MD, PhD, and Ryan England, MD, for their input. They were not compensated for their contributions.

Competing interests

The author declares no competing interests.

Footnotes

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

1. Benjamens S, Dhunnoo P, Meskó B. The state of artificial intelligence-based FDA-approved medical devices and algorithms: an online database. NPJ Digital Med. 2020;3:1–8. doi: 10.1038/s41746-020-00324-0.
2. Densen P. Challenges and opportunities facing medical education. Trans. Am. Clin. Climatol. Assoc. 2011;122:48.
3. Alper BS, et al. How much effort is needed to keep up with the literature relevant for primary care? J. Med. Libr. Assoc. 2004;92:429.
4. Dalen JE, Ryan KJ, Alpert JS. Where have the generalists gone? They became specialists, then subspecialists. Am. J. Med. 2017;130:766–768. doi: 10.1016/j.amjmed.2017.01.026.
5. Komorowski M, Celi LA, Badawi O, Gordon AC, Faisal AA. The artificial intelligence clinician learns optimal treatment strategies for sepsis in intensive care. Nat. Med. 2018;24:1716–1720. doi: 10.1038/s41591-018-0213-5.
6. CloudMedx Inc. AI outperforms human doctors on a medical exam. medium.com/cloudmedx/ai-outperforms-human-doctors-on-a-us-medical-exam-31b916666b3d (2019).
7. U.S. Food and Drug Administration. Developing a software precertification program: a working model. www.fda.gov/media/119722 (2019).
8. Billings CE. Human-centered aviation automation: principles and guidelines. https://core.ac.uk/download/pdf/42778349.pdf (1996).
9. Gates D. Q&A: What led to Boeing's 737 MAX crisis. The Seattle Times (18 November 2020).
10. McGinnis JM, Williams-Russo P, Knickman JR. The case for more active policy attention to health promotion. Health Aff. 2002;21:78–93. doi: 10.1377/hlthaff.21.2.78.
11. Braveman P, Gottlieb L. The social determinants of health: it's time to consider the causes of the causes. Public Health Rep. 2014;129(Suppl 2):19–31. doi: 10.1177/00333549141291S206.
12. Block L, et al. In the wake of the 2003 and 2011 duty hours regulations, how do internal medicine interns spend their time? J. Gen. Intern. Med. 2013;28:1042–1047. doi: 10.1007/s11606-013-2376-6.
13. Dugdale DC, Epstein R, Pantilat SZ. Time and the patient–physician relationship. J. Gen. Intern. Med. 1999;14(Suppl 1):S34. doi: 10.1046/j.1525-1497.1999.00263.x.
