F&S Reports
Editorial
2023 Aug 9;4(3):239–240. doi: 10.1016/j.xfre.2023.08.004

Artificial intelligence in medicine: it is neither new, nor frightening

Richard J Paulson 1
PMCID: PMC10504549. PMID: 37719090

The sudden interest in artificial intelligence (AI) is nothing short of amazing. A quick search of “Artificial Intelligence” on PubMed on July 25, 2023, revealed 206,353 results; nine days later, on August 3, 2023, that same search revealed 207,281 titles. That is nearly a thousand new titles in nine days, just in journals indexed by PubMed! Computers have been with us for nearly a century, steadily improving their abilities over time. We have been using them to keep electronic medical records (EMRs), communicate with patients, and collect data. Until now, no one expressed any panic that they were about to take over the world. Is it the perception that computers can now speak that casts fear into the hearts of computer users? After all, if the machines can speak, then perhaps they can reason. How soon before they can outwit us?

It is clearly the public release of ChatGPT that has captured the zeitgeist of the 2020s. In our own field of reproductive medicine, AI fever has arrived. A search for “Artificial Intelligence” on fertstert.org returns 262 titles in the Fertility and Sterility family of journals, and AI was selected as the topic of a recent Views and Reviews (1). It must be that, previously, computers were mere tools used for calculating, organizing, and sharing information; when they spoke, it was automated speech, which was not intimidating. Now that the computer can communicate in idiomatic English and write sentences, paragraphs, and entire manuscripts, a significant part of our species is suddenly worried about being surpassed by the technology, possibly even being made irrelevant, cancelled, or eliminated. The fear of AI grows deeper when it is combined with machine learning (ML), of which deep learning is a prominent subset. This combination (AI/ML) further inflames the imagination of our species with visions of AI accumulating new skills and developing superhuman abilities, some of which could be sinister. In my skeptical view, all of this is greatly overstated.

May I suggest that we embrace AI for the benefits it can bring and not fear it as a Pandora’s box (2)? Perhaps the problem lies in calling AI “intelligent.” If “intelligence” is measured by the recollection of data, then the internet indeed possesses unprecedented “intelligence.” But “data regurgitation” is not insight, and it is not intelligence, or Google would have taken over the world some time ago. Those who fear AI point out that it can pass college entrance examinations, intelligence quotient (IQ) tests, and even bar examinations. That is not a statement about AI but rather about the flawed structure of standardized tests. There are many examples of geniuses who failed their college entrance examinations. Rather than trying to regulate AI development, we should change the way we test prospective applicants.

There is great potential for human benefit from the ever-increasing ability of computers to manage data, search databases, calculate probabilities, and recognize patterns. Humans have many weaknesses in these areas. Human observers are terrible at noticing things, remembering things, or even just recording observations in an unbiased manner. Eyewitnesses to crimes remember different versions of events. Humans need double-blind, randomized trials just to avoid bias in data collection. Pattern recognition is also not “intelligence,” but it is valuable and has specific applications in medicine. Every electrocardiogram already comes with a machine interpretation of the various intervals and suggestions for possible diagnoses. The same will be true when we have computers reading x-rays, ultrasounds, and various imaging scans. A recent study of an ML-based early warning system for sepsis demonstrated improved outcomes, including decreased mortality, when antibiotics were started within 3 hours of the machine-generated alert (3). This demonstrates the value of allowing AI to learn to notice patterns in disparate bits of data and then warn human physicians that it may be time to act.

In the field of reproductive medicine, there are myriad potential applications of pattern recognition, ranging from precycle testing to the monitoring of ovarian stimulation cycles to the observation of gametes and embryos (4). Time-lapse imaging of embryos is already an excellent example of AI, and even though its impact on outcomes continues to be debated, there is significant value in being able to observe embryos over time without having to open the incubator. Every piece of information in the laboratory or in the clinic is potentially useful, although sorting through it is tedious. Allowing AI to notice critical laboratory values, the overlooked Rh factor in a patient with a miscarriage, and even more complex patterns of data can be very useful in guiding clinical decisions. Although AI is transparently fallible and dependent on its programming and limited sources of information, physicians already know that clinical decisions are ultimately their responsibility. Perhaps the public needs to be reassured that physicians have no desire to be replaced. More importantly, the practice of medicine is still far too complex for AI and ML, and we should not be shutting down our medical education system.

In my optimistic view, AI will make many things better for us and our patients. There is one cautionary tale: I was optimistic about the implementation of the EMR, which was supposed to decrease errors in transcribing laboratory results, allow us to access patient charts from anywhere in the world, and facilitate the rapid communication of personal health information to patients. Some of the ideals of the EMR have materialized, but its interface has proven to be clumsy, difficult to navigate, and very time-consuming. It is so complicated and nonintuitive that new doctors have to take classes to learn how to manage data entry and retrieval. Instead of looking at patients during their office visits, doctors spend the time staring at yet another screen. This subversion of the patient-doctor relationship actually presents a great opportunity for AI. Instead of clicking our way through the patient chart, verbal commands should produce the latest set of patient laboratory results. Instead of scribes, AI should generate entries into the EMR in real time, without the need for subsequent data entry by the physician. The key will be in the details of how AI is incorporated into medical practice, and this is the caveat. When we allow the AI platforms to be built by the people who built our EMRs, we are at risk of being subjugated not by the AI technology but by the computer interface. Artificial intelligence needs to be intuitive, and ML needs to learn the idiosyncrasies of medical practice. Humans are inherently imperfect, so doctors will surely be “imperfect users” (5). That just means that AI and ML need to be adaptable. Let us get it right this time and let the computer learn how to speak to physicians instead of the other way around.

References

1. Cedars M.I. Artificial intelligence in assisted reproductive technology: how best to optimize this tool of the future. Fertil Steril. 2023;120:1–2. doi: 10.1016/j.fertnstert.2023.05.150.
2. Cooper A., Rodman A. AI and medical education – a 21st-century Pandora's box. N Engl J Med. 2023;389:385–387. doi: 10.1056/NEJMp2304993.
3. Adams R., Henry K.E., Sridharan A., Soleimani H., Zhan A., Rawat N., et al. Prospective, multisite study of patient outcomes after implementation of the TREWS machine learning-based early warning system for sepsis. Nat Med. 2022;28:1455–1460. doi: 10.1038/s41591-022-01894-0.
4. Miloski B. Opportunities for artificial intelligence in healthcare and in vitro fertilization. Fertil Steril. 2023;120:3–7. doi: 10.1016/j.fertnstert.2023.05.006.
5. Kostick-Quenet K.M., Gerke S. AI in the hands of imperfect users. NPJ Digit Med. 2022;5:197. doi: 10.1038/s41746-022-00737-z.
