Letter. Missouri Medicine. 2023 Sep-Oct;120(5):349.

Future of AI in Medicine: New Opportunities & Challenges

Farzana Hoque, Nongnooch Poowanawittayakom
PMCID: PMC10569385  PMID: 37841576

Artificial intelligence (AI) is an ever-advancing domain with immense capacity to revolutionize numerous facets of human life. Advances in AI and natural language processing (NLP) have given rise to innovative tools that can aid in this endeavor. Among them is ChatGPT, developed by OpenAI, Inc. in California, a large language model that generates human-like text based on given prompts. In the conversation between Dr. Hagan and ChatGPT regarding the potential role of AI in medicine, ChatGPT exhibits remarkable intellect and provides a comprehensive discussion of the advantages and drawbacks that AI technology brings to patient care and medical education.1 The dialogue offers an insightful exploration of the promise and the perils that come hand in hand with integrating AI into medicine. ChatGPT explains how AI algorithms can improve patient care and outcomes by quickly and accurately analyzing medical images, helping doctors detect diseases, and supporting the development of appropriate treatment plans while reducing costs.1 Alongside these benefits, however, ChatGPT does not overlook the concerns and dangers associated with the use of AI in medicine. It acknowledges Dr. Hagan's concern about plagiarism facilitated by AI systems, recognizing the potential for individuals to pass off AI-generated content as their own without detection.1

AI's potential to provide unique perspectives through instantaneous retrieval from vast datasets greatly benefits researchers. Writing a research paper is a technical undertaking for students and researchers, demanding meticulous organization of substantial data and proper formatting. Large language models can generate extensive amounts of coherent, grammatically accurate text that closely emulates human writing style and language patterns. As a result, AI and NLP can rapidly produce research paper paragraphs, or even entire sections, that are challenging to distinguish from human-written content. This saves researchers and students valuable time during the writing process and streamlines the peer review procedure. One of the main advantages raised by ChatGPT is the potential for AI to enhance medical education, along with its effectiveness in analyzing vast amounts of data in research.2,3

Despite significant advancements, however, NLP lacks the innate understanding that humans possess.4 AI-based models are not immune to errors and can misinterpret context; in extreme cases, they may even fabricate entirely fictional research, complete with imaginary study subjects and fabricated statistics.2,4 AI-generated plagiarism is a significant issue. To protect intellectual property rights, legal frameworks must adapt to address AI-generated content by establishing regulations for ownership and the attribution of liability.2,5 AI does not meet the criteria for traditional authorship, such as making substantial contributions and being accountable for accuracy and integrity.2,3 Comprehensive discussions about authorship policies that properly acknowledge AI's role without compromising human authors' credibility are urgently needed.3,5 AI algorithms learn from training data, and if that data contains biases, AI can amplify and perpetuate them, skewing research results. Addressing bias in AI is vital and requires meticulous data selection, ongoing monitoring for bias, and the use of fairness-aware algorithms. Because NLP systems lack human comprehension, they can also easily overlook crucial contextual details. To address these challenges, AI detector tools have been developed to distinguish between human-generated and AI-generated text.4 While these tools provide a degree of mitigation, the availability of online paraphrasing tools that can be used to evade detection remains a cause for concern.

AI is a powerful tool that holds enormous potential to enhance patient care, medical education, research, diagnosis, and treatment while reducing costs. To combat AI-related misconduct, it is fundamental to establish robust and transparent mechanisms to detect and address pressing issues such as plagiarism, fabrication, and falsification. Human oversight and ethical guidelines are essential to ensure that AI complements, rather than replaces, medical professionals' expertise and intellectual rigor, and that it is used responsibly for the betterment of healthcare.

References

1. Hagan J, ChatGPT. The Promise & Perils of Artificial Intelligence: A Conversation with ChatGPT. Mo Med. 2023.
2. Dupps WJ Jr. Artificial intelligence and academic publishing. J Cataract Refract Surg. 2023;49(7):655–656. doi:10.1097/j.jcrs.0000000000001223.
3. Liebrenz M, Schleifer R, Buadze A, Bhugra D, Smith A. Generating scholarly content with ChatGPT: ethical challenges for medical publishing. Lancet Digit Health. 2023;5(3):e105–e106. doi:10.1016/S2589-7500(23)00019-5.
4. Narayanaswamy CS. Can We Write a Research Paper Using Artificial Intelligence? J Oral Maxillofac Surg. 2023;81(5):524–526. doi:10.1016/j.joms.2023.01.011.
5. Frye BL, ChatGPT. Should Using an AI Text Generator to Produce Academic Writing Be Plagiarism? Fordham Intell Prop Media & Ent LJ. 33:946.
