Indian Dermatology Online Journal. 2023 Oct 5;15(1):166–168. doi: 10.4103/idoj.idoj_274_23

ChatGPT: The Good, The Bad, and Everything in Between

Naveen Manohar, Shruthi S Prasad, Gajanan Pise
PMCID: PMC10810371  PMID: 38282996

Ever since ChatGPT (OpenAI, San Francisco, CA, USA) was released in November 2022, physicians, researchers, journalists, lawyers, and teachers across the globe have debated its strengths and weaknesses, its role in research, its qualification for authorship in academic publications, and its medicolegal aspects. ChatGPT is an artificial intelligence (AI) natural language generator that can interact in a human-like manner, deriving its contextual responses from the large datasets on which it has been trained. ChatGPT, and any similar tool developed in the future, is highly unlikely to be a fleeting infatuation with new technology that will soon be forgotten. It is real, it is here, and it is time to overcome technological fear by learning to use it ethically, efficiently, and judiciously.

Learning medicine demands memorizing facts and acquiring the soft skills needed for successful interactions with patients. Medicine cannot be taught via correspondence; it is founded on human interaction and experience. ChatGPT can help with the facts: it can tailor its responses to explain a complex topic to a novice or to discuss advanced topics with an expert, and it has demonstrated medical proficiency comparable to that of a third-year American medical student.[1] Consequently, it can be misused to cheat along this journey by completing assignments without acquiring skills and by drafting elaborate manuscripts without understanding them. Students must remember that ChatGPT cannot help with the soft skills; these are acquired over years of interacting with patients and people. Warmth of tone, body language, compassion, and empathy require years of honing. There is no mathematical equation for the perfect score on these skills because humans are not perfect, and one size does not fit all, especially in healthcare-related conversations. Medical and sociocultural differences are inherent, and attempting to derive a common denominator for all human interactions can have catastrophic consequences, such as systemic racial bias.[2]

The clinical utility of ChatGPT therefore warrants close scrutiny, because humans are not mathematical equations. Our bodies, minds, and overall health are complex, intertwined systems that only make sense from the right clinical perspective, one that includes non-verbal communication and cues. Experience teaches that two people can respond very differently to the same situation; fortunately, physicians get better at distinguishing such variations with time. ChatGPT, however, has spent no such time with people. It has no experience, only factual knowledge. It understands the cues and context of language but not subtext or non-verbal signals. The algorithms underlying ChatGPT therefore apply cleanly only to other machines, not to humans.

Academically, ChatGPT can not only draft manuscripts but also do so well enough to fool experts. Gao et al. reported that approximately one-third of AI-generated abstracts escaped detection by both expert reviewers and an AI output detector.[3] The quality of the literature available online may therefore fall short of the standards required for evidence-based medicine, and journals will need to screen manuscripts for AI-generated content to ensure that they publish authentic work. In research, the principal investigator is accountable for the truthfulness of their work, which is the foundation of ethical research and publication. ChatGPT has demonstrated the capacity to conceive ideas for systematic reviews with an accuracy of approximately 65%, but it carries no accountability.[4] In literature searches, ChatGPT can be very convincing in its presentation of articles with apparently appropriate citations; however, a majority of these references are non-existent.[5] This overconfident presentation of fabricated content is termed “artificial hallucination”; ChatGPT should therefore not be used for literature searches. Journals cannot verify every citation in every manuscript; unchecked use of ChatGPT will result in subpar academic quality and the dissemination of factually incorrect medical information, as well as damage to the authors’ reputations.

Authorship is another debatable topic with ChatGPT. Mimesis, in philosophy and literary criticism, refers to the representation or imitation of truth. Skillful writing is an art and, hence, a form of representation; the content generated by ChatGPT, by contrast, is an imitation of its training dataset. Current publishing standards rigorously discourage plagiarism, the uncredited reproduction of existing material. Therefore, ChatGPT and other “imitators” do not qualify for authorship, and authors must take complete responsibility for their manuscripts if ChatGPT plays any role in drafting them.

ChatGPT and related tools are good educational aids and perhaps even administrative assistants.[6] They can improve the efficiency of mundane tasks, freeing more time for patient care. ChatGPT can assist with discharge summaries and radiological reports, though not without errors;[7] even as an assistant, therefore, it requires human supervision and accountability. Another advantage is that ChatGPT can help non-native English speakers edit manuscripts for language and coherence, expanding the body of evidence by helping such authors publish their findings. Additionally, ChatGPT can translate medical literature from English into other languages, narrowing the divide in digital health information between countries.

Improved efficiency, not the replacement of humans, is the next step in this evolution. We do not need to compete with chatbots. We do, however, need to combat misinformation and the role of such chatbots in spreading it. The COVID-19 pandemic highlighted how easily misinformation can spread into people’s homes. ChatGPT is ultimately controlled by a private entity and the people who created its algorithms. An AI model trained on incorrect or unverified information can make wrong information the loudest voice online, driving poor health choices. Its novelty and entertaining manner may lure patients into diagnosing themselves or, worse, treating themselves and others, with potentially catastrophic consequences. Physicians therefore need to spread awareness regarding the use of such tools for healthcare choices. Appropriate guidelines for publishing medical and healthcare-related knowledge can help streamline the use of these tools before they become the next technological giant that manipulates society with misinformation [Table 1].

Table 1.

Summary of the utility of ChatGPT in its current form

The Good
 - Medical education: simplify complex ideas for different levels and simulate cases for an interactive, immersive learning experience
 - Better patient information: simplify illustrations and medical language
 - Publishing: can help with editing language and summarizing valid articles
 - Administrative assistant: scheduling, generating reports and clinical summaries
Everything in Between
 - Text-based: cannot understand photographs, videos, examination findings
 - Over-reliance: may negatively affect critical thinking and problem-solving abilities
 - Legal and ethical considerations required to define its roles
The Bad
 - Limited context understanding → misinformation or incomplete answers
 - Not real-time information: based on its training data’s cutoff date
 - Potential bias: arises from the training data and instructions

Tools such as ChatGPT can play important roles in building a better tomorrow: not unsupervised critical roles, but important ones nonetheless. This century’s technological advances can be used to improve medical education, training, and healthcare services by empowering patients with medical information, especially in India with its rising digital awareness. Mobile phones, once considered a luxury, have become a necessity and now play major roles in medical education and patient care; similarly, ChatGPT and its successors will become integrated into our daily lives. ChatGPT is currently in its infancy and should be shaped with checkpoints for a brighter scientific and academic future. Every week, newer technologies derived from such models are introduced with additional features, such as statistical analysis of text, videos, and images; these technologies can quicken the pace of scientific advancement. The future of ChatGPT and similar technologies is closely tied to the future of medical training, practice, and publication. Its role in formulating medical content, academic texts, illustrations, tests, and research is almost inevitable; physicians and researchers, however, need to collectively guide its appropriate use.

Financial support and sponsorship

Nil.

Conflicts of interest

There are no conflicts of interest.

References

1. Gilson A, Safranek CW, Huang T, Socrates V, Chi L, Taylor RA, et al. How does ChatGPT perform on the United States Medical Licensing Examination? The implications of large language models for medical education and knowledge assessment. JMIR Med Educ. 2023;9:e45312. doi: 10.2196/45312.
2. Madhusoodanan J. Is a racially-biased algorithm delaying health care for one million Black people? Nature. 2020;588:546–7. doi: 10.1038/d41586-020-03419-6.
3. Gao CA, Howard FM, Markov NS, Dyer EC, Ramesh S, Luo Y, et al. Comparing scientific abstracts generated by ChatGPT to original abstracts using an artificial intelligence output detector, plagiarism detector, and blinded human reviewers. NPJ Digit Med. 2023;6:75. doi: 10.1038/s41746-023-00819-6.
4. Gupta R, Pande P, Herzog I, Weisberger J, Chao J, Chaiyasate K, et al. Application of ChatGPT in cosmetic plastic surgery: Ally or antagonist. Aesthet Surg J. 2023:sjad042. doi: 10.1093/asj/sjad042.
5. Manohar N, Prasad SS. Use of ChatGPT in academic publishing: A rare case of seronegative systemic lupus erythematosus in a patient with HIV infection. Cureus. 2023;15:e34616. doi: 10.7759/cureus.34616.
6. DiGiorgio AM, Ehrenfeld JM. Artificial intelligence in medicine and ChatGPT: De-tether the physician. J Med Syst. 2023;47:32. doi: 10.1007/s10916-023-01926-3.
7. The Lancet Digital Health. ChatGPT: Friend or foe? Lancet Digit Health. 2023;5:e102. doi: 10.1016/S2589-7500(23)00023-7.
