International Brazilian Journal of Urology : Official Journal of the Brazilian Society of Urology
Letter. 2023 Jun 20;49(5):652–656. doi: 10.1590/S1677-5538.IBJU.2023.0112

ChatGPT for medical applications and urological science

Leonardo O Reis 1,2
PMCID: PMC10482461  PMID: 37338818

To the editor,

A Generative Pre-Trained Transformer (GPT) is an artificial intelligence (AI) algorithm designed to understand and generate human-like language. ChatGPT is a free and publicly available large language model developed by OpenAI. It uses advanced natural language processing algorithms, is trained on a vast corpus of data, and is not free of limitations and biases. AI is in its infancy, and the potential that remains to be developed will undoubtedly change our practices and human life. As with any impactful tool, it can be used for good and for harm, and its responsible and moderated use is critical.

What is ChatGPT?

A Generative Pre-Trained Transformer (GPT) is an artificial intelligence (AI) algorithm designed to understand and generate human-like language. ChatGPT is a free and publicly available large language model developed by OpenAI. It uses advanced natural language processing algorithms and is trained on a vast corpus of data (1).

The training data include a wide range of questions and prompts and a diverse set of texts, such as books, articles, and websites, giving the model knowledge across many different domains. As an AI language model, its goal is to assist users and provide useful responses to those who interact with it (2).

A PubMed search on March 3, 2023 using the term “chatGPT” retrieved 5 publications from 2022 and 72 from the first two months of 2023; another search on March 9, 2023 retrieved an additional 22 new documents. The studies range from literature reviews on the topic to clinical case reports that used ChatGPT as a writing tool.
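
For readers who wish to reproduce or update such counts, a minimal sketch using NCBI's public E-utilities API is shown below; it assumes Python with the requests package, and the search term and date window are those described above.

    import requests

    # Query PubMed through the NCBI E-utilities esearch endpoint and read the hit count.
    # The search term and publication-date window mirror the search described above.
    URL = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
    params = {
        "db": "pubmed",
        "term": "chatGPT",
        "datetype": "pdat",      # filter by publication date
        "mindate": "2023/01/01",
        "maxdate": "2023/03/03",
        "retmode": "json",
    }
    response = requests.get(URL, params=params, timeout=30)
    response.raise_for_status()
    count = int(response.json()["esearchresult"]["count"])
    print(f"PubMed records matching 'chatGPT' in the window: {count}")

Counts obtained this way will naturally exceed those reported here as the database continues to grow.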

ChatGPT for medical applications

ChatGPT has several potential applications in the field of medicine and healthcare. As an AI language model, ChatGPT has been trained on a vast amount of medical text data, including research papers, clinical reports, and electronic health records. This means that it has a deep knowledge base on medical topics and can generate text that may be instructive (3).

One potential application of ChatGPT in medicine is clinical decision support. By inputting patient data and symptoms, ChatGPT can generate recommendations for diagnosis and treatment based on the latest medical research and clinical guidelines. As with any tool, when used as a complement to human expertise it can help improve the accuracy and efficiency of medical diagnosis and treatment, with the potential to lead to better patient outcomes (4).
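
As an illustration only, and not a description of the author's workflow, a hedged sketch of how structured patient data might be passed to a large language model for decision support follows. It assumes the OpenAI Python SDK (openai>=1.0), an API key in the environment, an assumed model name, and a fabricated, de-identified vignette; any output is a draft that still requires review by a clinician.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Hypothetical, de-identified patient summary used purely for illustration.
    patient_summary = (
        "62-year-old male, LUTS for 6 months, IPSS 19, PSA 1.8 ng/mL, "
        "prostate volume 45 mL on ultrasound, no prior urological surgery."
    )

    response = client.chat.completions.create(
        model="gpt-4",    # assumed model name; substitute whichever model is available
        temperature=0.2,  # lower temperature favors more conservative wording
        messages=[
            {"role": "system",
             "content": "You are an assistant that drafts differential considerations "
                        "and notes the need for clinician confirmation in every answer."},
            {"role": "user",
             "content": f"Suggest possible diagnoses and next diagnostic steps for: {patient_summary}"},
        ],
    )
    print(response.choices[0].message.content)  # draft output; a clinician must verify it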

Another application of ChatGPT in medicine is natural language processing of electronic health records (EHRs). EHRs contain a vast amount of unstructured text data, and extracting meaningful information from these records can be time-consuming and challenging. ChatGPT can help automate this process by analyzing EHRs and identifying key information, such as patient diagnoses, treatments, and outcomes (5).
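
A minimal sketch of this kind of extraction, under the same SDK assumption and using a fabricated free-text note, might ask the model to return structured JSON that downstream software can parse:

    import json
    from openai import OpenAI

    client = OpenAI()

    # Illustrative, fabricated clinical note (not a real record).
    note = ("Patient admitted with renal colic. CT showed a 6 mm distal ureteral stone. "
            "Managed with tamsulosin and analgesia; stone passed on day 3; discharged well.")

    prompt = (
        "Extract the diagnosis, treatments, and outcome from the note below. "
        "Answer only with JSON using the keys 'diagnosis', 'treatments', 'outcome'.\n\n" + note
    )

    response = client.chat.completions.create(
        model="gpt-4",  # assumed model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )

    # Parsing may fail if the model adds text outside the JSON; outputs should be validated.
    extracted = json.loads(response.choices[0].message.content)
    print(extracted["diagnosis"], extracted["treatments"], extracted["outcome"])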

ChatGPT can also assist with patient education by generating easy-to-understand explanations of medical conditions and treatments. This can be particularly helpful for patients who may have difficulty understanding complex medical terminology or who may feel overwhelmed by the amount of medical information available online (6).
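
A hedged sketch of this use, assuming the OpenAI Python SDK, the third-party textstat package for a readability check, and an example topic chosen for illustration, asks for a plain-language explanation and then verifies that the output is actually easy to read:

    import textstat
    from openai import OpenAI

    client = OpenAI()

    condition = "benign prostatic hyperplasia"  # example topic chosen for illustration

    response = client.chat.completions.create(
        model="gpt-4",  # assumed model name
        temperature=0.3,
        messages=[{
            "role": "user",
            "content": (f"Explain {condition} to a patient with no medical background, "
                        f"in about 150 words, at roughly a 6th-grade reading level."),
        }],
    )
    explanation = response.choices[0].message.content

    # Flesch Reading Ease: higher scores indicate easier text (60-70 is roughly plain English).
    score = textstat.flesch_reading_ease(explanation)
    print(explanation)
    print(f"Flesch Reading Ease: {score:.0f}")

Generated explanations should still be reviewed by a clinician before being given to patients.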

Overall, ChatGPT has several potential applications in medicine and healthcare, and its ability to generate text on a wide range of medical topics makes it a valuable tool for medical professionals, researchers, and patients alike. However, it's important to note that ChatGPT should be used as a complementary tool to human expertise and should not be relied upon as a substitute for professional medical advice or diagnosis.

ChatGPT for medical writing

ChatGPT can be a valuable resource for medical writing in several ways. As an AI language model, ChatGPT has been trained on a vast amount of data, including medical literature, research articles, and clinical reports. This means that it has a vast knowledge base on medical topics and can generate text that is precise (7).

One way ChatGPT can help with medical writing is by providing assistance with grammar and syntax. Medical writing often involves complex terminology and jargon, and ChatGPT can help ensure that the language used in a medical document is grammatically correct and easy to understand (8).
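
One simple way to use the model as a grammar and style checker, sketched below under the same SDK assumption and with an intentionally flawed illustrative draft sentence, is to instruct it to return only the corrected text so the author can compare it against the original:

    from openai import OpenAI

    client = OpenAI()

    # Illustrative draft sentence containing grammatical errors.
    draft = ("The patients whom received the intervention shows a significant reduction "
             "of the symptoms comparing with the control group.")

    response = client.chat.completions.create(
        model="gpt-4",  # assumed model name
        temperature=0,
        messages=[
            {"role": "system",
             "content": "Correct the grammar and syntax of medical prose. Do not change the "
                        "scientific meaning. Return only the corrected text."},
            {"role": "user", "content": draft},
        ],
    )
    print(response.choices[0].message.content)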

ChatGPT can also help with writing medical reports, research papers, and other types of medical documents by providing suggestions for structure, formatting, and organization. For example, it can suggest appropriate headings and subheadings, provide examples of effective introductions and conclusions, and help ensure that the document flows logically and coherently (9).

Furthermore, ChatGPT can help with summarizing complex medical information in a way that is easy to understand for a lay audience. This can be particularly helpful for medical writers who are creating patient education materials or other types of health-related content (10).

Overall, ChatGPT's ability to generate grammatically correct text on a wide range of medical topics can be a valuable resource for medical writers looking to improve the quality and effectiveness of their work.

ChatGPT for urological science

ChatGPT can be a useful resource for urological science in several ways. As an AI language model, ChatGPT has been trained on a vast amount of text data, including scientific research papers, medical textbooks, and other authoritative sources on urological science. This means that it has a deep knowledge base on urological science (11).

One way ChatGPT can help with urological science is by providing assistance with writing research papers and clinical reports. Urological science involves complex medical terminology and jargon, and ChatGPT can help ensure that the language used in a research paper or clinical report is scientifically accurate (12).

ChatGPT can also help urological scientists with analyzing and interpreting data. By inputting data from urological studies or clinical trials, ChatGPT can generate text that provides insights into the findings, significance, and implications of the data. This can be particularly helpful for urological scientists who need to communicate their research findings in a clear and concise way (13).
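
One defensible division of labor, sketched below with fabricated example numbers and the same SDK assumption, is to compute the statistics with conventional tools (here scipy) and ask the model only to draft the wording around results that have already been verified:

    from scipy import stats
    from openai import OpenAI

    # Fabricated example data: symptom scores in two arms of a hypothetical trial.
    treatment = [12, 9, 11, 8, 10, 7, 9, 8]
    control = [15, 14, 13, 16, 12, 15, 14, 13]

    t_stat, p_value = stats.ttest_ind(treatment, control)

    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4",  # assumed model name
        temperature=0.3,
        messages=[{
            "role": "user",
            "content": (f"Draft two sentences for a results section describing a two-sample "
                        f"t-test with t = {t_stat:.2f} and p = {p_value:.4f}, comparing symptom "
                        f"scores between treatment and control groups. Do not overstate causality."),
        }],
    )
    print(response.choices[0].message.content)

Keeping the numerical analysis outside the language model limits the interpretation to results the researcher has already checked.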

Furthermore, ChatGPT can assist with creating patient education materials on urological topics. Urological conditions can be complex and difficult to understand for patients, and ChatGPT can generate text that explains urological conditions and treatments in a clear and understandable way (14, 15). Overall, ChatGPT can be a valuable resource for urological scientists looking to improve the quality and effectiveness of their work.

ChatGPT and references

ChatGPT is an AI language model that generates text based on patterns and statistical models learned from a large corpus of text data. While it can provide accurate and informative information, it does not have the ability to add exact references or citations to its writing.

However, as an AI language model, ChatGPT is trained on a vast amount of text data, including scientific research papers, academic journals, and other authoritative sources. This means that the information it provides is typically based on reliable and credible sources.

It is ultimately up to the user to ensure that the information is properly cited and referenced in any written work. Anyone using ChatGPT to generate content for a research paper, article, or other type of written work should carefully review and fact-check the information it provides and include proper citations and references to any sources used.
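
Part of that fact-checking can be automated: the sketch below, assuming Python with the requests package, checks a citation's DOI against the public Crossref API before it is accepted into a manuscript. The DOI shown is one cited in this letter, used purely to demonstrate the check.

    import requests

    def verify_doi(doi: str) -> None:
        """Look up a DOI in Crossref and print its registered title, or warn if not found."""
        response = requests.get(f"https://api.crossref.org/works/{doi}", timeout=30)
        if response.status_code == 200:
            meta = response.json()["message"]
            title = (meta.get("title") or ["(no title)"])[0]
            journal = (meta.get("container-title") or ["(no journal)"])[0]
            print(f"Found: {title} ({journal})")
        else:
            print(f"WARNING: DOI {doi} not found; the citation may be fabricated.")

    # Example: a DOI cited in this letter.
    verify_doi("10.1016/S2589-7500(23)00019-5")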

In summary, ChatGPT is a powerful tool that can provide information on a wide range of topics, but it is the user's responsibility to ensure that the information is properly cited and referenced in any written work.

Ethics in the use of ChatGPT

As an AI language model, ChatGPT has the potential to be a powerful tool for many different applications. However, there are important ethical considerations that must be taken into account in the use of ChatGPT (16).

First and foremost, the data that is used to train ChatGPT is critical. To ensure that ChatGPT is ethical and unbiased, the data that is used to train the model must be carefully selected and scrutinized. Training data that includes biased or discriminatory language can result in a model that perpetuates those biases and reinforces harmful stereotypes (17).

Another important ethical consideration is the potential misuse of ChatGPT. While ChatGPT can be used to provide helpful information and support to users, it can also be used to spread misinformation, hate speech, and other harmful content. It is important for developers and users of ChatGPT to be aware of this potential and take steps to mitigate it, such as implementing content moderation tools and educating users on responsible use.
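
On the developer side, one concrete mitigation, sketched below assuming the OpenAI Python SDK, is to pass generated or user-submitted text through a moderation endpoint before it is displayed:

    from openai import OpenAI

    client = OpenAI()

    candidate_text = "Example of text generated or submitted by a user."  # placeholder content

    # The moderation endpoint flags categories such as hate, harassment, and self-harm.
    result = client.moderations.create(input=candidate_text)

    if result.results[0].flagged:
        print("Blocked: the text was flagged by the moderation model.")
    else:
        print("Allowed: no moderation categories were flagged.")

Automated moderation is a complement to, not a substitute for, human oversight and user education.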

There are also privacy and security concerns associated with the use of ChatGPT. Conversations with ChatGPT can contain sensitive personal information, and it is important for developers to take steps to protect this data from unauthorized access or use. Overall, ethics in the use of ChatGPT requires careful consideration of the data used to train the model, responsible use of the model to prevent harmful content, and safeguarding user privacy and security (18).

Researchers using AI tools should document this use in the methods or acknowledgements section of the resulting manuscript, and AI tools cannot be credited with authorship, since authorship carries responsibility and accountability for the work (19, 20).

AI is in its infancy, is not free of limitations and biases, and has sufficient potential still to be developed that it will certainly change our practices and human life. As with any impactful tool, it can be used for good and for harm, and its responsible and moderated use is key.

ChatGPT limitations

Researchers are actively working to improve ChatGPT's capabilities and to address limitations that are common to most language models and AI systems in general.

While it is continually being updated with new data and fine-tuned by researchers and developers to improve its performance, the specific level of updating and the frequency of updates can vary depending on the resources and priorities of the team or organization responsible for maintaining ChatGPT. The current version was trained on a dataset that had a knowledge cutoff date of September 2021.

In addition to biases arising from the sources of the data, the people who wrote them, or the orientation of the programmers, algorithmic bias may create systematic and repeatable errors with “unfair” outcomes that diverge from the algorithm's intended behavior.

Artificial intelligence hallucination may generate plausible-sounding but incorrect or nonsensical answers that do not appear to be justified by the training data, such as claiming to be human.

Limited access to human commonsense knowledge, together with an inability to reason, learn, and create as humans do, may cause the model to struggle to understand the full context of a conversation or a piece of text, producing illogical, out-of-context, irrelevant, or inaccurate responses (21).

REFERENCES

1. GPT-4 is OpenAI's most advanced system, producing safer and more useful responses. OpenAI [Internet]. [Accessed 6 March 2023]. Available at: <https://openai.com>.
2. Brown TB, Mann B, Ryder N, Subbiah M, Kaplan J, Dhariwal P, et al. Language models are few-shot learners. Adv Neural Inf Process Syst. 2020;33:1877–1901.
3. Laranjo L, Dunn AG, Tong HL, Kocaballi AB, Chen J, Bashir R, et al. Conversational agents in healthcare: a systematic review. J Am Med Inform Assoc. 2018;25:1248–1258. doi: 10.1093/jamia/ocy072.
4. Xu L, Sanders L, Li K, Chow JCL. Chatbot for Health Care and Oncology Applications Using Artificial Intelligence and Machine Learning: Systematic Review. JMIR Cancer. 2021;7:e27850. doi: 10.2196/27850.
5. Yang LWY, Ng WY, Lei X, Tan SCY, Wang Z, Yan M, et al. Development and testing of a multi-lingual Natural Language Processing-based deep learning system in 10 languages for COVID-19 pandemic crisis: A multi-center study. Front Public Health. 2023;11:1063466. doi: 10.3389/fpubh.2023.1063466.
6. Almalki M, Azeez F. Health Chatbots for Fighting COVID-19: a Scoping Review. Acta Inform Med. 2020;28:241–247. doi: 10.5455/aim.2020.28.241-247.
7. Coiera E, Liu S. Evidence synthesis, digital scribes, and translational challenges for artificial intelligence in healthcare. Cell Rep Med. 2022;3:100860. doi: 10.1016/j.xcrm.2022.100860.
8. van Buchem MM, Boosman H, Bauer MP, Kant IMJ, Cammel SA, Steyerberg EW. The digital scribe in clinical practice: a scoping review and research agenda. NPJ Digit Med. 2021;4:57. doi: 10.1038/s41746-021-00432-5.
9. Perlis N, Finelli A, Lovas M, Berlin A, Papadakos J, Ghai S, et al. Creating patient-centered radiology reports to empower patients undergoing prostate magnetic resonance imaging. Can Urol Assoc J. 2021;15:108–113. doi: 10.5489/cuaj.6585.
10. Hasan MM, Islam MU, Sadeq MJ, Fung WK, Uddin J. Review on the Evaluation and Development of Artificial Intelligence for COVID-19 Containment. Sensors (Basel). 2023;23:527. doi: 10.3390/s23010527.
11. Chen J, Remulla D, Nguyen JH, Dua A, Liu Y, Dasgupta P, et al. Current status of artificial intelligence applications in urology and their potential to influence clinical practice. BJU Int. 2019;124:567–577. doi: 10.1111/bju.14852. Erratum in: BJU Int. 2020;126:647.
12. Eun SJ, Kim J, Kim KH. Applications of artificial intelligence in urological setting: a hopeful path to improved care. J Exerc Rehabil. 2021;17:308–312. doi: 10.12965/jer.2142596.298.
13. Chu TN, Wong EY, Ma R, Yang CH, Dalieh IS, Hung AJ. Exploring the Use of Artificial Intelligence in the Management of Prostate Cancer. Curr Urol Rep. 2023. doi: 10.1007/s11934-023-01149-6. Epub ahead of print.
14. Thenault R, Kaulanjan K, Darde T, Rioux-Leclercq N, Bensalah K, Mermier M, et al. The Application of Artificial Intelligence in Prostate Cancer Management—What Improvements Can Be Expected? A Systematic Review. Applied Sciences. 2020;10:6428.
15. Gabrielson AT, Odisho AY, Canes D. Harnessing Generative Artificial Intelligence to Improve Efficiency Among Urologists: Welcome ChatGPT. J Urol. 2023. doi: 10.1097/JU.0000000000003383. Epub ahead of print.
16. Morley J, Machado CCV, Burr C, Cowls J, Joshi I, Taddeo M, et al. The ethics of AI in health care: A mapping review. Soc Sci Med. 2020;260:113172. doi: 10.1016/j.socscimed.2020.113172.
17. Jobin A, Ienca M, Vayena E. The global landscape of AI ethics guidelines. Nat Mach Intell. 2019;1:389–399. [Internet]. Available at: <https://www.nature.com/articles/s42256-019-0088-2>.
18. Liebrenz M, Schleifer R, Buadze A, Bhugra D, Smith A. Generating scholarly content with ChatGPT: ethical challenges for medical publishing. Lancet Digit Health. 2023;5:e105–e106. doi: 10.1016/S2589-7500(23)00019-5.
19. Stokel-Walker C. ChatGPT listed as author on research papers: many scientists disapprove. Nature. 2023;613:620–621. doi: 10.1038/d41586-023-00107-z.
20. [No authors listed]. Tools such as ChatGPT threaten transparent science; here are our ground rules for their use. Nature. 2023;613:612. doi: 10.1038/d41586-023-00191-1.
21. Obermeyer Z, Topol EJ. Artificial intelligence, bias, and patients’ perspectives. Lancet. 2021;397:2038. doi: 10.1016/S0140-6736(21)01152-1.
