The Chat Generative Pre‐trained Transformer (ChatGPT) is an artificial intelligence (AI) model developed by OpenAI for generating human‐like text. ChatGPT is powered by the large language model (LLM) GPT‐3.5. Like other LLM‐based AIs, it has been trained on large datasets of text and can generate new text similar to the text it was trained on, a task that requires computer systems to generate, understand and interpret human language. 1, 2, 3 ChatGPT simulates human interaction and is among the most powerful language models currently available. It is built on a deep learning architecture known as the transformer and is trained on massive text datasets obtained from the internet. 4
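For readers who wish to experiment with such a model programmatically rather than through the chat interface, the sketch below shows one way a GPT‐3.5‐class model can be queried via OpenAI's chat API. It is a minimal illustration, assuming the openai Python package (version 1.x) and an API key exported as OPENAI_API_KEY; the prompt, temperature and handling of the reply are illustrative choices, not a clinical tool.

```python
# Minimal sketch of querying a GPT-3.5-class model through OpenAI's chat API.
# Assumes the `openai` Python package (v1.x) is installed and an API key is
# exported as OPENAI_API_KEY; prompt and parameters are illustrative only.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # a GPT-3.5-class chat model
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Explain in two sentences what a transformer model is."},
    ],
    temperature=0.2,  # lower values give more deterministic output
)

print(response.choices[0].message.content)
```

Any use of such calls in a clinical setting would, of course, require validation, governance and human oversight, as discussed below.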
ChatGPT is leading a revolution in AI technology, and its impact on the field of clinical medicine is yet to be determined. Much of clinical practice relies on data analysis, clinical research and guidelines, and AI models may help in clinical decision support, clinical trial recruitment, clinical data management, research support, patient education and other areas. 5, 6 In some cases, AI models can automate tasks currently performed by humans, such as data analysis, image acquisition and interpretation. 7, 8 This may increase efficiency and reduce the workload of healthcare professionals, allowing them to focus on higher‐level tasks that require their expertise and clinical judgement. On the other hand, final decisions must still originate from the human mind, with its ability to process information, weigh context and communicate with patients. 9 Therefore, it is important to use AI models such as ChatGPT as tools that support, rather than replace, healthcare professionals in their decision‐making.
Similarly, AI technology, including ChatGPT, has great potential to assist basic research and accelerate the technological transformation of clinical and translational medicine. For example, in drug discovery, AI's image recognition capabilities can identify, classify and describe chemical formulas or molecular structures, assisting the design of new structures and functional group combinations for compounds. 10 In addition, AI technologies play an important role in disease prediction, diagnosis and the assessment of therapeutic targets, for example by providing treatment guidance for cancer patients based on magnetic resonance imaging radiomics and by predicting ageing‐related diseases. 11, 12, 13 However, unlike AI algorithms or models developed specifically for drug discovery and disease diagnosis, the core value and advantage of ChatGPT lie in its powerful LLM. At present, ChatGPT cannot update its training data in real time, and in some existing medical conversations it can give only general and vague answers. 14 Some experts have posted case‐study conversations with ChatGPT online and found that its diagnoses are often neither comprehensive nor adequate. For example, for patients with common symptoms such as fever, ChatGPT will suggest taking antipyretics to relieve symptoms, but it cannot accurately judge whether the cause is infection, pimples or something else. Therefore, relying blindly on ChatGPT's diagnosis and guidance carries the risk of inaccurate diagnosis or delayed treatment. Furthermore, Fijačko et al. found that ChatGPT, without specific courses or training in medical knowledge, could not pass the American Heart Association life‐support exams, suggesting that it is not yet adequate for life‐support applications in the clinic. 15 This evidence implies that, in its current version, ChatGPT cannot independently handle the complex work of clinical practice.
Therefore, focusing on the application of ChatGPT in human–computer interaction may improve its usability in clinical practice. For example, the diagnosis and treatment of mental illness rely heavily on doctor–patient questionnaires, interviews and judgement, yet interfering factors such as the physician's tone of voice, mood and the surrounding environment can hinder an accurate assessment of the disease. In this field, AI has already been used to record and analyze questionnaire data. 16 The emergence of ChatGPT may accelerate the integration of flexible questionnaires, documentation, diagnosis and follow‐up of patients with mental disorders through chatbots. In addition, ChatGPT provides a basis for more flexible and efficient epidemiological research, which likewise relies on efficient and reliable data collection, recording and analysis. 17 ChatGPT not only eases remote inquiry but also reduces the labour required to complete such work, and automated recording and analysis can be faster and less error‐prone than manual statistics and record‐keeping; a hypothetical sketch of such a workflow is shown below.
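As a concrete but hypothetical illustration of the questionnaire workflow described above, the sketch below asks a GPT‐3.5‐class model to convert a free‐text patient answer into a small set of structured survey fields. The field names, prompt wording and model choice are assumptions made for illustration; any output would still require validation and clinician review before entering a research record.

```python
# Hypothetical sketch: mapping a free-text questionnaire answer onto structured
# survey fields with a GPT-3.5-class model. Field names, prompt wording and the
# model are illustrative assumptions; outputs still need human validation.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def structure_answer(free_text: str) -> dict:
    """Return illustrative survey fields extracted from a patient's answer."""
    prompt = (
        "From the patient answer below, extract the fields "
        "'symptom_onset', 'mood_rating_1_to_10' and 'sleep_hours'. "
        "Reply with JSON only.\n\n" + free_text
    )
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # favour deterministic extraction over creative text
    )
    # The model is not guaranteed to return valid JSON; validate before storing.
    return json.loads(reply.choices[0].message.content)


print(structure_answer(
    "I've slept about five hours a night since last Tuesday "
    "and I'd rate my mood around four out of ten."
))
```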
ChatGPT has, in a short period of time, caused changes and controversies in medical education, training and writing. 18, 19 It has been used to write abstracts, introductions and even the main text of assignments or articles, but it may perform poorly on the creative aspects of medical writing. 20 ChatGPT often summarizes previous research and data to form abstracts or background knowledge, but some issues, including ethical concerns, need further clarification. 21 Some scientific papers have already explicitly listed ChatGPT as an author of articles containing its generated content, whereas Nature has refused to accept ChatGPT as an author because it cannot take responsibility for the content it generates. 22
Furthermore, AI models, including ChatGPT, can help the healthcare industry by providing a more objective and evidence‐based approach to decision‐making and, given their speed of information processing, by reducing the risk of human error. They can also help identify patterns and correlations in vast amounts of data, which may yield new insights and discoveries in medicine. Additionally, AI models can assist in disease detection and prognosis prediction, 23 which may support more personalized treatment recommendations and improve patient outcomes. 24 Further research and development are needed to ensure the effective and responsible use of AI models in healthcare. Nevertheless, we should be aware that ChatGPT is a double‐edged sword, with both powerful features and potential shortcomings. 3 AI has the potential to significantly impact clinical and translational medicine by improving data analysis, 7 streamlining workflow 25 and enhancing decision‐making. 26 However, potential negative impacts such as privacy concerns, 27 bias, discrimination 28 and so forth should not be underestimated.
Overall, ChatGPT has the potential to revolutionize clinical and translational medicine, but appropriate strategies are needed to mitigate potential risks and negative outcomes. Despite our initial lack of preparation for this game‐changing technology, the development of AI is unstoppable. The best course of action is to embrace it, use its capabilities to improve our lives, and foster a mutually beneficial relationship as it evolves within clinical medicine.
Xue VW, Lei P, Cho WC. The potential impact of ChatGPT in clinical and translational medicine. Clin Transl Med. 2023;13:e1216. doi: 10.1002/ctm2.1216
REFERENCES
- 1. Stokel‐Walker C, Van Noorden R. What ChatGPT and generative AI mean for science. Nature. 2023;614(7947):214‐216. doi: 10.1038/d41586-023-00340-6
- 2. Tools such as ChatGPT threaten transparent science; here are our ground rules for their use. Nature. 2023;613(7945):612. doi: 10.1038/d41586-023-00191-1
- 3. Shen Y, Heacock L, Elias J, et al. ChatGPT and other large language models are double‐edged swords. Radiology. 2023;230163. doi: 10.1148/radiol.230163
- 4. The Lancet Digital Health. ChatGPT: friend or foe? Lancet Digit Health. 2023;5(3):E102. doi: 10.1016/S2589-7500(23)00023-7
- 5. Tan XJ, Cheor WL, Lim LL, Ab Rahman KS, Bakrin IH. Artificial intelligence (AI) in breast imaging: a scientometric umbrella review. Diagnostics (Basel). 2022;12(12):3111. doi: 10.3390/diagnostics12123111
- 6. Jayakumar P, Moore MG, Furlough KA, et al. Comparison of an artificial intelligence‐enabled patient decision aid vs educational material on decision quality, shared decision‐making, patient experience, and functional outcomes in adults with knee osteoarthritis: a randomized clinical trial. JAMA Netw Open. 2021;4(2):e2037107. doi: 10.1001/jamanetworkopen.2020.37107
- 7. Hramov AE, Frolov NS, Maksimenko VA, et al. Artificial neural network detects human uncertainty. Chaos. 2018;28(3):033607. doi: 10.1063/1.5002892
- 8. Anastasopoulos C, Yang S, Pradella M, et al. Atri‐U: assisted image analysis in routine cardiovascular magnetic resonance volumetry of the left atrium. J Cardiovasc Magn Reson. 2021;23(1):133. doi: 10.1186/s12968-021-00791-8
- 9. Thorp HH. ChatGPT is fun, but not an author. Science. 2023;379(6630):313. doi: 10.1126/science.adg7879
- 10. Hurben AK, Erber L. Developing role for artificial intelligence in drug discovery in drug design, development, and safety assessment. Chem Res Toxicol. 2022;35(11):1925‐1928. doi: 10.1021/acs.chemrestox.2c00269
- 11. Horvat N, Veeraraghavan H, Nahas CSR, et al. Combined artificial intelligence and radiologist model for predicting rectal cancer treatment response from magnetic resonance imaging: an external validation study. Abdom Radiol (NY). 2022;47(8):2770‐2782. doi: 10.1007/s00261-022-03572-8
- 12. Pun FW, Leung GHD, Leung HW, et al. Hallmarks of aging‐based dual‐purpose disease and age‐associated targets predicted using PandaOmics AI‐powered discovery engine. Aging (Albany NY). 2022;14(6):2475‐2506. doi: 10.18632/aging.203960
- 13. Rao A, Kim J, Kamineni M, Pang M, Lie W, Succi MD. Evaluating ChatGPT as an adjunct for radiologic decision‐making. medRxiv. 2023. doi: 10.1101/2023.02.02.23285399
- 14. Cahan P, Treutlein B. A conversation with ChatGPT on the role of computational systems biology in stem cell research. Stem Cell Reports. 2023;18(1):1‐2. doi: 10.1016/j.stemcr.2022.12.009
- 15. Fijačko N, Gosak L, Štiglic G, Picard CT, John Douma M. Can ChatGPT pass the life support exams without entering the American Heart Association Course? Resuscitation. 2023;185:109732. doi: 10.1016/j.resuscitation.2023.109732
- 16. Allen S. Artificial intelligence and the future of psychiatry. IEEE Pulse. 2020;11(3):2‐6. doi: 10.1109/MPULS.2020.2993657
- 17. Balter O, Balter KA. Demands on web survey tools for epidemiological research. Eur J Epidemiol. 2005;20(2):137‐139. doi: 10.1007/s10654-004-5099-5
- 18. Macdonald C, Adeloye D, Sheikh A, Rudan I. Can ChatGPT draft a research article? An example of population‐level vaccine effectiveness analysis. J Glob Health. 2023;13:01003. doi: 10.7189/jogh.13.01003
- 19. Curtis N. To ChatGPT or not to ChatGPT? The impact of artificial intelligence on academic publishing. Pediatr Infect Dis J. 2023. doi: 10.1097/inf.0000000000003852
- 20. Kitamura FC. ChatGPT is shaping the future of medical writing but still requires human judgment. Radiology. 2023;230171. doi: 10.1148/radiol.230171
- 21. Liebrenz M, Schleifer R, Buadze A, Bhugra D, Smith A. Generating scholarly content with ChatGPT: ethical challenges for medical publishing. Lancet Digit Health. 2023;5(3):E105‐E106. doi: 10.1016/S2589-7500(23)00019-5
- 22. Graham F. Daily briefing: ChatGPT listed as author on research papers. Nature. 2023. doi: 10.1038/d41586-023-00188-w
- 23. Bhatt P, Liu J, Gong Y, Wang J, Guo Y. Emerging artificial intelligence‐empowered mHealth: scoping review. JMIR Mhealth Uhealth. 2022;10(6):e35053. doi: 10.2196/35053
- 24. Kermany DS, Goldbaum M, Cai W, et al. Identifying medical diagnoses and treatable diseases by image‐based deep learning. Cell. 2018;172(5):1122‐1131. doi: 10.1016/j.cell.2018.02.010
- 25. Pieszko K, Hiczkiewicz J, Budzianowski J, et al. Clinical applications of artificial intelligence in cardiology on the verge of the decade. Cardiol J. 2021;28(3):460‐472. doi: 10.5603/CJ.a2020.0093
- 26. Patel V, Khan MN, Shrivastava A, et al. Artificial intelligence applied to gastrointestinal diagnostics: a review. J Pediatr Gastroenterol Nutr. 2020;70(1):4‐11. doi: 10.1097/MPG.0000000000002507
- 27. Reddy S, Allan S, Coghlan S, Cooper P. A governance model for the application of AI in health care. J Am Med Inform Assoc. 2020;27(3):491‐497. doi: 10.1093/jamia/ocz192
- 28. Giovanola B, Tiribelli S. Beyond bias and discrimination: redefining the AI ethics principle of fairness in healthcare machine‐learning algorithms. AI Soc. 2022;1‐15. doi: 10.1007/s00146-022-01455-6