Journal of Education and Health Promotion. 2025 Jul 31;14:304. doi: 10.4103/jehp.jehp_1699_24

Knowledge, awareness, and attitude of healthcare professionals toward the use of ChatGPT: A cross-sectional study

Henston DSouza 1, Azhar Mohammed 1,, Ahmed A Almuntashiri 2, V V Harish Kumar 3, Munaz Mulla 4, Anas A Khader 5, Sanchari Bhowmick 6, Pushkar Gupta 7
PMCID: PMC12413103  PMID: 40917954

Abstract

BACKGROUND:

ChatGPT is increasingly finding applications in various aspects of life, including the healthcare system. However, its use among healthcare professionals is not yet widespread. The present study aimed to assess the knowledge, awareness, and attitude of healthcare professionals toward the use of ChatGPT.

MATERIALS AND METHODS:

This questionnaire-based cross-sectional study was conducted using a pre-validated survey form, which was circulated among 300 healthcare professionals. The questionnaire items pertained to the respondents’ knowledge of ChatGPT, their use of it, and their opinions regarding its influence on the healthcare profession. SPSS statistical software (version 16.0) was used to analyze the observations. Normality was tested using the Shapiro–Wilk test. Continuous data were reported as mean ± standard deviation (SD) or median (interquartile range), and categorical data as frequencies and/or percentages. Associations between degrees of agreement and demographic characteristics were tested with a two-tailed independent t-test, with significance set at P ≤ 0.05.

RESULTS:

A statistically significant level of knowledge (P = 0.0387) regarding the use of ChatGPT was found among healthcare workers. Eighty-eight percent of healthcare professionals opined that ChatGPT can never replace the conventional style of writing or report generation in healthcare, a finding that was statistically significant (P = 0.021).

CONCLUSION:

According to this study, healthcare professionals have limited knowledge regarding ChatGPT and do not accept the use of this technology in the healthcare field.

Keywords: Artificial intelligence, attitude, awareness, health, knowledge

Introduction

ChatGPT, a chatbot, was introduced to the world in 2022.[1] It can be defined as a conversational, artificial intelligence-based language model developed by OpenAI.[2] ChatGPT uses deep learning techniques to generate human-like responses to language-based inputs.[1] The model is trained on large text datasets and is able to understand as well as generate text on a wide range of topics.[1] ChatGPT is now used in various applications such as customer service, content creation, and language translation, and it is routinely used to generate text content, presentations, and source code on different topics.[1,2] ChatGPT is a natural language processing system, a by-product of artificial intelligence, that is now being used increasingly.[2,3]

GPT-3 is a 175-billion-parameter, decoder-based transformer model trained on a diverse dataset comprising around 500 billion tokens.[4] The training data was sourced from texts obtained from various unspecified websites, books, and Wikipedia. The model is capable of few-shot learning, which allows it to perform a variety of tasks from only a handful of examples or minimal guidance. ChatGPT is an instruction-tuned extension of this model, designed to interact and respond in a conversational manner.[5,6]
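
As an illustration of the few-shot behavior described above, the following sketch sends a minimal few-shot prompt through OpenAI's Python client; the classification task, example comments, and model name are illustrative assumptions rather than material from the present study.

```python
# Minimal sketch of few-shot prompting, assuming access to the OpenAI Python client.
# The task, examples, and model name are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

few_shot_prompt = (
    "Classify the sentiment of each patient comment as Positive or Negative.\n"
    "Comment: 'The staff explained everything clearly.' -> Positive\n"
    "Comment: 'I waited three hours and no one updated me.' -> Negative\n"
    "Comment: 'The new appointment system saved me a lot of time.' ->"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat-capable model; this name is an assumption
    messages=[{"role": "user", "content": few_shot_prompt}],
)
print(response.choices[0].message.content)  # expected to complete with "Positive"
```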

GPT-4, an upgraded version of ChatGPT, was released in 2023. It processes text as well as images and has significantly outperformed existing large language models, including ChatGPT.[7] The GPT-4 literature in healthcare shows a pattern similar to that of ChatGPT, consisting largely of short comments, reviews, and evaluations.[8] GPT-4 has performed at or above the level of state-of-the-art radiology models on several radiology tasks (for example, summarization, disease classification, and entity extraction from radiological reports).[9] GPT-4 has passed the USMLE with distinction, in contrast to the borderline “pass” performance of ChatGPT on a similar USMLE examination.[10] GPT-4 has also been shown to outperform Med-PaLM, a model tuned specifically for medical knowledge. However, GPT-4 has certain disadvantages: physicians with no training in radiology readily adopt interpretations of reports generated by GPT-4, which may be false. Greater linguistic coherence is one reason for preferring artificial intelligence-generated impressions over reports written by trained radiologists.[11] Moreover, the software has failed to report anatomical landmarks or pathological conditions from radiographic images.[12]

Abdelhafiz et al. in 2024 observed that 67% of researchers possessed knowledge regarding ChatGPT; however, only 11.5% were using the technology for research purposes. ChatGPT was used mainly for rephrasing paragraphs and locating references. More than one-third of researchers listed ChatGPT as one of the contributing authors in scientific papers.[13]

Another study conducted among healthcare professionals reported that 18.4% had used the ChatGPT tool, whereas most nonusers showed interest in using artificial intelligence-based chatbots.[14,15] Temsah et al. in 2023 observed that 75.1% of healthcare workers had no issues with incorporating ChatGPT into day-to-day practice.[15]

Hence, considering the growing influence of ChatGPT and similar applications in healthcare, this study was conducted to assess the knowledge, awareness, and attitude of healthcare professionals toward the use of ChatGPT.

Materials and Methods

Study design and setting

This cross-sectional study was conducted by e-mailing a questionnaire predesigned on Google Forms that assessed the knowledge, awareness, and attitude of healthcare personnel toward the use of ChatGPT in day-to-day clinical practice. Informed consent was obtained from all individuals, after which the survey forms were mailed to them.

Inclusion criteria were: i) healthcare professionals in active practice; ii) practitioners comfortable using a smartphone; iii) practitioners who consented to the study; iv) completely filled questionnaire forms; and v) individuals with knowledge of chatbots or artificial intelligence. Exclusion criteria were: i) non-healthcare professionals; ii) professionals not accustomed to internet use; iii) those who did not consent to participate in the study; and iv) those who did not fill the questionnaire forms completely.

Study participants and sampling

The studied population comprised 180 healthcare professionals from the medical, dental, nursing, and physiotherapy streams. The study was conducted pan-India across various educational institutions. A total of 180 individuals returned completely filled survey forms; twenty individuals were excluded because they did not respond or returned incomplete forms. OpenEpi software was used to calculate the sample size. The minimum sample size was calculated as 300 at a 95% confidence level with a 7% margin of error and a 5% attrition rate.[5] The effect of attrition on the sample size was determined using closed-form approximate sample-size formulas. A preliminary study was conducted to assess the validity and reliability of the questionnaire.
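
The closed-form approximate sample-size calculation mentioned above can be sketched as follows. The expected response proportion p is not reported in the paper, so it is set to 0.5 here purely for illustration; under that assumption the formula will not necessarily reproduce the reported minimum of 300.

```python
# Sketch of a closed-form sample-size calculation for a cross-sectional survey.
# Assumption: expected proportion p = 0.5, since the paper does not report it.
import math

def minimum_sample_size(z=1.96, p=0.5, margin_of_error=0.07, attrition=0.05):
    """Closed-form approximation: n = z^2 * p * (1 - p) / d^2, inflated for attrition."""
    n = (z ** 2) * p * (1 - p) / (margin_of_error ** 2)
    return math.ceil(n / (1 - attrition))  # compensate for expected dropouts

print(minimum_sample_size())  # ~207 under these illustrative assumptions
```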

Data collection tool and technique

The pre-validated questionnaire was in survey format and comprised nine items in a multiple-choice format. The questionnaire items pertained to the respondents’ knowledge of ChatGPT, their use of it, and their opinions regarding its influence on the healthcare profession. Responses were recorded either as Yes/No or as multiple-choice selections. The Google form was designed so that all questions had to be answered, thus eliminating missing information and nonresponse errors. The components of the questionnaire were divided into knowledge, awareness, and attitude concerning the use of ChatGPT as an adjunctive tool in patient healthcare. A “yes” response was coded as “1,” and a “no” response as “2.” A “high score” was defined as more than 60% affirmative responses, while a lower percentage (<60%) was considered a “lower score.”
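
A minimal sketch of the scoring rule just described ("yes" coded 1, "no" coded 2, more than 60% affirmative answers classed as a "high score"); the helper function and the sample answer list are illustrative assumptions.

```python
# Sketch of the questionnaire scoring rule: "yes" coded 1, "no" coded 2,
# a respondent with >60% affirmative answers classed as "high score".
CODE = {"yes": 1, "no": 2}

def score_respondent(answers):
    """answers: list of 'yes'/'no' strings for the nine questionnaire items."""
    coded = [CODE[a.lower()] for a in answers]
    affirmative_share = coded.count(1) / len(coded)
    category = "high score" if affirmative_share > 0.60 else "lower score"
    return coded, affirmative_share, category

answers = ["yes", "no", "yes", "yes", "no", "yes", "yes", "no", "yes"]
print(score_respondent(answers))  # 6/9 ≈ 0.67 affirmative -> 'high score'
```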

Demographic data tool

[Demographic data tool: graphic JEHP-14-304-g001, not reproduced here]

This tool was used to record the demographic data of the participating subjects. The aspects covered included occupation, level of education, and active internet and smartphone use.

After corrections, the questionnaire was pretested, and the collected data were used to assess internal consistency reliability using Cronbach’s alpha. The results demonstrated acceptable internal consistency (Cronbach’s α = 0.81).
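
For reference, Cronbach's alpha, used above to establish internal consistency (α = 0.81), can be computed from a respondents-by-items matrix of coded answers as sketched below; the small response matrix is illustrative and not drawn from the study data.

```python
# Sketch of Cronbach's alpha from a respondents x items matrix of coded answers.
# The data below are illustrative, not the study's actual responses.
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """item_scores: array of shape (n_respondents, n_items)."""
    k = item_scores.shape[1]
    item_variances = item_scores.var(axis=0, ddof=1)
    total_variance = item_scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

responses = np.array([
    [1, 1, 2, 1, 1],
    [2, 2, 2, 2, 1],
    [1, 1, 1, 1, 1],
    [2, 1, 2, 2, 2],
])
print(round(cronbach_alpha(responses), 2))  # ≈ 0.83 for this toy matrix
```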

Statistical analysis

Linear regression models were used to assess the effectiveness of ChatGPT and to identify any association between the study variables. A t-test was employed for collinearity analysis.

Analyses were performed using SPSS statistical software (version 16.0) for Windows (USA). The normality of the collected data was tested using the Shapiro–Wilk test. Continuous data were reported as mean ± standard deviation (SD) or median (interquartile range), whereas categorical data were reported as frequencies and/or percentages. Associations between degrees of agreement and demographic characteristics were tested using a two-tailed independent t-test, with significance set at P ≤ 0.05.
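
A minimal sketch of the analysis steps described in this subsection, using pandas, scipy, and statsmodels in place of SPSS; the data frame, column names, and grouping variable are assumptions for illustration only.

```python
# Sketch of the reported analysis steps (the study used SPSS; this uses Python
# libraries for illustration). The data and column names are assumed.
import pandas as pd
from scipy import stats
import statsmodels.api as sm

# Illustrative data: agreement scores and a binary demographic grouping variable.
df = pd.DataFrame({
    "agreement_score": [3, 5, 4, 6, 2, 7, 5, 4, 6, 3],
    "postgraduate":    [0, 1, 0, 1, 0, 1, 1, 0, 1, 0],
})

# 1. Normality of the continuous variable (Shapiro-Wilk).
w_stat, p_normal = stats.shapiro(df["agreement_score"])

# 2. Descriptives: mean ± SD if roughly normal, otherwise median (IQR).
if p_normal > 0.05:
    summary = f"{df['agreement_score'].mean():.1f} ± {df['agreement_score'].std():.1f}"
else:
    q1, q3 = df["agreement_score"].quantile([0.25, 0.75])
    summary = f"{df['agreement_score'].median():.1f} (IQR {q1:.1f}-{q3:.1f})"

# 3. Two-tailed independent t-test of agreement score by demographic group (alpha = 0.05).
grp0 = df.loc[df["postgraduate"] == 0, "agreement_score"]
grp1 = df.loc[df["postgraduate"] == 1, "agreement_score"]
t_stat, p_value = stats.ttest_ind(grp0, grp1)

# 4. Simple linear regression between the study variables (ordinary least squares).
model = sm.OLS(df["agreement_score"], sm.add_constant(df["postgraduate"])).fit()

print(summary, round(p_normal, 3), round(p_value, 3), round(model.params.iloc[1], 2))
```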

Ethical consideration

Institutional ethical clearance was obtained from the concerned research committee and review board (MMC67-23/IEC23). In accordance with the Declaration of Helsinki, informed consent was obtained from the study participants, and the objectives of the study were explained to them.

Results

On application of the linear regression model, consistent findings were observed, with a significant correlation between the studied parameters. The median age of the study participants was 35 years, with an age range of 29 to 42 years. Regarding gender distribution, 57% of study participants were female and 43% were male; this difference was not statistically significant (P = 0.293). Fifty-six percent were medical professionals, 23% were dental practitioners, 11% were nursing professionals, and 20% were practicing physiotherapists; this distribution was not statistically significant (P = 0.412). Sixty-nine percent held undergraduate degrees, while the remaining 31% were postgraduate doctors or practitioners; this difference was statistically significant (P = 0.032) [Table 1].

Table 1.

Table exhibiting demographic characteristics of the studied population

Demographic characteristic                      Percentage (%)    P
Gender distribution
  Males                                         43                0.293
  Females                                       57
Distribution of healthcare professionals
  Medical professionals                         56                0.412
  Dentists                                      23
  Nursing professionals                         11
  Physiotherapists                              20
Level of education
  Postgraduate doctors                          31                0.032
  Undergraduate doctors                         69

On assessing knowledge regarding the use of ChatGPT among healthcare professionals, it was observed that 64.6% had knowledge of ChatGPT, and 14.1% had previously used ChatGPT outside medical services (e.g., for customer services). Seven percent of healthcare professionals had used ChatGPT for writing or reporting purposes: 8% had used it for proofreading, 2.1% for searching references, 3.2% for rephrasing paragraphs or reports, and 1% for rephrasing an article [Table 2 and Graph 1]. A statistically significant level of knowledge (P = 0.0387) regarding the use of ChatGPT was found among healthcare workers.

Table 2.

Demonstrating knowledge among healthcare professionals regarding the use of ChatGPT

Items pertaining to knowledge                                                                Affirmative responses (%)    n
Have you got any knowledge regarding ChatGPT?                                                64.6                         116
Have you used ChatGPT or any other chatbot for any purpose other than medical services
  (for example, customer services)?                                                          14.1                         25
Have you used ChatGPT for writing or reporting?                                              7.0                          114
For which of the following purposes have you used ChatGPT?
  Proofreading                                                                               8.0                          14
  Reference search                                                                           2.1                          4
  Paragraph/report rephrasing                                                                3.2                          6
  Rephrasing any article                                                                     1.0

Graph 1. Responses (percentages) for items in the questionnaire assessing knowledge regarding the use of ChatGPT among healthcare practitioners [graph not reproduced]

Awareness and attitude: 34% of the studied healthcare professionals were aware of the use of ChatGPT in healthcare. However, 84% were against the use of ChatGPT in the health system and preferred the conventional practice of writing or reporting. Eighty-eight percent were of the opinion that ChatGPT can never replace the conventional style of writing or report generation in healthcare, which was found to be statistically significant (P = 0.021) [Table 3 and Graph 2].

Table 3.

Showing responses of awareness and attitude toward the use of ChatGPT among healthcare professionals

Questionnaire item                                                                 Positive responses (%)    n
Are you aware about the use of ChatGPT in healthcare?                              34                        61
Do you recommend the use of ChatGPT in healthcare?
  Yes                                                                              16                        29
  No                                                                               84                        151
Can ChatGPT take the place of conventional reporting or writing in healthcare?
  Yes                                                                              12                        22
  No                                                                               88                        158

Graph 2. Responses regarding awareness and attitude toward ChatGPT among healthcare-associated individuals [graph not reproduced]

Discussion

The present study aimed to assess the knowledge, awareness, and attitude of healthcare professionals toward the use of ChatGPT. A statistically significant level of knowledge regarding the use of ChatGPT was found among healthcare workers. However, most healthcare workers were against the use of ChatGPT in the health system and preferred the conventional practice of writing or reporting; most were of the opinion that ChatGPT can never replace the conventional style of writing or report generation in healthcare.

In contrast to our study findings, Syed et al. reported that 76.6% of respondents felt that ChatGPT could help them professionally.[16] However, in support of our findings, Abdelhafiz et al. observed that only 11.5% of healthcare service providers used ChatGPT for paragraph rephrasing, reference searches, data analytics, and writing academic papers.[13]

Similarly, Temsah et al.,[15] in their research on healthcare professionals, observed that ChatGPT was considered useful in various activities pertaining to health professionals, including diagnosis, providing support to patients and their families, evaluation of scientific literature (supported by 48.5%), and assistance in research work (65.9% of subjects). Additionally, 76.7% of healthcare professionals agreed that ChatGPT would have beneficial effects on healthcare systems.

Mheidly[17] explored the ability of ChatGPT to disseminate health information on breast cancer and found it to be a promising medium for this purpose and an important tool for raising awareness and improving public health knowledge of the disease.

Roy et al.[18] assessed the capability of ChatGPT in solving AETCOM (attitude, ethics, and communication) case scenarios used for CBME (competency-based medical education) in India and concluded that the current version of ChatGPT showed moderate capability in this task.

Sallam[19] concluded that ChatGPT plays a significant role in health research by providing a platform for data collection, analysis, and interpretation. ChatGPT could be used to build anonymized databases by collecting data, without identifying information, from patients, healthcare providers, and researchers on various health-related topics, such as medication adherence and disease prevalence. These data can be used to identify trends, patterns, and potential risk factors for various health conditions, inform research studies, and improve healthcare outcomes.

However, one disadvantage and limitation of ChatGPT, or any other chatbot application, is its potential impact on cognitive abilities.[20] As users increasingly rely on these tools for information retrieval, problem-solving, and even communication, the need for certain mental processes, such as critical thinking, memory recall, and information synthesis, may be reduced. Other major drawbacks of ChatGPT use in health-related academics and research work include concerns around ethical and moral values, intellectual property rights, lack of innovation, erroneous output, underuse of the available knowledge base, and citation errors.[21] The present study differs from similar studies in that it encompasses all categories of healthcare practitioners and is broadly indicative of the future impact of ChatGPT on healthcare.

A chatbot is computerized software that simulates and analyzes human spoken and/or written communication; this is possible after training on large text corpora.[22] Using these computerized applications, individuals can converse with a machine much as they would with another person.[22] Chatbots are conversation-based tools created to help individuals with various tasks; examples of voice-based chatbots include Google Assistant, Alexa, and Siri.[22]

ChatGPT and its counterparts are trained and designed to follow human instructions and, in turn, provide responsive assistance. ChatGPT models have been used to generate phrases, paragraphs, and complete papers that are coherent and consistent in human language. The basic strength of these models is their capacity for pretraining on large volumes of textual data and fine-tuning for particular tasks such as text categorization or question answering.[22]

Limitation

The major limitation of the study was the small sample size within each professional group, which could be addressed by increasing the overall sample size. The strength of the present study is the enhanced level of knowledge regarding the use of ChatGPT among the studied healthcare professionals.

Conclusion

ChatGPT and other chatbot applications are increasingly being used in various aspects of life, including healthcare services and academic and research work. ChatGPT, a language-based model, has demonstrated various applications in medicine and science, such as identification of research areas, assistance in clinical diagnosis, and interpretation of laboratory investigation results. ChatGPT also provides updates on novel developments in healthcare. Although a significant correlation was found between the studied parameters for the assessment of knowledge among healthcare professionals, an unwillingness to incorporate the technology into day-to-day practice was observed. Additional questionnaires on the subject should be included in future work to assess the knowledge, attitude, and perception of the study groups toward ChatGPT.

All healthcare organizations should lay down guidelines for identifying chatbot applications in patient reporting and investigations, so that deficiencies are not overlooked by professionals who may be easily impressed by the polished presentation, and so that findings are confirmed against factual, human-verified information.

Conflicts of interest

There are no conflicts of interest.

Acknowledgment

The authors would like to thank all the study participants who agreed to participate in the study.

Funding Statement

Nil.

References

  1. Li J, Dada A, Puladi B, Kleesiek J, Egger J. ChatGPT in healthcare: A taxonomy and systematic review. Comput Methods Programs Biomed. 2024;245:108013–25. doi: 10.1016/j.cmpb.2024.108013.
  2. Alberts IL, Mercolli L, Pyka T, Prenosil G, Shi K, Rominger A, et al. Large language models (LLM) and ChatGPT: What will the impact on nuclear medicine be? Eur J Nucl Med Mol Imaging. 2023;23:1–4. doi: 10.1007/s00259-023-06172-w.
  3. Baker A, Perov Y, Middleton K, Baxter J, Mullarkey D, Sangar D, et al. A comparison of artificial intelligence and human doctors for the purpose of triage and diagnosis. Front Artif Intell. 2020;3:543405. doi: 10.3389/frai.2020.543405.
  4. Patel SB, Lam K. ChatGPT: The future of discharge summaries? Lancet Digit Health. 2023;5:e107–8. doi: 10.1016/S2589-7500(23)00021-3.
  5. Lecler A, Duron L, Soyer P. Revolutionizing radiology with GPT-based models: Current applications, future possibilities and limitations of ChatGPT. Diagn Interv Imaging. 2023;104:269–74. doi: 10.1016/j.diii.2023.02.003.
  6. Luo S, Deng L, Chen Y, Zhou W, Canavese F, Li L. Revolutionizing pediatric orthopedics: GPT-4, a groundbreaking innovation or just a fleeting trend? Int J Surg. 2023;109:3694–7. doi: 10.1097/JS9.0000000000000610.
  7. Aljindan FK, Shawosh MH, Altamimi L, Arif S, Mortada H. Utilization of ChatGPT-4 in plastic and reconstructive surgery: A narrative review. Plast Reconstr Surg Glob Open. 2023;11:e5305–8. doi: 10.1097/GOX.0000000000005305.
  8. Shea YF, Lee CMY, Ip WCT, Luk DWA, Wong SSW. Use of GPT-4 to analyze medical records of patients with extensive investigations and delayed diagnosis. JAMA Netw Open. 2023;6:e2325000. doi: 10.1001/jamanetworkopen.2023.25000.
  9. Liu S, Wright AP, Patterson BL, Wanderer JP, Turer RW, Nelson SD, et al. Assessing the value of ChatGPT for clinical decision support optimization. medRxiv [Preprint]. 2023. doi: 10.1101/2023.02.21.23286254.
  10. Karkra R, Jain R, Shivaswamy RP. Recurrent strokes in a patient with metastatic lung cancer. Cureus. 2023;15:35–9. doi: 10.7759/cureus.34699.
  11. Sun Z, Ong H, Kennedy P, Tang L, Chen S, Elias J, et al. Evaluating GPT-4 on impressions generation in radiology reports. Radiology. 2023;307:e231259–65. doi: 10.1148/radiol.231259.
  12. Brin D, Sorin V, Barash Y, Konen E, Glicksberg BS, Nadkarni GN, et al. Assessing GPT-4 multimodal performance in radiological image analysis. medRxiv [Preprint]. 2023;2:2023–34. doi: 10.1007/s00330-024-11035-5.
  13. Abdelhafiz AS, Ali A, Maaly AM. Knowledge, perceptions and attitude of researchers towards using ChatGPT in research. J Med Syst. 2024;48:26–34. doi: 10.1007/s10916-024-02044-4.
  14. Dave T, Athaluri SA, Singh S. ChatGPT in medicine: An overview of its applications, advantages, limitations, future prospects, and ethical considerations. Front Artif Intell. 2023;6:1169595–100. doi: 10.3389/frai.2023.1169595.
  15. Temsah MH, Aljamaan F, Malki KH. ChatGPT and the future of digital health: A study on healthcare workers’ perceptions and expectations. Healthcare. 2023;13:1812–6. doi: 10.3390/healthcare11131812.
  16. Syed W, Bashatah A, Alharbi K, Bakarman SS, Asiri S, Algahtani N. Awareness and perceptions of ChatGPT among academics and research professionals in Riyadh, Saudi Arabia: Implications for responsible AI use. Med Sci Monit. 2024;30:e944993. doi: 10.12659/MSM.944993.
  17. Mheidly N. Unleashing the power of AI: Assessing the reliability of ChatGPT in disseminating breast cancer awareness. J Educ Health Promot. 2024;13:172. doi: 10.4103/jehp.jehp_1033_23.
  18. Roy AD, Das D, Mondal H. Efficacy of ChatGPT in solving attitude, ethics, and communication case scenario used for competency-based medical education in India: A case study. J Educ Health Promot. 2024;13:22. doi: 10.4103/jehp.jehp_625_23.
  19. Sallam M. ChatGPT utility in healthcare education, research, and practice: Systematic review on the promising perspectives and valid concerns. Healthcare. 2023;11:887–93. doi: 10.3390/healthcare11060887.
  20. Bodani N, Lal A, Maqsood A. Knowledge, attitude, and practices of general population toward utilizing ChatGPT: A cross-sectional study. SAGE Open. 2023;13:21582440231211079–87. doi: 10.1177/21582440231211079.
  21. Tustumi F, Andreollo NA, Aguilar-Nascimento JE. Future of the language models in healthcare: The role of ChatGPT. Arq Bras Cir Dig. 2023;36:e1727. doi: 10.1590/0102-672020230002e1727.
  22. Ali MJ, Djalilian A. Readership awareness series – paper 4: Chatbots and ChatGPT: Ethical considerations in scientific publications. Semin Ophthalmol. 2023;38:403–4. doi: 10.1080/08820538.2023.2193444.
