Abstract
Introduction
The extent of artificial intelligence (AI) engagement and factors influencing its use among medical and allied health students in low-resource settings are not well documented. We assessed the knowledge and correlates of ChatGPT use among medical, dental, and allied health students in Nigeria.
Methods
We used a cross-sectional mixed-methods study design and self-administered structured questionnaires, followed by in-depth interviews with a sub-sample (n = 20) of students. We employed logistic regression models to generate adjusted odds ratios, and thematic analysis to identify key factors.
Results
Of the 420 respondents, 77.4% (n = 325) demonstrated moderate to good knowledge of ChatGPT. Most respondents (61.9%, n = 260) reported prior ChatGPT use in medical education, motivated mainly by ease of use (75.0%) and efficiency (72.1%). Major concerns included risk of dependency (65.0%), inaccuracy (49.7%), doubts about reliability (49.3%), and ethical issues (41.7%). ChatGPT use was more likely among male students (adjusted odds ratio (aOR) = 1.62, 95% confidence interval (95%CI) 1.13–3.72), older cohorts (≥ 25 years) (aOR = 1.74, 95%CI 1.16–4.50), final-year students (aOR = 2.46, 95%CI 1.12–5.67), those with good knowledge (aOR = 3.27, 95%CI 1.59–7.36), and those with positive attitudes (aOR = 4.29, 95%CI 1.92–8.56). Qualitative themes reinforced concerns about errors, ethics, and infrastructure limitations.
Conclusion
We found moderate knowledge and engagement with ChatGPT among medical and allied health students in Nigeria. Engagement was influenced by gender, age, year of study, knowledge, and attitude. Targeted education and guidelines for responsible AI use will be important in shaping the future of medical and health professional education in similar settings.
Keywords: ChatGPT, Artificial intelligence, Utilization, Factors, Medical education
Introduction
In recent years, remarkable strides have been made in generative artificial intelligence (AI) and natural language processing (NLP) technologies with potential applications across various sectors, including professional healthcare education. Among these advancements is ChatGPT, a prominent example of generative pre-trained transformers (GPTs), an AI-powered conversational chatbot introduced in 2022 [1]. As a representative of this class of AI technologies, ChatGPT can engage in real-time conversations and offer valuable information on a broad range of topics, with the potential to revolutionize medical and allied health education [2].
While traditional medical education relies on textbooks, lectures, and clinical experiences, the integration of generative AI technologies, exemplified by ChatGPT, offers a transformative opportunity to enhance healthcare professional training [3]. This shift represents a critical frontier in preparing future healthcare professionals for the complexities of the evolving healthcare environment [4]. ChatGPT’s applications in medical education include generating educational content, facilitating personalized learning, automating assignment scoring, providing research assistance, and offering quick access to medical information. Additionally, it holds potential in clinical management, aiding decision support, documentation, and patient communication [5].
Despite the anticipated benefits of efficiency, cost savings, and time optimization of generative AI models in healthcare, medical education, and research, it is equally crucial to consider the potential drawbacks [6–8]. Medical and allied health students must possess the capacity to assess the accuracy of AI-generated medical information and generate credible, validated information for themselves, patients, and the public [9]. They also need to remain mindful of ethical responsibilities while applying this rapidly evolving technology to clinical care, medical education, and research [10, 11]. It is also important to note that ChatGPT’s dataset is limited, with the original version trained on data only up to 2021 and the more recent GPT-4 version extending its dataset to 2023, which may impact the recency and comprehensiveness of the information it provides [12].
Globally, medical students show a generally optimistic but cautious attitude toward AI applications like ChatGPT [13]. In well-resourced settings, about 75% of students appreciate ChatGPT for its potential to enhance education and practice by aiding in understanding complex medical concepts, generating educational materials, and supporting research [14]. Despite reported improvements in academic performance and clinical decision-making, concerns persist regarding data security, ethical implications, and maintaining a balance between human expertise and AI assistance [15]. In low- and middle-income countries (LMICs), limited studies indicate enthusiasm among medical and allied health students for integrating ChatGPT into their learning and future practice [2, 16]. For example, approximately 70% of health students in Jordan expressed willingness to leverage its potential for bridging knowledge gaps and facilitating skill development [17]. However, challenges such as infrastructural limitations, affordability, and unequal internet access remain prevalent [2, 18]. It is noteworthy that ethical considerations and responsible AI deployment are as pertinent in LMICs as in industrialized countries [19, 20].
Previous studies identified technology knowledge, access, curriculum integration, perceived usefulness, ease of use, peer influence, academic performance, and ethical considerations as factors influencing students’ engagement with AI tools [21, 22]. However, the influence of these factors on the knowledge and use of generative AI, such as ChatGPT, in LMICs such as Nigeria remains unclear. Given the rapid evolution of AI technologies in healthcare education and the lack of guidelines for the use of generative AI, including ChatGPT, in most health training institutions, it is important to investigate the extent of engagement and the factors shaping students’ ChatGPT use in such settings. The findings could empower educational institutions and policymakers to effectively integrate AI into curricula, maximizing benefits for students’ learning experiences while minimizing potential drawbacks.
This study determined the knowledge, attitudes, and correlates of ChatGPT use among medical, dental, and allied health students at Bayero University, Kano, Nigeria.
Methods
Study Setting and Population
The research was conducted at Bayero University’s College of Health Sciences in October 2023. Students were drawn from the faculties of clinical sciences (medical students), dentistry, and allied health. The total student enrolment in each of the faculties was as follows: 496 in clinical sciences (medical students), 248 in dentistry, and 645 in allied health. While most of the students hail from Kano and neighboring states, there was representation from all Nigerian states, and a smaller contingent of international students. The study population included all clinical medical, dental, and allied health students at Bayero University, Kano. Students from non-clinical faculties were excluded, as were those on sick leave, temporarily absent for any reason, or who withheld consent. At the time of the study, no guideline had yet been developed for the use of generative AI, including ChatGPT, at the study institution.
Study Design
The study was cross-sectional in design and utilized a mixed-methods approach for data collection. The study began with a self-administered survey questionnaire distributed to students and retrieved immediately after completion. This was followed by in-depth interviews with a subset of survey respondents (n = 20) to provide nuanced insights into survey responses concerning ChatGPT use. The survey questionnaire was distributed in hard copy to consenting students during a lecture session. Participation was entirely voluntary, and no names or other identifying information were collected, ensuring anonymity. In-depth interview participants were purposively sampled from among the survey respondents to ensure diversity across faculties, gender, year of study, and previous use of ChatGPT.
Sample Size Determination
We determined the sample size using Fisher’s formula [23], assuming a ChatGPT use prevalence of 50%, a 95% confidence level, and 5% precision. The resulting minimum sample size (384) was increased by 10% to account for anticipated non-response and then rounded up to 450.
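Under these assumptions, the calculation can be sketched as follows (a minimal illustration; the function name and structure are ours, not part of the study):

```python
def cochran_n(p=0.50, z=1.96, d=0.05):
    """Minimum sample size for estimating a single proportion
    (Fisher/Cochran formula): n = z^2 * p * (1 - p) / d^2."""
    return (z ** 2 * p * (1 - p)) / d ** 2

n_min = cochran_n()           # 384.16, reported as a minimum of 384
n_adjusted = n_min * 1.10     # +10% for non-response, ~422.6, rounded up to 450
```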
Sampling
We employed a two-stage sampling method to improve the representativeness of the sample across faculty and year of study, as these factors could influence the adoption of AI technologies such as ChatGPT. In the first stage, students were stratified by faculty into clinical sciences (medical students), dentistry, and allied health. Using the current total number of students in each faculty, sample sizes were allocated proportionate to faculty student population, yielding 161, 80, and 209 students from the faculties of clinical sciences, dentistry, and allied health, respectively. In the second stage, we determined eligibility and employed systematic sampling, computing a sampling interval for each stratum from the student register. The first respondent in each stratum was randomly selected between serial number 1 and the sampling interval. Subsequent respondents were identified by adding the respective sampling interval to the previous respondent’s serial number, continuing until the allocated sample size was achieved. Students whose serial numbers came up in this process were invited to complete hard-copy questionnaires after providing informed consent.
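The proportional allocation and systematic selection described above can be sketched as follows (illustrative only; the enrolment figures are those given under Study Setting, and the function name is ours):

```python
import random

# Faculty enrolments from the study setting
enrolment = {"clinical sciences": 496, "dentistry": 248, "allied health": 645}
target = 450  # overall sample size

total = sum(enrolment.values())  # 1389 students in all
# Stage 1: proportional allocation across strata
alloc = {f: round(target * n / total) for f, n in enrolment.items()}
# yields 161 (clinical sciences), 80 (dentistry), 209 (allied health)

# Stage 2: systematic sampling within each stratum
def systematic_serials(register_size, sample_size, rng=random):
    """Random start within [1, interval], then fixed interval steps."""
    interval = register_size // sample_size       # sampling interval
    start = rng.randint(1, interval)              # first respondent's serial
    return [start + i * interval for i in range(sample_size)]
```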
Data Collection
Data were collected using a pre-tested, mostly close-ended, self-administered questionnaire adapted from previous studies [24–26]. The questionnaire collected information on the socio-demographic characteristics of respondents. The questionnaire also captured information about respondents' awareness of ChatGPT, knowledge and attitude toward ChatGPT, and utilization of the ChatGPT application in their studies and clinical education. We also assessed the factors influencing students’ decision to use or not use the ChatGPT application and the extent to which ethical challenges, including data privacy, transparency, informed consent, and plagiarism, were considered when using ChatGPT.
Data Analysis
Data were analyzed using SPSS software version 22 (IBM Corp., Armonk, NY). Following data cleaning, continuous variables were summarized using means with standard deviation (SD) or median with interquartile range. Categorical data were presented as frequencies and percentages. For the categorization of knowledge, correct responses to knowledge questions were assigned a score of 1, while incorrect or “don’t know” responses were scored as 0. Total knowledge scores were converted to percentages and categorized as good (80–100%), moderate (60–79%), and poor (< 60%) based on Bloom’s cutoff points [27]. Attitudes were evaluated using a 5-point Likert-type scale, with negative statements scored in the reverse direction. Total and mean scores were then computed. Respondents with scores greater than the mean were considered to have positive attitudes, while those scoring at or below the mean were categorized as having negative attitudes. The outcomes (knowledge, attitude, and ever use) were compared between students in the three faculties.
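The scoring and categorization rules just described can be expressed as a small sketch (function names are ours; thresholds follow Bloom’s cutoffs and the mean-split rule stated above):

```python
def knowledge_category(correct_items, total_items):
    """Bloom's cutoffs: good 80-100%, moderate 60-79%, poor < 60%."""
    pct = 100 * correct_items / total_items
    if pct >= 80:
        return "good"
    elif pct >= 60:
        return "moderate"
    return "poor"

def likert_item_score(response, negative_statement=False):
    """5-point Likert item; negative statements are reverse-scored."""
    return 6 - response if negative_statement else response

def attitude_category(total_score, mean_score):
    """Scores above the sample mean -> positive; at or below -> negative."""
    return "positive" if total_score > mean_score else "negative"
```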
At the bivariate level, Pearson’s chi-square test was employed to compare frequencies. Fisher’s exact test was used when more than 20% of the cells had expected frequencies of less than 5. Variables with p-values < 0.10 at the bivariate level and those deemed contextually important were included in the multivariate logistic regression model [28]. Adjusted odds ratios (aOR) with 95% confidence intervals (CI) were calculated using a stepwise approach. Model fit was assessed with the Hosmer–Lemeshow statistic and omnibus tests, with a Hosmer–Lemeshow chi-squared p-value greater than 0.05 indicating a good fit [29].
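As an illustration of the crude (unadjusted) estimates, an odds ratio and Wald 95% CI can be computed directly from a 2 × 2 table; applied to the sex-by-use counts later reported in Table 4, this reproduces the crude OR of 2.32 (1.55–3.49). The function name is ours and this is a sketch, not the SPSS procedure used in the study:

```python
import math

def crude_or_ci(a, b, c, d, z=1.96):
    """Crude odds ratio with Wald 95% CI from a 2x2 table.
    a: exposed users, b: exposed non-users,
    c: unexposed users, d: unexposed non-users."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Sex vs ever-use counts: 179/257 males and 81/163 females used ChatGPT
or_, lo, hi = crude_or_ci(179, 257 - 179, 81, 163 - 81)
# or_ ~ 2.32, 95% CI ~ 1.55-3.49, matching the reported crude estimate
```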
Qualitative Data Analysis
We recorded and transcribed qualitative interviews verbatim, followed by a detailed thematic analysis using the “Framework Approach.” Initially, we familiarized ourselves with the data through repeated reading, annotating key ideas, and noting initial impressions. Open coding was then applied systematically to the transcripts. Codes were assigned to specific sections of the text based on patterns and key ideas. We used both a priori codes based on the study objectives and emergent codes from the data. For instance, the codes “AI knowledge gap” and “lack of practical training” were initially generated based on interview responses. Similar codes were grouped into broader categories and themes were developed by identifying recurring patterns across the dataset. These themes were reviewed and validated by revisiting the data to ensure that they accurately reflected participant responses. Themes were applied systematically to all transcripts, and the data were organized in a matrix. Finally, the qualitative findings were triangulated with quantitative results.
Ethical Considerations
The ethical clearance for the study was obtained from the Bayero University Health Research Ethics Committee, reference number BUK/HREC/319. Detailed study information was provided to the students in hard copy format prior to obtaining informed consent. Students were informed that participation was entirely voluntary, and anonymity was assured as no identifying information was collected.
Results
Out of the 450 students invited to participate, 420 (93.3%) completed the questionnaires, with response rates exceeding 93% across all three faculties. Of the respondents, 38.8% (n = 163) were female, and the overall mean age (± SD) was 23.3 ± 3.45 years. The participants were drawn from the faculties of clinical sciences (medical students) (35.7%, n = 150), dentistry (17.9%, n = 75), and allied health sciences (46.4%, n = 195), and spanned the first to sixth years of study (Table 1).
Table 1.
Socio-demographic characteristics of medical and healthcare students, Kano, Nigeria (N = 420)
| Characteristics | Frequency No. (%) |
|---|---|
| Sex | |
| Male | 257 (61.2) |
| Female | 163 (38.8) |
| Age group | |
| < 20 | 76 (18.1) |
| 20–24 | 182 (43.3) |
| ≥ 25 | 162 (38.6) |
| Faculty/course of study | |
| Clinical sciences (medical students) | 150 (35.7) |
| Dentistry | 75 (17.9) |
| Allied health sciences | 195 (46.4) |
| Year of study | |
| Year 1 | 53 (12.6) |
| Year 2 | 78 (18.6) |
| Year 3 | 75 (17.9) |
| Year 4 | 79 (18.8) |
| Year 5 | 87 (20.7) |
| Year 6 | 48 (11.4) |
Awareness and Perceived Capabilities of ChatGPT
All respondents (100.0%, n = 420) reported being aware of ChatGPT, primarily through friends/peers (54.1%) and social media (38.8%). The majority agreed that ChatGPT comprehends and generates text responses in a human-like manner. Further, 63.3% of respondents indicated that ChatGPT could offer medical advice and differential diagnoses, with approximately half (49.3%) stating that it could answer medical-related questions. Respondents also acknowledged ChatGPT’s role in literature review (60.5%), drafting of research reports (51.0%), and production of patient information material (41.2%). Most respondents expressed the opinion that ChatGPT could enhance learning experiences (76.0%) and improve clinical care efficiency (55.7%). However, over half of the respondents voiced concerns about the risk of potential errors or misdiagnoses (56.2%). About one-fourth of respondents (23.6%) expressed the concern that ChatGPT could replace healthcare professionals in performing certain clinical tasks (Table 2).
Table 2.
Knowledge, attitude, and utilization of ChatGPT among medical and allied health students, Kano, Nigeria (N = 420)
| | Frequency n (%) |
|---|---|
| Awareness about ChatGPT | |
| Heard of ChatGPT | 420 (100.0) |
| Sources of information about ChatGPT | |
| Friends/peers | 227 (54.1) |
| Social media | 163 (38.8) |
| Internet/online forum/blogs | 19 (4.5) |
| University/College of Health Sciences | 11 (2.6) |
| Knowledge and perceived capabilities of ChatGPT and AI applications | |
| Understands and generates human-like text responses | 372 (88.6) |
| Can provide medical advice and diagnoses | 266 (63.3) |
| Can provide answers to medical questions | 207 (49.3) |
| Can assist with literature review | 254 (60.5) |
| Can generate patient education materials | 173 (41.2) |
| Can aid in drafting research reports | 214 (51.0) |
| ChatGPT can enhance learning experiences | 319 (76.0) |
| ChatGPT can enhance communication between healthcare professionals and patients | 134 (31.9) |
| ChatGPT can aid in medical information retrieval | 138 (32.9) |
| ChatGPT could improve the efficiency of clinical care processes | 234 (55.7) |
| Overreliance on ChatGPT might lead to errors or misdiagnoses | 236 (56.2) |
| ChatGPT could replace human medical professionals in certain clinical tasks | 99 (23.6) |
| Overall knowledge of ChatGPT | |
| Good | 65 (15.5) |
| Moderate | 260 (61.9) |
| Poor | 95 (22.6) |
| Willingness to use ChatGPT in clinical care | |
| To obtain quick information about medical conditions or treatment options | 100 (23.8) |
| To apply it to assist with patient education in layman’s terms | 133 (31.7) |
| To deploy ChatGPT for generating differential diagnoses | 168 (40.0) |
| Overall attitude toward ChatGPT and AI-powered applications | |
| Positive | 293 (69.8) |
| Negative | 59 (14.1) |
| Neutral | 68 (16.2) |
| Previous use of ChatGPT | |
| Ever used ChatGPT for academic purposes | 260 (61.9) |
| Regular use of ChatGPT | 156 (37.1) |
| Tasks for which ChatGPT was used | |
| -Generating study notes | 57 (13.6) |
| -Writing assignments | 53 (12.6) |
| -Answering academic queries | 35 (8.3) |
| -Research assistance | 25 (6.0) |
| -Examination preparation | 24 (5.7) |
| Satisfaction with ChatGPT | |
| Satisfied with accuracy and usefulness of ChatGPT so far | 237 (56.4) |
| Will recommend ChatGPT to friends and colleagues | 219 (52.1) |
Knowledge and Attitude Toward ChatGPT
Overall, 15.5% (n = 65), 61.9% (n = 260), and 22.6% (n = 95) of the respondents demonstrated good, moderate, and poor knowledge of ChatGPT, respectively. Knowledge did not differ by faculty of study (p > 0.05) (Fig. 1). Similarly, 69.8% (n = 293) of respondents displayed a positive attitude toward ChatGPT, while 14.1% (n = 59) and 16.2% (n = 68) held negative and neutral attitudes, respectively. Respondents’ attitude was similar across faculties (p > 0.05) (Fig. 2). Concerning application to clinical care, less than one-half of the respondents (40.0%) would deploy ChatGPT to generate differential diagnoses. Approximately one-quarter (23.8%) would use ChatGPT to retrieve medical information and treatment options, while nearly one-third (31.7%) would use ChatGPT to assist with patient education (Table 2).
Fig. 1.
Knowledge of ChatGPT among medical and allied health students, Kano, Nigeria, 2023
Fig. 2.
Attitude toward ChatGPT among medical and allied health students, Kano, Nigeria, 2023
Previous ChatGPT Use
The majority of respondents (61.9%, n = 260) had previously used ChatGPT for academic purposes, with 37.1% (n = 156) considering themselves regular users. More than one in ten respondents had used ChatGPT to generate study notes (13.6%) and write assignments (12.6%). Previous ChatGPT use was similar among students from the three faculties (p > 0.05) (Fig. 3).
Fig. 3.
Ever use of ChatGPT among medical and allied health students, Kano, Nigeria, 2023
Motivations and Concerns About ChatGPT Adoption in Medical Education
The top three factors motivating respondents to utilize ChatGPT in medical and health-related education were ease of use (75.0%), efficiency/time saving (72.1%), and peer recommendation (49.8%). Other motivating factors included the accuracy of responses (34.0%) and trust in AI technology (33.8%) (Table 3).
Table 3.
Motivations and concerns about ChatGPT in medical education, Kano, Nigeria (N = 420)
| | Frequency n (%) |
|---|---|
| Motivations for ChatGPT use | |
| Ease of use | 315 (75.0) |
| Efficiency/Time saving | 302 (72.1) |
| Peer recommendation | 209 (49.8) |
| Accuracy of response | 143 (34.0) |
| Trust in AI technology | 142 (33.8) |
| Concerns about ChatGPT and AI applications | |
| Dependency on ChatGPT affecting critical thinking and clinical reasoning | 273 (65.0) |
| Inaccurate and misleading medical information | 209 (49.7) |
| Reliability of the information generated | 207 (49.3) |
| Ethical concerns about fabricated references | 175 (41.7) |
| ChatGPT’s potential to replace traditional medical learning resources | 142 (33.8) |
| Data privacy and security when interacting with ChatGPT | 109 (26.0) |
| Considers it important to address ChatGPT-related ethical concerns | 175 (41.7) |
Respondents expressed concerns about deploying ChatGPT and related AI in medical and health-related fields, with the foremost worry being the risk of dependency eroding critical thinking and clinical reasoning skills (65.0%). Additional concerns included the potential for inaccurate or misleading information (49.7%), the reliability of the generated information (49.3%), ethical issues related to fabricated or inaccurately acknowledged sources (41.7%), the replacement of traditional learning resources (33.8%), and data privacy and security (26.0%). A substantial proportion of respondents (41.7%) emphasized the importance of addressing ethical issues associated with the use of ChatGPT.
Predictors of ChatGPT Use
At the bivariate level, utilization of ChatGPT was associated with respondents’ sex, age group, year of study, knowledge, and attitude (p < 0.05). The same factors remained significant predictors of ChatGPT use at the multivariate level. Specifically, male students had 62% higher odds of using ChatGPT than females (adjusted odds ratio (aOR) = 1.62, 95% confidence interval (95%CI) 1.13–3.72). Similarly, students in the oldest age cohort (≥ 25 years) had 74% higher odds of ChatGPT use than the youngest cohort (< 20 years) (aOR = 1.74, 95%CI 1.16–4.50). Final-year (year 6) students had more than twofold higher odds of ChatGPT use than first-year students (aOR = 2.46, 95%CI 1.12–5.67). Students with good knowledge of ChatGPT were more than three times as likely to have used the chatbot (aOR = 3.27, 95%CI 1.59–7.36). Finally, students with a positive attitude toward ChatGPT had more than fourfold higher odds of use (aOR = 4.29, 95%CI 1.92–8.56) (Table 4).
Table 4.
Logistic regression model for predictors of ChatGPT utilization among medical and allied health students, Kano, Nigeria (n = 420)
| Characteristics | N | Proportion of students who have used ChatGPT No. (%) | p-value | Crude OR (95%CI) | Adjusted OR (95%CI) | p-value |
|---|---|---|---|---|---|---|
| Sex | | | < 0.001* | | | |
| Male | 257 | 179 (69.7) | | 2.32 (1.55–3.49) | 1.62 (1.13–3.72) | 0.024* |
| Female | 163 | 81 (49.7) | | Referent | Referent | |
| Age group | | | 0.042* | | | |
| < 20 | 76 | 38 (50.0) | | Referent | Referent | |
| 20–24 | 182 | 115 (63.2) | | 1.72 (1.13–2.95) | 1.21 (1.11–4.53) | 0.031* |
| ≥ 25 | 162 | 107 (66.1) | | 1.95 (1.12–3.39) | 1.74 (1.16–4.50) | 0.029* |
| Faculty/course of study | | | 0.79 | | | |
| Clinical sciences (medical students) | 150 | 91 (60.7) | | – | – | |
| Dentistry | 75 | 45 (60.0) | | – | – | |
| Allied health sciences | 195 | 124 (63.6) | | – | – | |
| Year of study | | | 0.041* | | | |
| Year 1 | 53 | 23 (43.4) | | Referent | Referent | |
| Year 2 | 78 | 47 (60.3) | | 1.98 (1.17–4.01) | 1.71 (1.16–4.22) | 0.037* |
| Year 3 | 75 | 46 (61.3) | | 2.07 (1.03–4.23) | 1.96 (1.14–4.19) | 0.016* |
| Year 4 | 79 | 54 (68.4) | | 2.81 (1.37–5.79) | 2.37 (1.24–5.38) | 0.023* |
| Year 5 | 87 | 57 (65.5) | | 2.48 (1.23–4.99) | 2.15 (1.21–4.28) | 0.036* |
| Year 6 | 48 | 33 (68.8) | | 2.87 (1.27–6.50) | 2.46 (1.12–5.67) | 0.015* |
| Knowledge of ChatGPT | | | < 0.001* | | | |
| Good | 65 | 53 (81.5) | | 3.81 (1.81–8.03) | 3.27 (1.59–7.36) | 0.011* |
| Moderate | 260 | 156 (60.0) | | 2.94 (1.50–5.78) | 2.43 (1.32–4.67) | 0.027* |
| Poor | 95 | 51 (53.7) | | Referent | Referent | |
| Attitude toward ChatGPT | | | < 0.001* | | | |
| Positive | 293 | 257 (87.7) | | 7.2 (2.33–9.26) | 4.29 (1.92–8.56) | 0.018* |
| Negative | 127 | 3 (2.36) | | Referent | Referent | |
*Significant at p < 0.05. OR odds ratio, CI confidence interval
Hosmer–Lemeshow chi-square = 11.7, p = 0.12
The logistic model includes the following variables: respondent’s sex, age group, year of study, knowledge of ChatGPT, and attitude toward ChatGPT
Qualitative Findings
In exploring the attitudes of medical and allied health students toward the integration of ChatGPT into medical education, a diverse array of responses emerged, capturing positive outlooks, ambivalence, concerns, and reservations. Participants reflected on usage scenarios, challenges, and potential risks associated with artificial intelligence, particularly ChatGPT, in the context of medical education, clinical instruction, and research.
Positive Attitudes
Most participants displayed a positive stance, viewing ChatGPT as a tool capable of enriching the learning experience in the medical field. Participants lauded its role in simplifying complex medical concepts and making information more accessible in a timely manner, emphasizing its potential to expedite learning and research processes. One participant expressed it thus:
"ChatGPT enables easy understanding of complex medical and health topics. It makes learning efficient by swiftly providing information. It provides answers very fast and saves time." (23-year-old female)
Despite recognizing its utility, some participants adopted a nuanced perspective, acknowledging its limitations and the need for a balanced approach.
"I am 100% in support. Because I have used it and have seen what it can do and even though it has limitations, someone has to use his brains also... It is very useful, but you cannot rely on it 100%." (26-year-old male)
Concerns and Reservations
Contrary to the prevailing positivity, concerns were raised about the potential impact of ChatGPT on learning habits. Some participants expressed apprehensions about the tool fostering laziness and hindering critical thinking:
"Actually, it (ChatGPT) is not a bad idea, but what I think is that it will make some students lazy... The main danger is in making students less critical in exploring the different sources of knowledge." (25-year-old female)
Moreover, concerns were voiced about ChatGPT’s lack of specificity in responding to some medical terms and its potential to provide misleading information or fabricated references.
"Yes, there is concern, ChatGPT does not provide references and is not always specific... you have to get references or risk using fabricated references resulting in plagiarism." (23-year-old female)
Usage Scenarios
Respondents identified specific scenarios where ChatGPT could prove beneficial, including aiding in idea generation, presentation outline production, quick information retrieval, and organizational tasks. However, a consensus emerged that ChatGPT could not replace essential practical aspects of medical practice.
"Yes, it (ChatGPT) can help when you are having difficulty in making a diagnosis... However, clinical skills, a listening ear, empathy, and human emotions are lacking." (24-year-old female)
Challenges and Potential Risks
Several challenges were highlighted, including concerns about the accuracy and currency of information provided by ChatGPT. Participants expressed worries about students becoming overly dependent on the tool, potential ethical issues, and the impact on patient care and safety.
"First, my concern is that you cannot get up-to-date information beyond 2021... and it may give you wrong or irrelevant information." (26-year-old male)
Practical limitations, such as the need for a consistent internet connection and power supply, were also acknowledged.
"Yes, there is the risk of plagiarism... there is a need to upgrade ChatGPT to enable it to provide accurate references by linking it to publication databases on the internet." (24-year-old female)
Discussion
We evaluated the awareness of, attitudes toward, and extent of ChatGPT use among medical and allied health students in Nigeria. We found almost universal awareness of ChatGPT, moderate to good knowledge, and positive attitudes among roughly two-thirds of respondents. Factors influencing ChatGPT use included respondents' sex, age, year of study, knowledge, and attitude. While potential benefits to medical education were recognized, concerns were raised about the risk of dependency, lack of specificity, ethical issues, engendered laziness, misinformation, potential errors, and erosion of critical thinking skills; qualitative insights elaborated on and reinforced these survey concerns.
The universal awareness of ChatGPT among Nigerian students contrasts with findings in Ghana, where respondents had limited knowledge about ChatGPT and AI-powered chatbots [30], yet aligns with reports from India [31], Malaysia [13], and other parts of Africa, such as Sudan [32]. This suggests a regional variation in AI exposure and familiarity. Our findings also support prior research on AI in medical education, confirming peers and social media as primary information sources about ChatGPT, which was similarly observed in Asia [26] and the USA [33].
While moderate to good knowledge levels were noted in our study, they also highlight a gap between theoretical understanding and practical exposure, with only about one-third of participants reporting regular hands-on use of the tool. This disconnect mirrors findings from Sudan [32] and other low-resource settings, where access to AI tools and structured training is limited by infrastructural and institutional barriers. The structural challenges in Nigeria [34], such as erratic internet and electricity, resonate with barriers observed in other low-resource environments such as Sudan [32], but differ from the better infrastructure access reported in countries like Taiwan and the UAE, which facilitates more practical AI engagement [35, 36]. Qualitative findings added depth, with students expressing that their theoretical understanding of AI often exceeded their practical exposure due to limited access to AI tools and training opportunities. Interview data highlighted structural barriers, including unreliable internet, erratic electricity, and insufficient institutional support, which hindered practical adoption.
Participants’ positive attitudes toward ChatGPT align with broader trends across regions, where AI is recognized for enhancing learning experiences in medical and allied health fields [39]. Qualitative findings underscored the value students place on ChatGPT for simplifying complex medical concepts and enhancing research efficiency, consistent with reports from Asia and the USA [26, 33]. However, concerns about dependency, misinformation, and erosion of critical thinking skills were echoed by studies in both low-resource contexts, such as Ghana [30], and high-resource settings, such as the USA [33]. Students in Nigeria, like their counterparts in other regions, emphasized that while ChatGPT could expedite learning, it could not replace hands-on learning, clinical skills, and empathy, which are central to medical practice [26, 34].
A majority of medical and allied health students (78%) reported being aware of generative AI applications in healthcare, indicating widespread theoretical understanding of AI’s potential. However, only one-third (34%) had any practical exposure to AI tools, highlighting a significant gap between awareness and hands-on experience. Interview data revealed that students felt unprepared to engage with AI technologies because of minimal integration of AI into the curriculum and insufficient institutional support, reflecting a disconnect between theoretical knowledge and practical application. This suggests that while there is enthusiasm for AI, structural barriers such as limited access to AI resources and training opportunities may be inhibiting practical adoption.
Most students had used ChatGPT for academic purposes, aligning with trends observed in AI adoption for educational tasks [40]. Respondents identified distinct scenarios in which ChatGPT use was beneficial, including generating ideas, producing presentation outlines, retrieving information quickly, and organizing tasks. However, qualitative insights emphasized that while ChatGPT was useful for academic purposes, it could not substitute for critical components of medical education, such as interpersonal skills, hands-on patient care, empathy, and clinical reasoning.
Ethical concerns, such as data privacy and the implications of AI use in clinical settings, were also raised. These findings mirror those from high-resource settings such as the USA and the UAE [33, 36], where concerns about patient safety and privacy are paramount [46]. However, the combination of limited infrastructure and sparse institutional guidelines in Nigeria may exacerbate these concerns, particularly around ensuring responsible use. Interview findings suggested that clearer institutional policies and improved technological access are needed to integrate ChatGPT and other AI technologies responsibly into medical education.
Predictors of ChatGPT use included respondents' sex, age, year of study, knowledge, and attitude, with male and older students more likely to use the tool. This trend is consistent with global literature from both high-resource settings, such as the USA [33], and low-resource contexts, such as Ghana and Sudan [30, 32], which indicates sex and age differences in educational technology adoption [34, 35]. Knowledge of ChatGPT and a positive attitude toward it emerged as strong predictors of usage; these factors are similarly reported as key determinants of AI adoption across different educational contexts globally [39, 45]. Academic progression was another predictor: final-year students demonstrated more than a twofold higher likelihood of ChatGPT use compared with new entrants. This finding resonates with the academic demands placed on senior students, suggesting that ChatGPT becomes a more valuable resource for complex queries and advanced learning needs as students progress in their studies [43, 45]. Qualitative data further supported these quantitative trends: students indicated that they used ChatGPT more frequently as their academic workload and its complexity increased, while also expressing concerns about reliance on the tool for advanced tasks.
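For readers unfamiliar with how adjusted odds ratios such as the aOR = 2.46 reported for final-year students arise, the sketch below illustrates the underlying computation: a multivariable logistic regression is fitted, and each adjusted odds ratio is the exponential of the corresponding coefficient. This is a minimal illustration on synthetic data, not the authors' analysis code; the predictor names, effect sizes, and Newton-Raphson fitting routine are assumptions chosen for clarity.

```python
import numpy as np

def fit_logistic(X, y, iters=25):
    """Fit a logistic regression by Newton-Raphson.
    X must already include an intercept column."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(X @ beta)))   # predicted probabilities
        grad = X.T @ (y - p)                     # score vector
        hess = X.T @ (X * (p * (1 - p))[:, None])  # observed information
        beta += np.linalg.solve(hess, grad)
    return beta

rng = np.random.default_rng(0)
n = 420  # matches the study's sample size

# Hypothetical binary predictors (e.g., male sex, final-year status)
Z = rng.integers(0, 2, size=(n, 2)).astype(float)
X = np.column_stack([np.ones(n), Z])

# Simulate a binary outcome (prior ChatGPT use) with known coefficients
true_beta = np.array([-0.5, 0.48, 0.90])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(X @ true_beta)))).astype(float)

beta_hat = fit_logistic(X, y)
aor = np.exp(beta_hat[1:])  # adjusted odds ratio for each predictor
print(aor)
```

A 95% confidence interval for each aOR is obtained the same way: exponentiate the coefficient's interval bounds (coefficient ± 1.96 standard errors, with standard errors taken from the inverse of the information matrix).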
This study provides important insights into the adoption of AI in medical education within a low-resource context, specifically Nigeria. While the findings reflect a high level of awareness among medical and allied health students of AI’s potential, the gap between theoretical knowledge and practical application remains a significant challenge, driven largely by insufficient practical training, limited curriculum integration, and a lack of access to AI tools and infrastructure. Specific challenges, such as unreliable internet connectivity, inadequate access to AI software, and insufficient faculty training, further hinder widespread adoption of AI in medical education. Addressing these issues requires targeted educational reforms, including dedicated AI training modules, enhanced faculty development programs, and investment in technology infrastructure. Our findings highlight the need for educational policymakers and institutions to act to equip future healthcare professionals with the AI skills necessary for modern medical practice. By integrating qualitative insights, this study underscores the importance of understanding students’ lived experiences when shaping AI adoption strategies. Future research should explore how institutional collaborations can expand AI access in medical schools, ensuring that Nigeria’s healthcare workforce remains competitive in an increasingly AI-driven global landscape.
The study’s strengths include a mixed-methods design and a robust sample size, which together provide comprehensive insights. However, the study was conducted at a single university in Nigeria, which limits the generalizability of the findings to other regions and institutions, both within the country and in similar low-resource settings. The cross-sectional design also restricts our ability to establish causality between the identified predictors and ChatGPT use. Longitudinal studies are recommended to better understand how attitudes toward ChatGPT and patterns of its use evolve over time and to provide a more robust analysis of the relationships between these factors.
Conclusion
This study contributes nuanced perspectives to the existing literature on AI in medical education. While our findings align with previous reports of high awareness and positive attitudes, balancing the benefits of AI tools with the preservation of critical thinking skills and attention to ethical considerations will be instrumental in shaping the future of medical and health professional education in similar settings.
Author Contribution
ZI, HO, BI, TG, HA, AA, AK: writing — review and editing, writing — original draft, visualization, validation, supervision, software, methodology, investigation, formal analysis, data curation, conceptualization.
TD, FT, AJ: writing — review and editing, software, resources, methodology, data curation.
HB, HS, MA: writing — review and editing, resources, methodology, investigation.
Funding
This work was supported by the Fogarty International Center (FIC) of the U.S. National Institutes of Health (NIH) award number 1R25TW012715. The findings and conclusions are those of the authors and do not necessarily represent the official position of the FIC, NIH, the Department of Health and Human Services, or the government of the United States of America.
Data Availability
Data requests will be considered only in strict accordance with Nigerian data privacy rules and on a case-by-case basis.
Declarations
Conflict of Interest
The authors declare no competing interests.
Footnotes
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
- 1.Nazir A, Wang Z. A comprehensive survey of ChatGPT: advancements, applications, prospects, and challenges. Meta Radiol. 2023;1(2):100022. 10.1016/j.metrad.2023.100022.
- 2.Homolak J. Opportunities and risks of ChatGPT in medicine, science, and academic publishing: a modern Promethean dilemma. Croat Med J. 2023;64(1):1–3. 10.3325/cmj.2023.64.1.
- 3.Yu H. The application and challenges of ChatGPT in educational transformation: new demands for teachers’ roles. Heliyon. 2024;10(2):e24289. 10.1016/j.heliyon.2024.e24289. [Retracted]
- 4.Stoumpos AI, Kitsios F, Talias MA. Digital transformation in healthcare: technology acceptance and its applications. Int J Environ Res Public Health. 2023;20(4):3407. 10.3390/ijerph20043407.
- 5.Jeyaraman M, Priya KS, Jeyaraman N, Nallakumarasamy A, Yadav S, Bondili SK. ChatGPT in medical education and research: a boon or a bane? Cureus. 2023;15(8):e44316. 10.7759/cureus.44316.
- 6.Khan B, Fatima H, Qureshi A, Kumar S, Hanan A, Hussain J, Abdullah S. Drawbacks of artificial intelligence and their potential solutions in the healthcare sector. Biomed Mater Devices. 2023;8:1–8. 10.1007/s44174-023-00063-2.
- 7.Umapathy VR, Rajinikanth BS, Samuel Raj RD, Yadav S, Munavarah SA, Anandapandian PA, Mary AV, Padmavathy K, Akshay R. Perspective of artificial intelligence in disease diagnosis: a review of current and future endeavours in the medical field. Cureus. 2023;15(9):e45684. 10.7759/cureus.45684.
- 8.Zhang P, Kamel Boulos MN. Generative AI in medicine and healthcare: promises, opportunities and challenges. Future Internet. 2023;15(9):286. 10.3390/fi15090286.
- 9.Park SH, Do KH, Kim S, Park JH, Lim YS. What should medical students know about artificial intelligence in medicine? J Educ Eval Health Prof. 2019;16:18. 10.3352/jeehp.2019.16.18.
- 10.Guze PA. Using technology to meet the challenges of medical education. Trans Am Clin Climatol Assoc. 2015;126:260–70.
- 11.Nguyen A, Ngo HN, Hong Y, Dang B, Nguyen BT. Ethical principles for artificial intelligence in education. Educ Inf Technol (Dordr). 2023;28(4):4221–41. 10.1007/s10639-022-11316-w.
- 12.Buldur M, Sezer B. Evaluating the accuracy of Chat Generative Pre-trained Transformer version 4 (ChatGPT-4) responses to United States Food and Drug Administration (FDA) frequently asked questions about dental amalgam. BMC Oral Health. 2024;24(1):605. 10.1186/s12903-024-04358-8.
- 13.George Pallivathukal R, Kyaw Soe HH, Donald PM, Samson RS, Hj Ismail AR. ChatGPT for academic purposes: survey among undergraduate healthcare students in Malaysia. Cureus. 2024;16(1):e53032. 10.7759/cureus.53032.
- 14.Chan CKY, Hu W. Students’ voices on generative AI: perceptions, benefits, and challenges in higher education. Int J Educ Technol High Educ. 2023;20:43–8. 10.1186/s41239-023-00411-8.
- 15.Mohammad Amini M, Jesus M, Fanaei Sheikholeslami D, Alves P, Hassanzadeh Benam A, Hariri F. Artificial intelligence ethics and challenges in healthcare applications: a comprehensive review in the context of the European GDPR mandate. Mach Learn Knowl Extr. 2023;5(3):1023–35. 10.3390/make5030053.
- 16.Ejaz H, McGrath H, Wong BL, Guise A, Vercauteren T, Shapey J. Artificial intelligence and medical education: a global mixed-methods study of medical students’ perspectives. Digit Health. 2022;2(8):20552076221089100. 10.1177/20552076221089099.
- 17.Sallam M, Salim NA, Barakat M, Al-Mahzoum K, Al-Tammemi AB, Malaeb D, Hallit R, Hallit S. Assessing health students’ attitudes and usage of ChatGPT in Jordan: validation study. JMIR Med Educ. 2023;5(9):e48254. 10.2196/48254.
- 18.Abubakar I, Dalglish SL, Angell B, Sanuade O, Abimbola S, Adamu AL, Adetifa IMO, Colbourn T, Ogunlesi AO, Onwujekwe O, Owoaje ET, Okeke IN, Adeyemo A, Aliyu G, Aliyu MH, Aliyu SH, Ameh EA, Archibong B, Ezeh A, Gadanya MA, Ihekweazu C, Ihekweazu V, Iliyasu Z, Kwaku Chiroma A, Mabayoje DA, Nasir Sambo M, Obaro S, Yinka-Ogunleye A, Okonofua F, Oni T, Onyimadu O, Pate MA, Salako BL, Shuaib F, Tsiga-Ahmed F, Zanna FH. The Lancet Nigeria Commission: investing in health and the future of the nation. Lancet. 2022;399(10330):1155–200. 10.1016/S0140-6736(21)02488-0.
- 19.Jeyaraman M, Balaji S, Jeyaraman N, Yadav S. Unraveling the ethical enigma: artificial intelligence in healthcare. Cureus. 2023;15(8):e43262. 10.7759/cureus.43262.
- 20.Wang X, Sanders HM, Liu Y, Seang K, Tran BX, Atanasov AG, Qiu Y, Tang S, Car J, Wang YX, Wong TY, Tham YC, Chung KC. ChatGPT: promise and challenges for deployment in low- and middle-income countries. Lancet Reg Health West Pac. 2023;15(41):100905. 10.1016/j.lanwpc.2023.100905.
- 21.Abdaljaleel M, Barakat M, Alsanafi M, Salim NA, Abazid H, Malaeb D, Mohammed AH, Hassan BAR, Wayyes AM, Farhan SS, Khatib SE, Rahal M, Sahban A, Abdelaziz DH, Mansour NO, AlZayer R, Khalil R, Fekih-Romdhane F, Hallit R, Hallit S, Sallam M. A multinational study on the factors influencing university students’ attitudes and usage of ChatGPT. Sci Rep. 2024;14(1):1983. 10.1038/s41598-024-52549-8. (Erratum in: Sci Rep. 2024 Apr 9;14(1):8281).
- 22.Labrague LJ, Aguilar-Rosales R, Yboa BC, Sabio JB. Factors influencing student nurses’ readiness to adopt artificial intelligence (AI) in their studies and their perceived barriers to accessing AI technology: a cross-sectional study. Nurse Educ Today. 2023;130:105945. 10.1016/j.nedt.2023.105945.
- 23.Lwanga SK, Lemeshow S. Sample size determination in health studies: a practical manual. World Health Org. 1991;29–32. 10.2307/2290547.
- 24.Mohammed M, Kumar N, Zawiah M, Al-Ashwal FY, Bala AA, Lawal BK, Wada AS, Halboup A, Muhammad S, Ahmad R, Sha’aban A. Psychometric properties and assessment of knowledge, attitude, and practice towards ChatGPT in pharmacy practice and education: a study protocol. J Racial Ethn Health Disparities. 2023. 10.1007/s40615-023-01696-1.
- 25.Mosleh R, Jarrar Q, Jarrar Y, Tazkarji M, Hawash M. Medicine and pharmacy students’ knowledge, attitudes, and practice regarding artificial intelligence programs: Jordan and West Bank of Palestine. Adv Med Educ Pract. 2023;13(14):1391–400. 10.2147/AMEP.S433255.
- 26.Tangadulrat P, Sono S, Tangtrakulwanich B. Using ChatGPT for clinical practice and medical education: cross-sectional survey of medical students’ and physicians’ perceptions. JMIR Med Educ. 2023;22(9):e50658. 10.2196/50658.
- 27.Alabdullah MN, Alabdullah H, Kamel S. Knowledge, attitude, and practice of evidence-based medicine among resident physicians in hospitals of Syria: a cross-sectional study. BMC Med Educ. 2022;22(1):785. 10.1186/s12909-022-03840-7.
- 28.Katz MH. Multivariable analysis: a practical guide for clinicians and public health researchers. Cambridge, U.K.: Cambridge University Press; 2011.
- 29.Hosmer DW, Lemeshow S. Applied logistic regression. New York: Wiley; 2013.
- 30.Adarkwah MA, Amponsah S, van Wyk MM, Huang R, Tlili A, Shehata B, Metwally AHS, Wang H. Awareness and acceptance of ChatGPT as a generative conversational AI for transforming education by Ghanaian academics: a two-phase study. J Adv Learn Technol. 2023;6(2):26. 10.37074/jalt.2023.6.2.26.
- 31.Ghosh A, Maini Jindal N, Gupta VK, Bansal E, Kaur Bajwa N, Sett A. Is ChatGPT’s knowledge and interpretative ability comparable to first professional MBBS (Bachelor of Medicine, Bachelor of Surgery) students of India in taking a medical biochemistry examination? Cureus. 2023;15(10):e47329. 10.7759/cureus.47329.
- 32.Jaber Amin MH, Mohamed Elhassan Elmahi MA, Abdelmonim GA, Fadlalmoula GA, Jaber Amin JH, Khalid Alrabee NH, Awad MH, Mohamed Omer ZY, Abu Dayyeh NTI, Hassan Abdalkareem NA, Meisara Seed Ahmed EMO, Hassan Osman HA, Mohamed HAO, Mohamedtoum Babiker AE, Diab Alnour AA, Mohamed Ahmed EA, Elamin Garban EH, Ali Mohammed NS, Mohamed Ahmed KAH, Beig MA, Shafique MA, Mohamed Elhag MG, Elfakey Omer MM, Abuzaid Ali AA, Mohamed Shatir DH, Ali MohamedElhassan HO, Bin Saleh KHA, Ali MB, Elzber Abdalla SS, Alhaj WM, Khalil Mergani ES, Mohammed HH. Knowledge, attitude, and practice of artificial intelligence among medical students in Sudan: a cross-sectional study. Ann Med Surg (Lond). 2024;86(7):3917–3923. 10.1097/MS9.0000000000002070.
- 33.Zhang JS, Yoon C, Williams DKA, Pinkas A. Exploring the usage of ChatGPT among medical students in the United States. J Med Educ Curric Dev. 2024;25(11):23821205241264696. 10.1177/23821205241264695.
- 34.Oluwadiya KS, Adeoti AO, Agodirin SO, Nottidge TE, Usman MI, Gali MB, Onyemaechi NO, Ramat AM, Adedire A, Zakari LY. Exploring artificial intelligence in the Nigerian medical educational space: an online cross-sectional study of perceptions, risks and benefits among students and lecturers from ten universities. Niger Postgrad Med J. 2023;30(4):285–92. 10.4103/npmj.npmj_186_23.
- 35.Hu JM, Liu FC, Chu CM, Chang YT. Health care trainees’ and professionals’ perceptions of ChatGPT in improving medical knowledge training: rapid survey study. J Med Internet Res. 2023;18(25):e49385. 10.2196/49385.
- 36.Alkhaaldi SMI, Kassab CH, Dimassi Z, Oyoun Alsoud L, Al Fahim M, Al Hageh C, Ibrahim H. Medical student experiences and perceptions of ChatGPT and artificial intelligence: cross-sectional study. JMIR Med Educ. 2023;22(9):e51302. 10.2196/51302.
- 37.Arif TB, Munaf U, Ul-Haque I. The future of medical education and research: is ChatGPT a blessing or blight in disguise? Med Educ Online. 2023;28(1):2181052. 10.1080/10872981.2023.2181052.
- 38.Hasanein AM, Sobaih AEE. Drivers and consequences of ChatGPT use in higher education: key stakeholder perspectives. Eur J Investig Health Psychol Educ. 2023;13(11):2599–614. 10.3390/ejihpe13110181.
- 39.Sallam M. ChatGPT utility in healthcare education, research, and practice: systematic review on the promising perspectives and valid concerns. Healthcare (Basel). 2023;11(6):887. 10.3390/healthcare11060887.
- 40.Farhi F, Jeljeli R, Aburezeq I, Dweikat FF, Al-shami SA, Slamene R. Analyzing the students’ views, concerns, and perceived ethics about chat GPT usage. Comput Educ Artif Intell. 2023;5:100180. 10.1016/j.caeai.2023.100180.
- 41.Nouraldeen RM. The impact of technology readiness and use perceptions on students’ adoption of artificial intelligence: the moderating role of gender. Dev Learn Organiz. 2023;37(3):7–10. 10.1108/DLO-07-2022-0133.
- 42.Shorey S, Mattar C, Pereira TL, Choolani M. A scoping review of ChatGPT’s role in healthcare education and research. Nurse Educ Today. 2024;135:106121. 10.1016/j.nedt.2024.106121.
- 43.Jo H, Bang Y. Analyzing ChatGPT adoption drivers with the TOEK framework. Sci Rep. 2023;13(1):22606. 10.1038/s41598-023-49710-0.
- 44.Menon D, Shilpa K. “Chatting with ChatGPT”: analyzing the factors influencing users’ intention to use the Open AI’s ChatGPT using the UTAUT model. Heliyon. 2023;9(11):e20962. 10.1016/j.heliyon.2023.e20962.
- 45.Liu J, Wang C, Liu S. Utility of ChatGPT in clinical practice. J Med Internet Res. 2023;28(25):e48568. 10.2196/48568.
- 46.Peacock J, Austin A, Shapiro M, Battista A, Samuel A. Accelerating medical education with ChatGPT: an implementation guide. MedEdPublish (2016). 2023;13:64. 10.12688/mep.19732.2.