Skip to main content
PLOS One logoLink to PLOS One
. 2025 Jan 9;20(1):e0313776. doi: 10.1371/journal.pone.0313776

Widespread use of ChatGPT and other Artificial Intelligence tools among medical students in Uganda: A cross-sectional study

Elizabeth Ajalo 1,, David Mukunya 1,, Ritah Nantale 1,*, Frank Kayemba 1, Kennedy Pangholi 1, Jonathan Babuya 1, Suzan Langoya Akuu 2, Amelia Margaret Namiiro 3, Yakobo Baddokwaya Nsubuga 4, Joseph Luwaga Mpagi 5, Milton W Musaba 6, Faith Oguttu 1, Job Kuteesa 7, Aloysius Gonzaga Mubuuke 8, Ian Guyton Munabi 9, Sarah Kiguli 3
Editor: Timothy Omara10
PMCID: PMC11717177  PMID: 39787055

Abstract

Background

Chat Generative Pre-trained Transformer (ChatGPT) is a 175-billion-parameter natural language processing model that uses deep learning algorithms trained on vast amounts of data to generate human-like texts such as essays. Consequently, it has introduced new challenges and threats to medical education. We assessed the use of ChatGPT and other AI tools among medical students in Uganda.

Methods

We conducted a descriptive cross-sectional study among medical students at four public universities in Uganda from 1st November 2023 to 20th December 2023. Participants were recruited by stratified random sampling. We used a semi-structured questionnaire to collect data on participants’ socio-demographics and use of AI tools such as ChatGPT. Our outcome variable was use of AI tools. Data were analyzed descriptively in Stata version 17.0. We conducted a modified Poisson regression to explore the association between use of AI tools and various exposures.

Results

A total of 564 students participated. Almost all (93%) had heard about AI tools and more than two-thirds (75.7%) had ever used AI tools. Regarding the AI tools used, majority (72.2%) had ever used ChatGPT, followed by SnapChat AI (14.9%), Bing AI (11.5%), and Bard AI (6.9%). Most students use AI tools to complete assignments (55.5%), preparing for tutorials (39.9%), preparing for exams (34.8%) and research writing (24.8%). Students also reported the use of AI tools for nonacademic purposes including emotional support, recreation, and spiritual growth. Older students were 31% less likely to use AI tools compared to younger ones (Adjusted Prevalence Ratio (aPR):0.69; 95% CI: [0.62, 0.76]). Students at Makerere University were 66% more likely to use AI tools compared to students in Gulu University (aPR:1.66; 95% CI:[1.64, 1.69]).

Conclusion

The use of ChatGPT and other AI tools was widespread among medical students in Uganda. AI tools were used for both academic and non-academic purposes. Younger students were more likely to use AI tools compared to older students. There is a need to promote AI literacy in institutions to empower older students with essential skills for the digital age. Further, educators should assume students are using AI and adjust their way of teaching and setting exams to suit this new reality. Our research adds further evidence to existing voices calling for regulatory frameworks for AI in medical education.

Introduction

In November 2022, Open AI launched an artificial intelligence-powered chat box called Chat Generative Pre-trained Transformer (ChatGPT) [1]. Unlike previous versions of internet searching, ChatGPT is a 175-billion-parameter natural language processing model that uses deep learning algorithms trained on vast amounts of data to generate human-like texts such as essays, abstracts, manuscripts, PowerPoint presentations, and book summaries among others [2]. As such, it has drawn unprecedented attention from the academic community [3]. ChatGPT became the fastest-growing user application in history, reaching 100 million active users as of January 2023, just two months after its launch, and recorded 14.6 billion visits in its first year [4].

ChatGPT is a powerful tool with the potential to transform medical education [4, 5]. A recent study by Aidan et al., evaluated ChatGPT’s potential as a medical education tool and revealed that the model achieves the equivalent of a passing score for a third-year medical student [2]. Similarly, a study by Kung et al. showed that ChatGPT passes the United States Medical Licensing Exam (USMLE) with a score in the 60th percentile [6]. Further, ChatGPT can be used for personalized learning, research assistance, generating case scenarios, clinical decision-making, creating content to facilitate learning, and language translation [710].

However, ChatGPT has introduced new challenges and threats to education [8, 1012]. In medical education, studies have revealed various concerns including issues with academic integrity, data accuracy, potential detriments to learning, plagiarism, privacy, and security concerns [9, 1316]. Currently, there is no reliable way to differentiate between ChatGPT written text and human-written text. This poses a great challenge to educators when marking essays. When used responsibly; ChatGPT has potential to revolutionize medical education. AI tools such as ChatGPT are valuable supplementary resources, more especially in areas where access to up-to-date medical textbooks and academic materials may be limited due to resource constraints [17]. Additionally, AI tools can address faculty shortages, support research, and innovation, and advance critical thinking skills [17, 18]. To fully utilize these benefits, there is an urgent need to develop institutional AI policies and guidelines, thereby harnessing the advantages while mitigating associated risks in medical education [17]. Universities in some high-income countries have developed guidelines to guide the use of AI in education. However, institutions in Uganda and many other LMICs have no guidelines on how to use ChatGPT and other similar Artificial Intelligence (AI) tools [19, 20]. To develop such guidance; we need to know: 1) what proportion of medical students are using ChatGPT and other AI tools; 2) what are the common AI tools used by medical students; 3) what are these AI tools being used for; and 4) what factors are associated with the use of AI tools among medical students.

Materials and methods

Study design

This was a descriptive cross-sectional study aimed at assessing the use of ChatGPT and other AI tools among medical students in Uganda.

Study setting

The study was conducted at medical schools of four public universities in Uganda from 1st November 2023 to 20th December 2023. These included Busitema University, Makerere University, Mbarara University of Science and Technology, and Gulu University. These universities were selected because they are the largest and oldest public universities that offer undergraduate medical degrees in Uganda.

Busitema University medical school was established in 2013 and is located in Mbale city, eastern Uganda. It provides medical education at undergraduate and postgraduate levels.

Makerere University Medical school was founded in 1924 and is located on Mulago Hill in north-east Kampala, Uganda’s capital and largest city. It provides medical education at diploma, undergraduate, and postgraduate levels.

Mbarara University of Science and Technology was founded in 1989 and is located on the premises of Mbarara Regional Referral Hospital, in the city of Mbarara, Western Uganda.

Gulu university medical school was founded in 2004 and is located Gulu city, the largest urban center in Northern Uganda, approximately 345 kilometers (214 mi), by road, north of Kampala, Uganda’s capital and largest city. It provides medical education at diploma, undergraduate and postgraduate levels.

Study population

We enrolled undergraduate medical students pursuing Bachelor of Medicine and Bachelor of Surgery at the selected universities. Undergraduate medical students not available at the time of study were excluded.

Sample size estimation and sampling technique

This was calculated by Open-Epi calculator (http://www.openepi.com). We assumed a 50% prevalence of Open AI tools use, precision of 5%, and design effect of 1.5. This gave us a total sample size of 639 after assuming a non-response rate of 10%. Of these, we were able to reach out to 564 participants, 75 were unreachable. Stratified random sampling was used to recruit the eligible participants. Random sampling was done in each stratum based upon the percentage that each subgroup represented in the population. A list of all the students at each faculty of the selected universities was obtained, and grouped as per the year of study. Random numbers were assigned to the names, they were then filled in a random selector program, that generated a random list of the numbers to which the names were attached. The selected students were approached and given information about the study. Those that consented to participate in the study were recruited and the link to the questionnaire was sent to them.

Study variables

Our outcome variable was use of AI tools such as ChatGPT among medical students. Participants were asked if they have ever used AI tools such as ChatGPT, this was recorded as Yes (denoted as 1) or No (denoted as 0). The independent variables included socio-demographic characteristics such as age, religion, marital status, sex, university, year of study and awareness about AI tools.

Data collection tool and procedures

We used a semi structured self-administered questionnaire developed based on literature [17] to collect data. The questionnaire included questions on participant’s socio-demographics (age, sex, religion, course of study and year of study) and use of AI tools. Questions under use of AI tools assessed awareness about AI tools, actual use of AI tools, which AI tools are being used and what these AI tools are being used for. Data were collected by twenty trained research assistants who were medical students in the selected universities (five students per university). Eligible participants were approached by the research assistant, informed about the study and those who consented were sent to a link to the questionnaire via their email or WhatsApp number. The questionnaire was developed electronically in KoBo Toolbox which is an open-source software developed by the Harvard Humanitarian Initiative with support from United Nations agencies, CISCO, and partners to support data management by researchers and humanitarian organizations (https://www.kobotoolbox.org/). The servers are secure and encrypted with strong safe guards and protection against data loss. participants’ socio-demographics including.

Data analysis

Data were cleaned and analyzed in Stata version 17.0 (StataCorp; College Station, TX, USA). We summarized categorical data using frequencies and percentages, and numerical data as mean (standard deviation) and median (interquartile range) as appropriate. Bar graphs were used for data visualization. We conducted a generalized linear regression for the Poisson family with a log link to estimate prevalence ratios between use of AI tools and various exposures. We used robust cluster variance estimation to adjust for clustering at the University level. Factors with a p-value less than 0.05 at multivariable regression analysis were considered significant.

Ethical considerations

Ethics approval to conduct the study was granted by the Busitema University Research and Ethics committee (BUFHS-2023-79). Written informed consent was obtained from all the participants before recruitment into the study. Participants were informed about the study objectives, risks, and benefits before obtaining their consent and they received an internet data refund after taking part in the study. There was no coercion for participation in the study, and participants were free to withdraw from the study at any time. Identifiable information such as participant names weren’t collected, a unique code was assigned to each participant to ensure anonymity. The data collected were kept in a secure computer which was accessible only to the investigators.

Results

Participant characteristics (Table 1)

Table 1. Characteristics of the participants.

Characteristic (n = 564) Frequency (%)
Age in years
1. 18–25 406 (72.0%)
2. 26–34 131 (23.2%)
3. 35–46 27 (4.8%)
Sex
1. Female 169 (30.0%)
2. Male 395 (70.0%)
Religion
1. Anglican/Protestant 172 (30.5%)
2. Pentecostal/Born again 182 (32.3%)
3. Catholic 182 (32.3%)
4. Muslim 47 (8.3%)
5. Seventh Day Adventist 34 (6.0%)
6. Others* 10 (1.8%)
University
1. Gulu University 79 (14.0%)
2. Busitema University 161 (28.5%)
3. Makerere University 134 (23.8%)
4. Mbarara University of Science and Technology 190 (33.7%)
Current year of study
1. Year 1 66 (11.7%)
2. Year 2 74 (13.1%)
3. Year 3 167 (29.6%)
4. Year 4 149 (26.4%)
5. Year 5 108 (19.1%)
Marital status
1. Single 461 (81.7%)
2. Married 103 (18.3%)

*Jehovah’s witness, atheist, traditionalist

A total of 564 medical students participated in this study. Majority of the respondents (72.0%; 406/564) were aged between 18 and 25 years. The median age (IQR) was 23.0 (22.0–26.5). More than two-thirds (70.0%; 395/564) were male and the majority (81.7%; 461/564) were single. More than a quarter (30.5%; 172/564) were Anglicans, and (32.3%; 182/564) were Catholics. A third of the students, (33.3%; 190/564) were from Mbarara University of Science and Technology, (28.5%; 161/564) Busitema University, (23.8%; 134/564) Makerere University and (14.0%; 79/564) Gulu University.

Use of ChatGPT and other AI tools among medical students

Almost all 93% (522/564) had heard about AI tools. Most heard about AI tools from friends (71.0%), 41.6% (217/522) social media, 5.8% (30/522) lecturer and 2% (11/522) cited other sources which included; parents, and religious leader. A total of 427 out of 564 participants (75.7%) had ever used AI tools. Majority 72.2% had used ChatGPT, 14.9% Snapchat AI, 11.5% Bing AI and 6.9% Bard AI. Other AI tools are in Fig 1.

Fig 1.

Fig 1

Regarding areas of applications, students used AI tools for both academic and non-academic purposes. For academic purposes, most students reported that they use AI tools to complete assignments (55.5%), preparing for tutorials (39.9%), preparing for exams (34.8%) and research writing (24.8%). Other academic uses are in Fig 2.

Fig 2.

Fig 2

Non-academic uses reported by the students include; AI tools as a personal assistant (9.2%), emotional support (7.3%), recreation (6.2%) and counselling (5.5%). Other non-academic uses are shown in Fig 3.

Fig 3.

Fig 3

Factors associated with use of ChatGPT and other AI tools among medical students in Uganda (Table 2)

Table 2. Factors associated with use of ChatGPT and other AI tools among medical students in Uganda.

Variable cPR [95% CI] P-value  aPR [95% CI] P-value
Age Category        
18–25 1 1
26–34 0.79 [0.66, 0.95] 0.012 0.87 [0.67, 1.12] 0.280
35–46 0.64 [0.56, 0.73] <0.001 0.69 [0.62, 0.76] <0.001
Sex
Female 1 1
Male 0.96 [0.80, 1.14] 0.614 1.01 [0.93, 1.08] 0.868
Religion
Anglican/Protestant 1 1
Pentecostal/Born again 0.95 [0.88, 1.03] 0.217 0.99 [0.91, 1.08] 0.870
Catholic 0.94 [0.82, 1.08] 0.371 0.95 [0.85, 1.05] 0.285
Muslim 0.87 [0.62, 1.21] 0.397 0.85 [0.61, 1.19] 0.344
Seventh Day Adventist 1.05 [0.93, 1.18] 0.437 1.02 [0.93, 1.12] 0.647
Others 1.15 [1.04, 1.27] 0.009 1.13 [1.02, 1.26] 0.016
University
Gulu University 1 1
Busitema University 1.22 [1.22, 1.22] <0.001 1.31 [1.21, 1.41] <0.001
Makerere University 1.70 [1.70, 1.70] <0.001 1.66 [1.64, 1.69] <0.001
Mbarara University of Science and Technology 1.58 [1.58, 1.58] <0.001 1.58 [1.54, 1.62] <0.001
Year of Study
Year 1 1 1
Year 2 0.71 [0.54, 0.94] 0.017 0.82 [0.72, 0.94] 0.005
Year 3 0.80 [0.71, 0.89] <0.001 0.95 [0.83, 1.09] 0.467
Year 4 0.83 [0.59, 1.16] 0.276 1.03 [0.91, 1.17] 0.665
Year 5 0.88 [0.71, 1.08] 0.214 0.96 [0.88, 1.04] 0.339

Medical students aged 35 to 46 years were 31% less likely to use AI tools as compared to those aged less than 35 years (aPR: 0.69; 95% CI: [0.62, 0.76]).

Students in Makerere University were 66% more likely to use AI tools compared to students in Gulu University (aPR: 1.66; 95% CI: [1.64, 1.69]).

Year two medical students were 18% less likely to use AI tools as compared to those in year one (aPR: 0.82; 95% CI: [0.72, 0.94]).

Discussion

In this study, we assessed the use of ChatGPT and other AI tools among medical students in Uganda. The use of ChatGPT and other AI tools among medical students was widespread (76%). This could be due to increased awareness about AI tools among students as shown in our findings; almost all students (93%) had ever heard about AI tools. In addition, the increased familiarity among students regarding the capabilities and benefits of AI tools in medical education could have contributed to the widespread use of AI tools. Comparable to our findings, a study in Germany found a 63.4% use of AI tools among students [21]. However, a study by Weidener et al., revealed a relatively lower AI use of 38.8% among medical students in Germany, Austria, and Switzerland [22]. Furthermore, a study in Jordan among health students showed that only 11.3% had ever used AI tools such as ChatGPT [17]. Another study in Jordan and the West Bank of Palestine among medicine and pharmacy students also showed that despite most students being aware of AI tools, less than half had used them in their education [23]. The discrepancy across studies could be explained by the differences in study periods and characteristics of the study populations. For instance, the study in Jordan was done between February and March 2023 [17], probably AI tools had not gained popularity among students during that time as compared to our study done between November and December 2023, where AI tools had gained much popularity amongst the students. Additionally, some AI tools weren’t accessible before mid-2023, for instance, Bing Chat was not broadly accessible until May 2023 [24].

Our findings showed that ChatGPT was the most commonly (72%) used AI tool among medical students. Similarly, studies done in Germany and Nigeria revealed that the majority of the students indicated ChatGPT as the tool they use [21, 25]. This could be attributed to the fact that ChatGPT is the first generative AI tool readily available to the public and the fact that it is easy to use [12]. Furthermore, ChatGPT’s performance in answering various medical exams has been evaluated and it has demonstrated the potential to pass most of these exams [5, 6, 26]. Other AI tools used by more than 5% of the students included Snapchat AI, Bing AI, and Bard AI. Most of these AI tools became freely available to the public in mid-2023, probably they hadn’t gained much popularity among students as compared to ChatGPT which was made freely accessible to the public on November 30, 2022 [24]. However, studies conducted in Germany and India also revealed the use of Bing and Bard AI tools among medical students [17, 21].

Our study revealed that students used AI tools to complete assignments, to prepare for tutorials, and to prepare for exams. Consistent with our findings, several studies have revealed that students use AI tools to complete assignments and possibly cheat in exams [16, 21, 27]. AI tools such as ChatGPT can generate human-like texts to user questions and thus can be used by the students to complete written assignments and online examinations [8]. Students also reported the use of AI tools for research writing, summarizing information, translation, transcription, creating power point presentations, and clinical decision making. Similar findings were revealed in a study by Biri et al., among medical students in India [17]. This indicates a growing recognition of AI tools as applications that can bridge gaps in understanding and enhancing medical students’ learning experience [17].

We also found that students use AI tools for non-academic purposes including; personal assistants, emotional support, counseling, spiritual growth, recreation, companionship, and learning new skills. Further studies need to explore the role of AI tools in the mental and social health of medical students.

Medical students aged greater than 35 years were 31% less likely to use AI tools as compared to those aged less than 35 years. In Uganda, there are undergraduate medical students who are aged more than 35 years. These majorly include those who study medical course at diploma level and later join medical school for a medical undergraduate bachelor degree. Older students may be discouraged from using AI due to fixed beliefs that they are not efficient with technology [2830]. Furthermore, creators do not often consider the older generation when designing novel technology, thus biased perspectives on AI use emerge. Additionally, older adults are slower to adopt new technology compared to younger adults [2830].

Our findings also revealed that students at Makerere University were 66% more likely to use AI tools compared to students at Gulu University. Students at Makerere University could have a higher exposure to AI as compared to those at Gulu University. Makerere University has high student numbers as compared to Gulu University and is located in the capital city of Uganda, an area more urban than Gulu city where Gulu University is located. Urban areas promote technology awareness and use. In our study, year two medical students were 18% less likely to use AI tools than those in year one. Students in year one could be more exposed to AI as compared to those in year two. Further, year one students joined the university in August 2023, when AI tools such as ChatGPT had just become famous at our universities and thus are more likely to use AI than year two students who were already used to the conventional methods of learning.

Findings from this study do contribute to the emerging debates around the use of AI tools by medical students and have key implications for learning. Educators should assume students are using AI and adjust their way of teaching and setting of exams to suit this new reality. Further, educators and institutions need to develop AI policies and guidelines to ensure that future medical professionals are adequately prepared to navigate the challenges and opportunities presented by AI in medical education. Universities in high-income countries have come up with guidelines including Guidelines for the Use of Artificial Intelligence in University Courses by Juan David Gutiérrez at Universidad del Rosario, Initial guidelines for the use of Generative AI tools at Harvard by Havard University, A Guide for Students: Should I use AI? by Ulster University and Student guidance on using Generative Artificial Intelligence tools ethically for study by the University of Birmingham [19, 20, 31, 32]. The guidelines focus on the need to promote AI literacy, the need to cite AI sources, and the limitations of AI [19, 20, 33]. The guidelines also emphasize informed, transparent, ethical and responsible use of high-risk AI (generative AI such as Chat-GPT and stable diffusion AI such as DALL-E 2) in education [31]. Informed use requires that prior to using the tool, the student should research who or what company developed the tool, how it was developed, how it works, what functions it can perform, and what limitations and/or risks it presents [31]. Transparent use entails indicating in detail which tool the student used and how he/she used it [31]. Ethical use includes ensuring that one must distinguish what was written or produced directly by the student and what was generated by an AI tool [31]. 
Responsible use emphasizes that the use of these AI tools should be limited to early stages of the student’s work, to inspire or suggest directions, not to produce content that will later be included in his/her deliverables [31]. Guidance has also included examples of what AI can do well and what AI cannot do well [20]. Universities in low-income countries could borrow upon these guidelines and contextualize them. The guidelines would define the boundaries in which AI should be used in education. A key difference between high-income countries is the fact that most students in low-income countries use unpaid versions that are not up to date [34]. As such, guidelines in low-income countries should contextualize AI use [35].

Strengths and limitations

To the best of our knowledge, this is one of the few studies done to assess the use of AI tools such as ChatGPT among medical students in low-resource settings. In addition, we included four universities, one from each region of the country thus our findings may be generalizable to all medical students in the country. Although our findings provide valuable insights into the state of AI use in medical education in Uganda, broader multinational studies would offer a more comprehensive understanding. One key limitation of our study is the reliance on self-reported data from medical students, which might be subject to social desirability bias. To mitigate this, we used a self-administered questionnaire for data collection, the questionnaires were anonymous and the students were given identification numbers. As such the social pressure while answering questions was reduced.

Conclusion

The use of ChatGPT and other AI tools was widespread among medical students in Uganda. AI tools were used for both academic and non-academic purposes. Younger students were more likely to use AI tools compared to older students. There is a need for AI training programs in institutions to empower older students with essential skills for the digital age. Further, our research adds more evidence to existing voices calling for regulatory frameworks of AI in medical education to ensure that future medical professionals are adequately prepared to navigate the challenges and opportunities presented by AI in medical education.

Supporting information

S1 Dataset. Anonymised data set.

(XLSX)

pone.0313776.s001.xlsx (379.4KB, xlsx)

Acknowledgments

The authors would like to thank the research assistants, study participants, university staff, and members of the HEPI consortium.

Data Availability

The datasets used have been uploaded as supplementary information.

Funding Statement

This study was conducted with support from Fogarty International Center of the National Institutes of Health, U.S. Department of State’s Office of the U.S. Global AIDS Coordinator and Health Diplomacy (S/GAC), and President’s Emergency Plan for AIDS Relief (PEPFAR) under Award Number 1R25TW011213. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

  • 1.ChatGPT: Optimizing language models for dialogue [Available from: https://openai.com/blog/chatgpt/]
  • 2.Gilson A, Safranek CW, Huang T, Socrates V, Chi L, Taylor RA, et al. : How Does ChatGPT Perform on the United States Medical Licensing Examination? The Implications of Large Language Models for Medical Education and Knowledge Assessment. JMIR medical education 2023, 9:e45312. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 3.Benuyenah V: Commentary: ChatGPT use in higher education assessment: Prospects and epistemic threats. Journal of Research in Innovative Teaching & Learning 2023, 16(1):134–135. [Google Scholar]
  • 4.Eysenbach G: The role of ChatGPT, generative language models, and artificial intelligence in medical education: a conversation with ChatGPT and a call for papers. JMIR medical education 2023, 9(1):e46885. doi: 10.2196/46885 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5.Das D, Kumar N, Longjam LA, Sinha R, Roy AD, Mondal H, et al. : Assessing the capability of ChatGPT in answering first-and second-order knowledge questions on microbiology as per competency-based medical education curriculum. Cureus 2023, 15(3). doi: 10.7759/cureus.36034 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 6.Kung TH, Cheatham M, Medenilla A, Sillos C, De Leon L, Elepaño C, et al. : Performance of ChatGPT on USMLE: Potential for AI-assisted medical education using large language models. PLoS digital health 2023, 2(2):e0000198. doi: 10.1371/journal.pdig.0000198 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 7.Khan RA, Jawaid M, Khan AR, Sajjad M: ChatGPT-Reshaping medical education and clinical management. Pakistan Journal of Medical Sciences 2023, 39(2):605. doi: 10.12669/pjms.39.2.7653 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 8.Kasneci E, Seßler K, Küchemann S, Bannert M, Dementieva D, Fischer F, et al. : ChatGPT for good? On opportunities and challenges of large language models for education. Learning and Individual Differences 2023, 103:102274. [Google Scholar]
  • 9.Sallam M: ChatGPT utility in healthcare education, research, and practice: systematic review on the promising perspectives and valid concerns. In: Healthcare: 2023: MDPI; 2023: 887. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10.Adiguzel T, Kaya MH, Cansu FK: Revolutionizing education with AI: Exploring the transformative potential of ChatGPT. Contemporary Educational Technology 2023, 15(3):ep429. [Google Scholar]
  • 11.Rahman MM, Watanobe Y: Chatgpt for education and research: Opportunities, threats, and strategies. Applied Sciences 2023, 13(9):5783. [Google Scholar]
  • 12.Sok S, Heng K: ChatGPT for education and research: A review of benefits and risks. Available at SSRN 4378735 2023. [Google Scholar]
  • 13.Sallam M, Salim N, Barakat M, Al-Tammemi A: ChatGPT applications in medical, dental, pharmacy, and public health education: A descriptive study highlighting the advantages and limitations. Narra J 2023, 3(1):e103–e103. doi: 10.52225/narra.v3i1.103 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 14.Liu Y, Han T, Ma S, Zhang J, Yang Y, Tian J, et al. : Summary of chatgpt/gpt-4 research and perspective towards the future of large language models. arXiv preprint arXiv:230401852 2023. [Google Scholar]
  • 15. Dwivedi YK, Kshetri N, Hughes L, Slade EL, Jeyaraj A, Kar AK, et al.: “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. International Journal of Information Management 2023, 71:102642.
  • 16. Jeyaraman M, SP K, Jeyaraman N, Nallakumarasamy A, Yadav S, Bondili SK: ChatGPT in Medical Education and Research: A Boon or a Bane? Cureus 2023, 15(8):e44316. doi: 10.7759/cureus.44316
  • 17. Biri SK, Kumar S, Panigrahi M, Mondal S, Behera JK, Mondal H: Assessing the Utilization of Large Language Models in Medical Education: Insights From Undergraduate Medical Students. Cureus 2023, 15(10):e47468. doi: 10.7759/cureus.47468
  • 18. Vagha S, Mishra V, Joshi Y: Reviving medical education through teachers training programs: A literature review. Journal of Education and Health Promotion 2023, 12:277. doi: 10.4103/jehp.jehp_1413_22
  • 19. A Guide for Students: Should I use AI? [https://www.ulster.ac.uk/learningenhancement/cqe/strategies/ai/guidance-for-students]
  • 20. Initial guidelines for the use of Generative AI tools at Harvard [https://huit.harvard.edu/ai/guidelines]
  • 21. von Garrel J, Mayer J: Artificial Intelligence in studies—use of ChatGPT and AI-based tools among students in Germany. Humanities and Social Sciences Communications 2023, 10(1):799.
  • 22. Weidener L, Fischer M: Artificial Intelligence in Medicine: Cross-Sectional Study Among Medical Students on Application, Education, and Ethical Aspects. JMIR Medical Education 2024, 10:e51247. doi: 10.2196/51247
  • 23. Mosleh R, Jarrar Q, Jarrar Y, Tazkarji M, Hawash M: Medicine and Pharmacy Students’ Knowledge, Attitudes, and Practice regarding Artificial Intelligence Programs: Jordan and West Bank of Palestine. Advances in Medical Education and Practice 2023, 14:1391–1400. doi: 10.2147/AMEP.S433255
  • 24. Mehdi Y: Reinventing search with a new AI-powered Microsoft Bing and Edge, your copilot for the web. Official Microsoft Blog 2023, 7.
  • 25. Oluwadiya KS, Adeoti AO, Agodirin SO, Nottidge TE, Usman MI, Gali MB, et al.: Exploring Artificial Intelligence in the Nigerian Medical Educational Space: An Online Cross-sectional Study of Perceptions, Risks and Benefits among Students and Lecturers from Ten Universities. Nigerian Postgraduate Medical Journal 2023, 30(4):285–292. doi: 10.4103/npmj.npmj_186_23
  • 26. Roos J, Kasapovic A, Jansen T, Kaczmarczyk R: Artificial Intelligence in Medical Education: Comparative Analysis of ChatGPT, Bing, and Medical Students in Germany. JMIR Medical Education 2023, 9:e46482. doi: 10.2196/46482
  • 27. Arif TB, Munaf U, Ul-Haque I: The future of medical education and research: Is ChatGPT a blessing or blight in disguise? Taylor & Francis; 2023, 28:2181052.
  • 28. Mariano J, Marques S, Ramos MR, Gerardo F, Cunha CLd, Girenko A, et al.: Too old for technology? Stereotype threat and technology use by older adults. Behaviour & Information Technology 2022, 41(7):1503–1514.
  • 29. Zhang M: Older people’s attitudes towards emerging technologies: A systematic literature review. Public Understanding of Science 2023, 32(8):948–968. doi: 10.1177/09636625231171677
  • 30. Yap Y-Y, Tan S-H, Choon S-W: Elderly’s intention to use technologies: A systematic literature review. Heliyon 2022, 8(1):e08765. doi: 10.1016/j.heliyon.2022.e08765
  • 31. Guidelines for the Use of Artificial Intelligence in University Courses [https://juangutierrez.co/wp-content/uploads/2023/08/guidelines-for-the-use-of-artificial-intelligence-in-university-contexts-v5.0.pdf]
  • 32. Student guidance on using Generative Artificial Intelligence tools ethically for study [https://intranet.birmingham.ac.uk/student/libraries/asc/student-guidance-gai.aspx]
  • 33. Artificial Intelligence Policies and Guidelines [https://alresources.nd.edu/resources/artificial-intelligence-policies-guidelines/]
  • 34. Kitsara I: Artificial intelligence and the digital divide: From an innovation perspective. In: Platforms and Artificial Intelligence: The Next Generation of Competences. Springer; 2022: 245–265.
  • 35. Arakpogun EO, Elsahn Z, Olan F, Elsahn F: Artificial intelligence in Africa: Challenges and opportunities. In: The Fourth Industrial Revolution: Implementation of Artificial Intelligence for Growing Business Success. 2021: 375–388.

Decision Letter 0

Timothy Omara

24 Mar 2024

PONE-D-24-01790
Widespread use of ChatGPT and other Artificial Intelligence tools among medical students in Uganda: a cross-sectional study
PLOS ONE

Dear Dr. Nantale,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

ACADEMIC EDITOR: In addition to the reviewers' comments:

1. Data visualization could be improved. Could we present some of the figures using charts other than bar graphs? For bar charts, please indicate the axis titles.

2. Please expand the DISCUSSION. There are new reports on this aspect in other countries, which may make your discussion robust, and probably lead to more meaningful conclusions:
  • Weidener L, Fischer M. Artificial Intelligence in Medicine: Cross-Sectional Study Among Medical Students on Application, Education, and Ethical Aspects. JMIR Med Educ 2024;10:e51247. https://mededu.jmir.org/2024/1/e51247
  • Kapsali M, Livanis E, Tsalikidis C, Oikonomou P, Voultsos P, Tsaroucha A. Ethical Concerns About ChatGPT in Healthcare: A Useful Tool or the Tombstone of Original and Reflective Thinking? Cureus 2024, 16(2):e54759. doi: 10.7759/cureus.54759
  • Heredia-Negrón F, Tosado-Rodríguez EL, Meléndez-Berrios J, Nieves B, Amaya-Ardila CP, Roche-Lima A. Assessing the Impact of AI Education on Hispanic Healthcare Professionals’ Perceptions and Knowledge. Educ Sci 2024, 14:339. https://doi.org/10.3390/educsci14040339
  • McLennan S, Meyer A, Schreyer K, Buyx A (2022) German medical students’ views regarding artificial intelligence in medicine: A cross-sectional survey. PLOS Digit Health 1(10):e0000114. https://doi.org/10.1371/journal.pdig.0000114

3. It is necessary to indicate the potential limitations of the study, just as you indicated the strengths.

4. Your conclusions should be expanded in the broader context, and should preferably give some directions for future research.

Please submit your revised manuscript by May 08 2024 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Timothy Omara

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at 

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and 

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Your ethics statement should only appear in the Methods section of your manuscript. If your ethics statement is written in any section besides the Methods, please move it to the Methods section and delete it from any other section. Please ensure that your ethics statement is included in your manuscript, as the ethics statement entered into the online submission form will not be published alongside your manuscript.

3. We note you have included a table to which you do not refer in the text of your manuscript. Please ensure that you refer to Tables 1 and 2 in your text; if accepted, production will need this reference to link the reader to the Table.

[Note: HTML markup is below. Please do not edit.]

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: No

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: I am pleased to give my thoughts on the article titled 'Widespread Use of ChatGPT and Other Artificial Intelligence Tools Among Medical Students in Uganda: A Cross-Sectional Study.' Overall, it appears to be a promising article of good quality. However, it can be improved.

Title: The title could be improved removing redundant words by specifying the type of AI tools examined in the study. For example, "Widespread use of ChatGPT and Other Artificial Intelligence Tools among Medical Students in Uganda: A Cross-Sectional Study."

Abstract: Consider including specific findings or percentages in the abstract to give readers a clearer understanding of the key results. Also, mentioning the implications of the findings in the abstract would enhance its significance.

Introduction: The introduction effectively sets the context by explaining the significance of AI tools like ChatGPT in medical education. However, it could be strengthened by providing more context on the challenges faced by medical education due to the widespread adoption of AI tools. Consider integrating more recent literature or studies to emphasize the relevance and timeliness of the research.

Materials and Methods: Provide more details on the semi-structured questionnaire used, including the specific questions related to AI tool usage. Describe the process of questionnaire adaptation and validation, if applicable, to ensure the reliability of data collection instruments.

Mention the steps taken to ensure the confidentiality and anonymity of participants' responses.

Results: The results section presents the key findings of the study in a structured manner.

Discussion: The discussion effectively contextualizes the findings within existing literature and highlights their implications. However, the authors should provide a more detailed comparison of the study findings with similar research conducted in other settings or populations.

While the authors discuss the strengths of the study, they should consider discussing potential limitations of the study and their impact on the interpretation of results. Also, propose recommendations for future research or practical interventions based on the study findings.

Conclusion: Consider reiterating the implications of the study findings for medical education policy and practice. Provide clear recommendations for addressing the challenges and opportunities associated with the widespread use of AI tools among medical students. Avoid introducing new information or findings in the conclusion section.

Ethical Approval and Consent to Participate: Consider including information on how participants were informed about the study objectives, risks, and benefits before obtaining their consent. Specify whether any incentives or compensation were provided to participants for their involvement in the study.

General Comments: The article would benefit from thorough proofreading to correct grammatical errors, typos, and formatting inconsistencies.

Reviewer #2: 1. “In order to exploit the use of ChatGPT and other related AI software, there is an urgent need to develop guidelines.” – do the authors mean to say, in order to prevent exploitation?

2. What were the measures taken to reduce social desirability bias in the study? Will students actually acknowledge its use for plagiarism?

3. Are there any guidelines in developed countries / HMICs on how to use AI tools?

4. How can guidelines be formed to limit its misuse? – Will universities / journals check for AI written content?

5. “Almost all 93% (522/564) had ever had about AI tools 155 such as ChatGPT.” – Needs to be rephrased

6. “Most 71%” – most and percentage should be avoided together

7. Results need to be rewritten in a better language

8. “Medical students aged 35 to 46 years were 31% less likely to use AI tools as compared to those aged less than 35 years” – Are undergraduate medical students aged above 35? This is highly unusual across the world. Also, can 13 really be compared against 47 and 77 to draw such a definite conclusion?

9. While the study provides insights on the patterns of Generative AI use, and the need for regulatory guidelines – Can the authors discuss steps taken in other countries to reduce its misuse?

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Dr. Amir Kabunga

Reviewer #2: Yes: Rajmohan Seetharaman

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

Attachment

Submitted filename: Reviewer comments.pdf

pone.0313776.s002.pdf (85.5KB, pdf)
PLoS One. 2025 Jan 9;20(1):e0313776. doi: 10.1371/journal.pone.0313776.r002

Author response to Decision Letter 0


14 Aug 2024

Reviewer’s comment | Response to comment | Line number

Academic Editor

1. Data visualization could be improved. Could we present some of the figures using charts other than bar graphs? For bar charts, please indicate the axis titles.
Response: Thank you for this comment. We have explored the use of pie charts, donut charts, lollipop charts and upset plots without much success. The strength of the current chart is that it shows both absolute numbers on the x-axis and the corresponding percentage in brackets. We kindly request that you suggest a better chart to use. (Line number: N/A)

2. Please expand the DISCUSSION. There are new reports on this aspect in other countries, which may make your discussion robust, and probably lead to more meaningful conclusions.
Response: Thank you, we have revised the discussion based on findings from new articles on Artificial Intelligence in medical education.

3. It is necessary to indicate the potential limitations of the study, just as you indicated the strengths.
Response: Thank you, we have described the limitations of our study.

“One key limitation of our study is the reliance on self-reported data from medical students, which might be subject to social desirability bias. However, to reduce social desirability bias, we used a self-administered questionnaire for data collection” (Lines 280-282)

4. Your conclusions should be expanded in the broader context, and should preferably give some directions for future research.

Response: Thank you, this has been done. (Lines 284-290)

Reviewer #1

Title: The title could be improved by removing redundant words and by specifying the type of AI tools examined in the study. For example, "Widespread use of ChatGPT and Other Artificial Intelligence Tools among Medical Students in Uganda: A Cross-Sectional Study."
Response: Thank you for your comment. We have adjusted our title accordingly. However, the other AI tools used are many and would make the title very lengthy, so we have not included them. (Line number: N/A)

Abstract: Consider including specific findings or percentages in the abstract to give readers a clearer understanding of the key results. Also, mentioning the implications of the findings in the abstract would enhance its significance.
Response: Thank you, this has been done. (Lines 40-55)

Introduction: The introduction effectively sets the context by explaining the significance of AI tools like ChatGPT in medical education. However, it could be strengthened by providing more context on the challenges faced by medical education due to the widespread adoption of AI tools. Consider integrating more recent literature or studies to emphasize the relevance and timeliness of the research.
Response: Thank you, this has been done. (Lines 69-88)

Materials and Methods: Provide more details on the semi-structured questionnaire used, including the specific questions related to AI tool usage. Describe the process of questionnaire adaptation and validation, if applicable, to ensure the reliability of data collection instruments.
Response: Thank you, this has been done, and we have also uploaded the questionnaire we used. (Lines 136-140)

Mention the steps taken to ensure the confidentiality and anonymity of participants' responses.
Response: Thank you, this has been done. (Lines 298-301)

Discussion: The discussion effectively contextualizes the findings within existing literature and highlights their implications. However, the authors should provide a more detailed comparison of the study findings with similar research conducted in other settings or populations.
Response: We have revised the discussion accordingly. (Lines 206-271)

While the authors discuss the strengths of the study, they should consider discussing potential limitations of the study and their impact on the interpretation of results. Also, propose recommendations for future research or practical interventions based on the study findings.
Response: Thank you, we have described the limitations of our study.

“One key limitation of our study is the reliance on self-reported data from medical students, which might be subject to social desirability bias. However, to reduce social desirability bias, we used a self-administered questionnaire for data collection” (Lines 278-281)

Conclusion: Consider reiterating the implications of the study findings for medical education policy and practice. Provide clear recommendations for addressing the challenges and opportunities associated with the widespread use of AI tools among medical students. Avoid introducing new information or findings in the conclusion section.
Response: Thank you, we have revised accordingly.

(Lines 283-289)

Ethical Approval and Consent to Participate: Consider including information on how participants were informed about the study objectives, risks, and benefits before obtaining their consent. Specify whether any incentives or compensation were provided to participants for their involvement in the study.
Response: Thank you for the comment. This has been done.

(Lines 293-300)

General Comments: The article would benefit from thorough proofreading to correct grammatical errors, typos, and formatting inconsistencies.
Response: Thank you, we have proofread the article and corrected the identified grammatical errors and typos. (Line number: N/A)

Reviewer #2

1. “In order to exploit the use of ChatGPT and other related AI software, there is an urgent need to develop guidelines.” – do the authors mean to say, in order to prevent exploitation?
Response: Thank you for your comment. By "exploiting", we mean making use of. We have rephrased it to “AI tools can address faculty shortages, support research and innovation, and advance critical thinking skills. In order to fully exploit these benefits, there is an urgent need to develop institutional AI policies and guidelines, thereby harnessing the advantages while mitigating associated risks in medical education”

2. What were the measures taken to reduce social desirability bias in the study? Will students actually acknowledge its use for plagiarism?
Response: To reduce social desirability bias, we used a self-administered questionnaire. (Line 136)

3. Are there any guidelines in developed countries / HMICs on how to use AI tools?
Response: Yes, there are guidelines in some high-income countries on AI use in education. (Line number: N/A)

4. How can guidelines be formed to limit its misuse? – Will universities / journals check for AI written content?
Response: There are attempts to check for AI-written content. However, most of these tools are not yet validated.

5. “Almost all 93% (522/564) had ever had about AI tools such as ChatGPT.” – Needs to be rephrased
Response: Thank you, this has been rephrased. (Line 162)

6. “Most 71%” – most and percentage should be avoided together
Response: Thank you, this has been rephrased. (Line 163)

7. Results need to be rewritten in a better language
Response: Thank you, this has been done. (Lines 158-204)

8. “Medical students aged 35 to 46 years were 31% less likely to use AI tools as compared to those aged less than 35 years” – Are undergraduate medical students aged above 35? This is highly unusual across the world. Also, can 13 really be compared against 47 and 77 to draw such a definite conclusion?
Response: We understand your concern. However, in Uganda there are undergraduate medical students aged over 35 years. These are mainly students who first complete a medical course at diploma level and later join medical school for an undergraduate bachelor's degree in medicine. (Line number: N/A)

9. While the study provides insights on the patterns of Generative AI use, and the need for regulatory guidelines – Can the authors discuss steps taken in other countries to reduce its misuse?
Response: Thank you, we have discussed these as indicated. Other countries have developed AI policies and guidelines to prevent AI misuse. (Lines 274-280)

Attachment

Submitted filename: Rebuttal letter AI_070424.docx

pone.0313776.s003.docx (22KB, docx)

Decision Letter 1

Timothy Omara

26 Aug 2024

PONE-D-24-01790R1
Widespread use of ChatGPT and other Artificial Intelligence tools among medical students in Uganda: a cross-sectional study
PLOS ONE

Dear Dr. Nantale,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

 Please submit your revised manuscript by Oct 10 2024 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Timothy Omara, PhD

Academic Editor

PLOS ONE

Additional Editor Comments:

Dear authors,

The reviewers have re-assessed your resubmission. However, there are still suggestions that need to be incorporated. In addition to reviewer comments, I suggest taking a closer look at the following aspects of the draft.

1. Table 1 seems not to be properly reported. I would expect that it gives a general overview of the overall sociodemographic characteristics of the respondents (irrespective of whether or not they have ever used ChatGPT or any other such LLM). Please refer to my suggestions in the manuscript file attached.

2. It is also evident that the percentage calculations in Figures 1, 2 and 3 are erroneous. For example, the total percentage in Figure 1 is 118% instead of 100%, and Figure 3 has a total percentage of less than 100%. It would be expected to group academic and non-academic uses of these LLMs together, because this is the collective sum of what students attested to using them for.

3. After these revisions, you may have to redo statistical analysis and change some parts of the abstract, results, discussion and conclusions.

[Note: HTML markup is below. Please do not edit.]

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #2: (No Response)

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #2: Partly

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #2: No

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #2: Thank you for the opportunity to re-review this article. Unfortunately, many of the comments previously provided to the authors have not been adequately addressed. The authors should carefully reconsider each of my comments and provide more thorough responses, as the current revisions appear to be only superficial attempts to resolve the issues with the manuscript.

1. “In order to exploit the use of ChatGPT and other related AI software, there is an urgent need to develop guidelines.” – do the authors mean to say, in order to prevent exploitation?
Authors' response: Thank you for your comment. With exploiting, we refer to making use of. We have rephrased it to “AI tools can address faculty shortages, support research, and innovation, and advance critical thinking skills. In order to fully exploit these benefits, there is an urgent need to develop institutional AI policies and guidelines, thereby harnessing the advantages while mitigating associated risks in medical education”

Reviewer's comment on the response: Please replace the word “exploit” with a synonym.

2. What were the measures taken to reduce social desirability bias in the study? Will students actually acknowledge its use for plagiarism?
Authors' response: To reduce social desirability bias, we used a self-administered questionnaire. (Line 136)

Reviewer's comment on the response: Please explain how a self-administered questionnaire can reduce social desirability bias. E.g. “Anonymity, reduced social pressure, privacy, lack of non-verbal cues, and standardized responses in self-administered questionnaires help minimize social desirability bias, leading to more accurate and reliable data collection.”

3. Are there any guidelines in developed countries / HMICs on how to use AI tools?
Authors' response: Yes, there are guidelines in some high-income countries on AI use in education. (Line number: N/A)

Reviewer's comment on the response: If yes, then this should be mentioned and cited in the manuscript, including the names of the guidelines.

4. How can guidelines be formed to limit its misuse? – Will universities / journals check for AI written content?
Authors' response: There are attempts to check for AI-written content. However, most of these tools are not yet validated.

Reviewer’s comment on the response: Give examples and cite them in the manuscript.

5. “Almost all 93% (522/564) had ever had about AI tools such as ChatGPT.” (line 155) – Needs to be rephrased.

Author response: Thank you, this has been rephrased. (line 162)

Reviewer’s comment on the response: The rephrased language is not up to the mark.

6. “Most 71%” – “most” and a percentage should be avoided together.

Author response: Thank you, this has been rephrased. (line 163)

Reviewer’s comment on the response: This is still there in the abstract: “most (72.2%) had ever used ChatGPT”.

7. Results need to be rewritten in better language.

Author response: Thank you, this has been done. (lines 158–204)

Reviewer’s comment on the response: This has been adequately addressed.

8. “Medical students aged 35 to 46 years were 31% less likely to use AI tools as compared to those aged less than 35 years” – Are undergraduate medical students aged above 35? This is highly unusual across the world. Also, can 13 really be compared against 47 and 77 to draw such a definite conclusion?

Author response: We understand your concern. However, in Uganda there are undergraduate medical students aged more than 35 years. These are mainly students who completed a medical course at diploma level and later joined medical school for an undergraduate bachelor’s degree.

Reviewer’s comment on the response: This needs to be explained in the discussion, to avoid conflicting remarks by readers.

9. While the study provides insights into the patterns of Generative AI use and the need for regulatory guidelines, can the authors discuss steps taken in other countries to reduce its misuse?

Author response: Thank you, we have discussed these as indicated. Other countries have developed AI policies and guidelines to prevent AI misuse. (lines 274–280)

Reviewer’s comment on the response: This discussion is very superficial. It needs a much more in-depth discussion – examples of universities and guideline names also need to be cited. The authors need to describe the policies in depth.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #2: Yes: Rajmohan Seetharaman

**********


While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

Attachment

Submitted filename: StudentAI_descriptive_130824_AE.docx

pone.0313776.s004.docx (243.3KB, docx)
PLoS One. 2025 Jan 9;20(1):e0313776. doi: 10.1371/journal.pone.0313776.r004

Author response to Decision Letter 1


7 Oct 2024

Thank you for reviewing our manuscript “PONE-D-24-01790R1; Widespread use of ChatGPT and other Artificial Intelligence tools among medical students in Uganda: a cross-sectional study”

Below is our point-by-point response to each comment.

Thank you!

Each point below gives the reviewer’s comment, the authors’ response, and the relevant manuscript line number.

Editor

1. Table 1 seems not to be properly reported. I would expect that it gives a general overview of the overall sociodemographic characteristics of the respondents (irrespective of whether or not they have ever used ChatGPT or any other such LLM). Please refer to my suggestions in the manuscript file attached.

Author response: Thank you for the suggestion. We have revised the table as suggested.

2. It is also evident that the percentage calculations in Figures 1, 2 and 3 are erroneous. For example, the total percentage in Figure 1 is 118% instead of 100%. Figure 3 has a total percentage of less than 100%. It would be expected to group academic and non-academic uses of these LLMs together, because this is the collective sum of what students attested to using them for.

Author response: Thank you for your comment. For Figures 1, 2 and 3, the percentages do not sum to 100% because the questions on uses of LLMs were multiple-choice; one respondent could select more than one use.

3. After these revisions, you may have to redo the statistical analysis and change some parts of the abstract, results, discussion and conclusions.

Reviewer #2

1. “In order to exploit the use of ChatGPT and other related AI software, there is an urgent need to develop guidelines.” – Do the authors mean to say, in order to prevent exploitation?

Author response: Thank you for your comment. By “exploiting”, we mean “making use of”. We have rephrased it to: “AI tools can address faculty shortages, support research and innovation, and advance critical thinking skills. In order to fully exploit these benefits, there is an urgent need to develop institutional AI policies and guidelines, thereby harnessing the advantages while mitigating associated risks in medical education.”

Reviewer’s comment on the response: Please replace the word “exploit” with a synonym.

Author response: We have replaced “exploit” with a synonym, “utilize”. (line 85)

2. What were the measures taken to reduce social desirability bias in the study? Will students actually acknowledge its use for plagiarism?

Author response: To reduce social desirability bias, we used a self-administered questionnaire. (line 136)

Reviewer’s comment on the response: Please explain how a self-administered questionnaire can reduce social desirability bias, e.g., “Anonymity, reduced social pressure, privacy, lack of non-verbal cues, and standardized responses in self-administered questionnaires help minimize social desirability bias, leading to more accurate and reliable data collection.”

Author response: Thank you. We used a self-administered questionnaire for data collection; the questionnaires were anonymous and students were identified only by study numbers, which reduced social pressure while answering the questions.

3. Are there any guidelines in developed countries / high-income countries (HICs) on how to use AI tools?

Author response: Yes, there are guidelines in some high-income countries on AI use in education.

Reviewer’s comment on the response: If yes, then this should be mentioned and cited in the manuscript, including the names of the guidelines.

Author response: Yes, this has been updated:

“Universities in high-income countries have come up with guidelines, including Guidelines for the Use of Artificial Intelligence in University Courses by Juan David Gutiérrez at Universidad del Rosario; Initial guidelines for the use of Generative AI tools at Harvard by Harvard University; A Guide for Students: Should I use AI? by Ulster University; and Student guidance on using Generative Artificial Intelligence tools ethically for study by the University of Birmingham.” (lines 280–284)

4. How can guidelines be formed to limit its misuse? Will universities / journals check for AI-written content?

Author response: There are attempts to check for AI-written content. However, most of these tools are not yet validated.

Reviewer’s comment on the response: Give examples and cite them in the manuscript.

Author response: This has been done:

“The guidelines focus on the need to promote AI literacy, the need to cite AI sources, and the limitations of AI. The guidelines also emphasize informed, transparent, ethical and responsible use of high-risk AI (generative AI such as ChatGPT and stable diffusion AI such as DALL-E 2) in education.” (lines 284–296)

5. “Almost all 93% (522/564) had ever had about AI tools such as ChatGPT.” (line 155) – Needs to be rephrased.

Author response: Thank you, this has been rephrased. (line 162)

Reviewer’s comment on the response: The rephrased language is not up to the mark.

Author response: Thank you for the observation; this has been rephrased and now reads:

“Almost all, 93% (522/564), had ever heard about AI tools such as ChatGPT.” (line 172)

6. “Most 71%” – “most” and a percentage should be avoided together.

Author response: Thank you, this has been rephrased. (line 163)

Reviewer’s comment on the response: This is still there in the abstract: “most (72.2%) had ever used ChatGPT”.

Author response: This has been revised as suggested. (line 41)

7. Results need to be rewritten in better language.

Author response: Thank you, this has been done. (lines 158–204)

Reviewer’s comment on the response: This has been adequately addressed.

Author response: Thank you.

8. “Medical students aged 35 to 46 years were 31% less likely to use AI tools as compared to those aged less than 35 years” – Are undergraduate medical students aged above 35? This is highly unusual across the world. Also, can 13 really be compared against 47 and 77 to draw such a definite conclusion?

Author response: We understand your concern. However, in Uganda there are undergraduate medical students aged more than 35 years. These are mainly students who completed a medical course at diploma level and later joined medical school for an undergraduate bachelor’s degree.

Reviewer’s comment on the response: This needs to be explained in the discussion, to avoid conflicting remarks by readers.

Author response: Thank you for the suggestion; we have explained this in the discussion:

“In Uganda, there are undergraduate medical students aged more than 35 years. These are mainly students who completed a medical course at diploma level and later joined medical school for an undergraduate bachelor’s degree.”

9. While the study provides insights into the patterns of Generative AI use and the need for regulatory guidelines, can the authors discuss steps taken in other countries to reduce its misuse?

Author response: Thank you, we have discussed these as indicated. Other countries have developed AI policies and guidelines to prevent AI misuse. (lines 274–280)

Reviewer’s comment on the response: This discussion is very superficial. It needs a much more in-depth discussion – examples of universities and guideline names also need to be cited. The authors need to describe the policies in depth.

Author response: We have enriched the discussion as suggested:

“Universities in high-income countries have come up with guidelines, including Guidelines for the Use of Artificial Intelligence in University Courses by Juan David Gutiérrez at Universidad del Rosario; Initial guidelines for the use of Generative AI tools at Harvard by Harvard University; A Guide for Students: Should I use AI? by Ulster University; and Student guidance on using Generative Artificial Intelligence tools ethically for study by the University of Birmingham.

The guidelines focus on the need to promote AI literacy, the need to cite AI sources, and the limitations of AI. The guidelines also emphasize informed, transparent, ethical and responsible use of high-risk AI (generative AI such as ChatGPT and stable diffusion AI such as DALL-E 2) in education. Informed use requires that, prior to using the tool, the student research who or what company developed it, how it was developed, how it works, what functions it can perform, and what limitations and/or risks it presents. Transparent use includes indicating in detail which tool the student used and how he/she used it. Ethical use includes ensuring that one can distinguish what was written or produced directly by the student from what was generated by an AI tool. Responsible use emphasizes that the use of these AI tools should be limited to the early stages of the student’s work, to inspire or suggest directions, not to produce content that will later be included in his/her deliverables. The guidance also includes examples of what AI can and cannot do well.” (lines 284–296)

Decision Letter 2

Timothy Omara

31 Oct 2024

Widespread use of ChatGPT and other Artificial Intelligence tools among medical students in Uganda: a cross-sectional study

PONE-D-24-01790R2

Dear Dr. Nantale,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice will be generated when your article is formally accepted. Please note, if your institution has a publishing partnership with PLOS and your article meets the relevant criteria, all or part of your publication costs will be covered. Please make sure your user information is up-to-date by logging into Editorial Manager and clicking the ‘Update My Information’ link at the top of the page. If you have any questions relating to publication charges, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Timothy Omara

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #2: (No Response)

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #2: Thank you for addressing all requested revisions. The manuscript now includes a suitable synonym for "exploit," clarifies measures taken to reduce social desirability bias, cites specific AI guidelines from high-income countries, and provides examples of tools to detect AI content. Language improvements enhance clarity, and the discussion now explains age-related AI tool usage and international regulatory policies. These updates meet the review requirements, and I recommend the manuscript for acceptance.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #2: Yes: Rajmohan Seetharaman

**********

Acceptance letter

Timothy Omara

5 Nov 2024

PONE-D-24-01790R2

PLOS ONE

Dear Dr. Nantale,

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now being handed over to our production team.

At this stage, our production department will prepare your paper for publication. This includes ensuring the following:

* All references, tables, and figures are properly cited

* All relevant supporting information is included in the manuscript submission,

* There are no issues that prevent the paper from being properly typeset

If revisions are needed, the production department will contact you directly to resolve them. If no revisions are needed, you will receive an email when the publication date has been set. At this time, we do not offer pre-publication proofs to authors during production of the accepted work. Please keep in mind that we are working through a large volume of accepted articles, so please give us a few weeks to review your paper and let you know the next and final steps.

Lastly, if your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

If we can help with anything else, please email us at customercare@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Timothy Omara

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 Dataset. Anonymised data set.

    (XLSX)

    pone.0313776.s001.xlsx (379.4KB, xlsx)
    Attachment

    Submitted filename: Reviewer comments.pdf

    pone.0313776.s002.pdf (85.5KB, pdf)
    Attachment

    Submitted filename: Rebuttal letter AI_070424.docx

    pone.0313776.s003.docx (22KB, docx)
    Attachment

    Submitted filename: StudentAI_descriptive_130824_AE.docx

    pone.0313776.s004.docx (243.3KB, docx)

    Data Availability Statement

    The datasets used have been uploaded as supplementary information.

