BMC Medical Education
2025 Feb 8;25:209. doi: 10.1186/s12909-025-06807-6

Exploring medical students’ intention to use ChatGPT in a programming course: a grounded theory study in China

Chen Wang 1, Changqi Xiao 2, Xuejiao Zhang 2, Yingying Zhu 2, Xueqing Chen 2, Yilin Li 3, Huiying Qi 1
PMCID: PMC11806607  PMID: 39923098

Abstract

Background

In interdisciplinary general education courses, medical students face the daunting challenge of learning programming due to academic pressure, cognitive biases, and differences in thinking patterns. ChatGPT offers an effective way to acquire knowledge and to improve both learning efficiency and learning quality.

Objective

To explore whether ChatGPT can assist medical students in learning programming, it is necessary to investigate their experience and perception of using ChatGPT and to identify the factors that influence their willingness to use it.

Methods

Drawing on the grounded theory research paradigm, this paper constructs a model of the factors influencing medical students’ willingness to use ChatGPT in programming courses, based on an analysis of interview data from 30 undergraduate medical students. It analyzes and discusses medical students’ cognition of ChatGPT in programming learning and the factors influencing their willingness to use it.

Results

The willingness to use ChatGPT in programming learning is divided into three types based on the students’ subjective degree of use: active use, neutral use, and negative use. It is also found that individual factors, technical factors, information factors, and environmental factors are four important dimensions affecting the willingness to use ChatGPT.

Conclusions

Based on the analysis of influencing factors, we propose strategies and suggestions such as preventing risks and emphasizing ethics education, cultivating critical thinking and building a case library, and personalizing teaching to enhance core programming literacy.

Supplementary Information

The online version contains supplementary material available at 10.1186/s12909-025-06807-6.

Keywords: ChatGPT, Use intention, Medical students, Programming course, Grounded theory

Introduction

With the continuous development of the information society and the application of new technologies such as artificial intelligence and big data in the medical field, modern medical research needs to be integrated with and supported by information technology and methods. In this context, China’s training programs for new medical talents require medical students to be able to use interdisciplinary knowledge to solve frontier problems in the medical field, in order to cultivate versatile medical professionals for the future. To help medical students use information technology knowledge and skills to adapt to new developments in medicine in their subsequent clinical and research work, medical colleges have set up interdisciplinary ability training courses. In some scenarios in the medical field, a basic understanding of information technology concepts and the use of existing tools (e.g., using Access for data storage and management and SPSS for data analysis) is sufficient for medical students’ study and work. Still, learning computer programming is crucial in other, more in-depth or specialized medical research fields [1, 2]. Programming gives medical students greater flexibility and autonomy, helps them deeply understand medical information management, disease prediction model construction, and big data analysis, and enhances their independence and innovation in clinical decision-making, research design, and result interpretation [3–5].

The university computer course offered by medical colleges in China is an interdisciplinary general course for first-year undergraduate students majoring in medicine. The course content focuses on learning basic computer knowledge and programming, opening the door for medical students to effectively communicate and integrate the two major disciplines of medicine and computer science for the first time. However, learning programming in interdisciplinary general courses remains a tough challenge for many medical students, primarily due to the following reasons:

  1. The heavy workload in medical colleges requires balancing medical professional courses with programming learning, which may increase learning pressure and time-management difficulties, leaving medical students limited opportunities to engage with programming and understand it deeply.

  2. Some medical students lack initiative and urgency in learning, believing that programming has little relationship with their professional field.

  3. The logical and abstract thinking required for programming differs from the empirical study of traditional medical education. Medical students need to change their mindset and learning approach to understand and master programming concepts and knowledge, which makes many of them feel challenged when learning programming.

With the rapid development of artificial intelligence technology, AI-empowered medical education reform has attracted the attention of more and more researchers. In particular, ChatGPT, released by OpenAI as a product marking a new stage in the development of artificial intelligence, has become a hot topic of exploration in medical education research. Research suggests that ChatGPT has the potential to assist learning, providing students with a wide range of knowledge and helping them better understand course content [6, 7]. ChatGPT can generate explanations of various issues based on students’ needs [8, 9]. At the same time, ChatGPT’s interactive features can provide students with an interactive learning experience [10], increase their participation by prompting them to ask further questions [11], and provide real-time feedback, so that it can adapt learning content to students’ performance and preferences and ensure a targeted, efficient learning experience [12].

While paying attention to the educational innovation brought by ChatGPT in the field of medical education, scholars also consider the risks and challenges faced when ChatGPT is applied in the field of medical education, so as to objectively evaluate the pros and cons of the application of ChatGPT in medical education. For example, students submitting articles generated by ChatGPT as their own assignments constitutes academic misconduct [13]. In the learning process, students who over-rely on ChatGPT may end up losing the ability to generate original ideas [8], learn and write, and may stifle critical thinking and creativity [14]. While ChatGPT can provide useful feedback and advice, it may not provide the same level of emotional support and guidance as a human teacher [15]. In addition, other learning models such as group learning allow for collaborative problem solving, peer-to-peer feedback, and the exchange of different points of view, a kind of interpersonal interactive discussion and exchange that ChatGPT cannot fully replicate [16]. Therefore, the application of ChatGPT in the educational process may lead to a decrease in interpersonal interaction [17].

The application opportunities and risks of ChatGPT in medical education coexist, so exploring its user acceptance is crucial to ensuring the effective application of the technology. To gain a deeper understanding of the general patterns of user acceptance of new technologies, researchers often use the TAM (Technology Acceptance Model) and the UTAUT (Unified Theory of Acceptance and Use of Technology) as theoretical guidance. TAM was proposed by Davis in 1989 [18]. This model holds that individuals’ willingness to adopt new technologies is mainly influenced by their perceived usefulness and perceived ease of use. TAM provides a theoretical framework for exploring the motivation for and degree of acceptance of new technologies, and it has been widely applied and developed in subsequent studies [19, 20]. TAM helps elucidate public perceptions of the potential benefits and challenges of learning with a particular technology [21], suggesting that it has an important role in analyzing how ChatGPT can enhance the learning experience and make it more engaging and acceptable to potential learners. The UTAUT model was proposed by Venkatesh et al. in 2003 [22] and was developed on the basis of TAM. It not only contains the core concepts of TAM but also integrates additional influencing factors and moderating variables, providing a more comprehensive and in-depth theoretical framework. It can more accurately predict users’ willingness to use new technologies and their actual use behaviors from a variety of perspectives, especially in the field of AIGC (Artificial Intelligence Generated Content) [23].

Based on the above discussion, ChatGPT’s ability to interpret natural-language requests and generate code has sparked speculation in medical education about whether it can be used effectively to assist in learning programming (such as completing basic programming tasks, or helping to write or debug code for complex projects), reducing the extra time and effort medical students spend on programming and offering a potential path to solving programming learning problems, while avoiding the loss of medical students’ independent problem-solving ability and critical thinking. As far as we know, medical students’ views on ChatGPT in programming learning have not been explored in the literature. Therefore, based on the above theoretical models, this study undertook a grounded theory study among medical students to explore their experience with and views on ChatGPT in programming learning, and to analyze in depth the factors influencing their willingness to use it. This study not only helps to understand current medical students’ general view of ChatGPT, but also provides references for the effective integration of ChatGPT into medical education.

Methods

Study design

This study adopted the classical grounded theory proposed by Glaser and Strauss [24], aiming to gradually summarize and explain the willingness of medical students to use ChatGPT in programming learning from real situations. This method emphasized building theories from the bottom up, by encoding and continuously comparing the collected raw data multiple times, allowing researchers to maintain sufficient theoretical sensitivity in the evolution of concepts and categories, thereby revealing the influencing factors of medical students using ChatGPT assisted programming.

Based on the above research ideas, the research design was based on the following five elements: (1) providing information for the sampling strategy according to the theoretical framework, so that the study could include medical students with rich experience and diverse perspectives. (2) Research participants were able to provide rich information about the phenomenon being investigated. (3) Data collection and analysis were carried out synchronously until new information could not generate additional theoretical extensions and reached saturation. (4) In the process of gradually clarifying theoretical insights, data collection might have been modified. (5) The research involved continuous comparison, integration, and reflection, ultimately forming a theoretical model rooted in empirical data.

Data source

The specific process of designing the interview outline is as follows: First, based on relevant research findings, TAM and UTAUT theories, a preliminary interview outline was designed. Then, to test the validity and clarity of the interview outline, five users were randomly selected for preliminary interviews, and based on the feedback from these preliminary interviews, necessary revisions were made to the interview outline. Finally, integrating the opinions and suggestions from the initial respondents, a formal interview outline was formed.

We conducted an open recruitment on the university’s online teaching platform. The inclusion criterion was being a first-year medical undergraduate student who took the university computer course for medical students from September 2023 to January 2024. No restrictions were placed on medical students’ majors, to allow for a broader collection of medical student perspectives. We contacted students via email, conducted semi-structured face-to-face interviews, and administered a brief demographic survey. We compensated participants with 50 RMB for their time.

Background

The computer course in medical colleges is mainly offered to first-year medical undergraduate students and is a compulsory course with a total of 72 h. The course mainly teaches basic computer knowledge and programming with Python, supplemented by extended content such as intelligent medicine and medical big data analysis (Appendix 1: the learning outline and objectives of the course). Through the explanation and analysis of a series of medical application cases and programming practice, the course aims to help medical students improve their information literacy, establish computational thinking to analyze and solve medical professional problems, master frontier interdisciplinary knowledge, and cultivate interdisciplinary comprehensive ability.
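To give a concrete sense of the medical application cases such a course pairs with Python instruction, the following is a minimal illustrative sketch of a beginner-level exercise of the kind described; the specific task, function name, and data are hypothetical and not taken from the course syllabus:

```python
# Hypothetical beginner exercise: compute body mass index (BMI) for a small
# patient dataset and flag values in the overweight range (BMI >= 25).

def bmi(weight_kg, height_m):
    """Return body mass index: weight in kg divided by height in m, squared."""
    return weight_kg / height_m ** 2

# Illustrative (fabricated) patient records, as a list of dictionaries.
patients = [
    {"id": "P01", "weight_kg": 70.0, "height_m": 1.75},
    {"id": "P02", "weight_kg": 88.5, "height_m": 1.68},
    {"id": "P03", "weight_kg": 54.2, "height_m": 1.62},
]

for p in patients:
    value = bmi(p["weight_kg"], p["height_m"])
    category = "overweight" if value >= 25 else "normal or below"
    print(f'{p["id"]}: BMI {value:.1f} ({category})')
```

An exercise like this combines a simple formula, a basic data structure, iteration, and conditional logic, matching the course goal of applying computational thinking to medical data.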

In the first class, a questionnaire survey of medical students’ programming skills was conducted. The questionnaire divided programming skills into three levels, and students assessed themselves. In class, students used the free version of ChatGPT (GPT-3.5 Turbo) to assist with programming learning. Immediately after the course ended, participants were recruited and interviews were conducted.

Open questions

The grounded theory research method generally obtains original data through interviews [25]. The interviews focused on medical students’ cognition of and willingness to use ChatGPT in programming learning, and we created a semi-structured interview guide in Chinese (Appendix 2).

Data collection

Each participant was interviewed once, for approximately 30 min. The entire interview was conducted in a semi-structured format covering all interview questions. Audio recordings were made throughout with the consent of the respondents. Finally, the recordings were transcribed into original text data for grounded theory coding analysis.

All participants signed informed consent forms and approved the summarized interview content without changes. When participants’ quotes were used to illustrate the results, each participant agreed to the quotation of their words in this study. Anonymity was maintained throughout.

Data analysis

Before coding the interview data, a rigorous screening process was conducted based on two primary criteria: (1) the clarity of the descriptions and (2) the relevance of the descriptions to the topic. This process was completed by CX and XZ, who also integrated similar descriptions for subsequent analysis. To ensure comprehensive familiarity and sensitivity to the data, the researchers repeatedly read through the records and descriptions, thereby minimizing the risk of missing key information. The coding process followed the three-step procedure (open coding, axial coding, and selective coding) described by Strauss and Corbin [26] and underwent theoretical saturation testing. The analysis steps are shown in Table 1 below:

Table 1.

Coding analysis steps

Step | Task
1. Open coding analysis | Conduct initial organization, parsing, and extraction of the original corpus; extract and define concepts from it; then categorize the concepts.
2. Axial coding analysis | Based on the results of open coding, further analyze the intrinsic connections between the initial categories, identify their hierarchical relationships, and gradually extract the main categories through continuous clustering and naming.
3. Selective coding | On the basis of open and axial coding, combined with the core category of the research, further determine the relationship between the main categories and the core category.
4. Theoretical saturation test | Theoretical saturation means that analysis of new raw data no longer generates new concepts, categories, or relationships between categories; it verifies the reliability and consistency of the three-level coding results.

CX and XZ coded the interview text separately. Throughout the coding process, any disagreements were resolved through discussion between CX and XZ, with CW serving as an arbitrator if consensus could not be reached; in practice, all discussions led to resolutions, and CW agreed with the coding decisions. Data analysis was supported by NVivo 12.0 software.

Result

Quantitative results

Participants in this study were 30 first-year medical undergraduate students who took the university computer course for medical students from September 2023 to January 2024, with 18 females (60%) and 12 males (40%) from different medical majors. Participants differed in the programming-related knowledge and skills they possessed before taking the course. Table 2 shows the demographic characteristics of all participants.

Table 2.

Demographic characteristics

Demographic information, % (n)
Gender
 Male 40.0 (12)
 Female 60.0 (18)
Specialty
 Clinical medicine 33.3 (10)
 Basic medicine 16.7 (5)
 Preventive medicine 20.0 (6)
 Pharmaceutical sciences 20.0 (6)
 Nursing science 10.0 (3)
Previous programming experience/level
 Have certain programming skills 6.7 (2)
 Preliminary programming knowledge 23.3 (7)
 No programming experience 70.0 (21)

Open coding result

During categorization, concept refinement was achieved by integrating initial concepts and excluding invalid statements. For example, when discussing the content generated by ChatGPT, respondents mentioned aspects such as “accuracy of information,” “whether the information meets needs,” and “whether the information is outdated,” which were collectively categorized as “information quality.” Open coding yielded 26 effective initial concepts, and further association and clustering produced 10 independent categories; the specific information is shown in Table 3.

Table 3.

Open coding analysis results

Original statement Initial concepts Independent categories
I’ve never been optimistic about ChatGPT, and not using it makes you less likely to be misled. Cognitive interest Willingness to accept technology
I think ChatGPT will probably save us a lot of time in programming, which is great news for medical students as we can devote more energy to medical research. Emotional response
Using ChatGPT to assist programming allows me to deepen my memory and expand my knowledge in error correction, and gradually makes me find the fun and confidence of programming. Increased confidence Perceived benefits
ChatGPT is my go-to “teaching assistant” who clearly answers questions I have when I don’t understand some syntax or don’t have programming mindsets. Improved programming ability
I’ve found myself leaning too heavily on ChatGPT at some point, and it worries me that this dependence may be holding me back from personal growth and improvement. Psychological risk Perceived risks
Asking ChatGPT to write programs without serious thinking is a natural way to cripple programming skills, or to stop people from improving programming skills. Ability development risk
I can communicate with ChatGPT in natural language without having to master complex programming jargon or syntax. Natural language interaction Technology advantage
ChatGPT’s ability to provide code examples with comments makes it easier for me to follow the program. Code generation and interpretation
Based on the questions I ask and the code I write, ChatGPT will recommend suitable materials and exercises to help me learn more efficiently and personally. Personalized learning
Open AI’s ChatGPT 4 is not available for free. Payment limit Technical barriers
ChatGPT works only when a VPN is used. Accessibility
Some ChatGPT tools may have a limit on using times. For example, some tools may allow users to use a certain number of queries for free, after which they need to pay or meet certain conditions to continue using them. Frequency limit
Sometimes when I ask ChatGPT programming questions, there is a delay and it doesn’t give me an immediate answer. Response speed
Excessively relying on ChatGPT to complete programming tasks, or even directly using it to generate code or solutions, could involve plagiarism and cheating. This is not only against the principle of academic integrity, but may also have a negative impact on our learning and growth. Academic integrity Technical ethics
In medical research, we often need to deal with the patient’s personal information and health data which are extremely sensitive. However, some ChatGPT tools may not provide adequate data privacy and security protection, leading to the risk of user data leakage. Data privacy security
ChatGPT may sometimes provide codes or recommendations based on incomplete or biased data, adversely affecting our medical research. Data bias
Once when I was writing a program for medical data analysis, ChatGPT gave some advice, but I later found that the advice had some logical errors and was not accurate. I think this is because ChatGPT doesn’t know enough about some specialized programming problems or domains, or it doesn’t fully understand my specific needs. Accuracy Information quality
Sometimes when I’m looking for a solution to a specific programming problem, ChatGPT’s answer, while syntactically and logically sound, doesn’t exactly match the problem I’m actually trying to solve. Correlation
When I asked for information about the latest programming techniques, ChatGPT’s answers seemed outdated and it didn’t provide the latest information I needed. Timeliness
The clearer and more detailed the description of the programming problem, and the more standardized the prompt words, the more targeted and accurate the answer ChatGPT can provide. Prompt words specification Information acquisition
ChatGPT sometimes gives programming statements that are beyond my learning scope and difficult to understand. Result presentation
I saw on the Xiao Hong Shu and BiliBili websites that learning bloggers recommend using ChatGPT to solve programming problems efficiently, and I became interested in ChatGPT and what this powerful technology could do. Internet publicity Social impact
I found that my classmates could complete their homework quickly with the help of ChatGPT, so I also tried to use it. Classmates’ influence
ChatGPT can provide a better, shorter, and more efficient program. It lets us learn other ways which may be better in the process of independent programming. Extended learning Usage scenarios
During examination review, I sometimes forget basic knowledge points I learned before, and forgetting even a little can cause the whole program to keep reporting errors. Asking ChatGPT directly greatly shortened the time I spent searching through the course slides. Examination review
When the program doesn’t work properly, ChatGPT can quickly help me find the error and fix it. Diagnosis and correction of errors

Axial coding result

Through the analysis and correlation of 10 independent categories, this paper eventually summarized five main categories: individual factors, technical factors, information factors, environmental factors, and willingness to use. Detailed information is shown in Table 4.

Table 4.

Axial coding analysis results

Main categories Independent categories Meaning
Individual factors Perceived risks Subjective judgment of individual users on risk characteristics and severity after use.
Perceived benefits Actual benefits experienced by individual users after use.
Willingness to accept technology Individual users’ acceptance of the technology and willingness to use it.
Technical factors Technical barriers Difficulties and problems that users encounter when using ChatGPT.
Technical advantages The leading degree of ChatGPT compared with similar technologies.
Technical ethics The ethical relationship between technology and human and society.
Information factors Information quality Objective evaluation of the content of answers given by ChatGPT.
Information acquisition Users use ChatGPT to obtain the required information.
Environmental factors Usage scenarios Combination of various environmental factors or specific scenarios when using ChatGPT.
Social influence The influence of social factors on users’ decisions and behaviors.
Willingness to use Use positively Accept ChatGPT and have a positive intention to use it.
Use neutrally Unclear attitude towards ChatGPT and the willingness to use it is not obvious.
Use negatively Do not accept ChatGPT and have a negative intention to use it.

Selective coding result

Based on relevance and comparative analysis of five main categories: individual factors, technical factors, information factors, environmental factors and willingness to use, the core category of “influencing factors of medical students’ willingness to use ChatGPT in programming learning” was finally determined. The four main categories of individual factors, technical factors, information factors, and environmental factors had an impact on the acceptance of medical students’ use of ChatGPT assisted programming. The specific information is shown in Table 5.

Table 5.

Selective coding analysis results

Relationship structure Definition of relationship structure Representative statement

Individual factors → Willingness to use

Individual factors such as perceived risks, perceived benefits, and willingness to accept technology are important factors that affect users’ willingness to use. There is a causal relationship between individual factors and willingness to use.

1. Using ChatGPT may make me rely on it. Too much dependence will weaken my thinking and programming ability, which will affect my willingness to use it.

2. Using ChatGPT to assist programming improved my learning confidence and programming ability, and benefited me a lot.

3. I am open-minded about ChatGPT-assisted programming because, after all, programming is a challenge for me. If there is a technology that can help me better understand and program, I would gladly accept it.

Technical factors → Willingness to use

Technical factors such as technical advantages, technical ethics, and technical barriers are important factors affecting users’ willingness to use, and there is a causal relationship between technical factors and users’ willingness to use.

1. ChatGPT has obvious technical advantages and can solve programming problems and provide personalized learning for me. I am willing to use it.

2. ChatGPT may challenge the existing social order and social ethics, so I should take this into account.

3. Difficulties accessing ChatGPT, limits on usage frequency, and usage costs make me worry.

Information factors → Willingness to use

Information quality and information acquisition are important factors affecting users’ willingness to use, and there is a causal relationship between information factors and willingness to use.

1. ChatGPT may provide imprecise programming information or advice that misleads me, which makes me worried.

2. The code given by ChatGPT is beyond my knowledge, which makes it hard for me to understand and influences my willingness to use it.

Environmental factors → Willingness to use

Social influence and usage scenarios are important factors affecting users’ willingness to use, and there is a causal relationship between environmental factors and willingness to use.

1. At first, through online publicity, I learned that ChatGPT is very intelligent. I was interested, so I tried it and found that it is indeed very useful.

2. Sometimes to complete programming assignments, sometimes to broaden my knowledge, I want to give it a try.

Construction of the influencing factors model framework of willingness to use

Based on the coding analysis results of grounded theory, this study constructed a model and analysis framework of the factors influencing medical students’ willingness to use ChatGPT in programming learning. The model framework comprehensively considers the influence of four dimensions: individual factors, technical factors, information factors, and environmental factors, and provides strong theoretical support for a deeper understanding of medical students’ willingness to use ChatGPT in programming learning. The specific information is shown in Fig. 1.

Fig. 1. The influencing factors model framework of willingness to use

Discussion

The survey results on the use of ChatGPT in programming learning among medical students show that each participant had a different cognition of ChatGPT, resulting in different willingness to use ChatGPT. The willingness of medical students to accept using ChatGPT in programming learning is mainly affected by four dimensions: individual factors, technical factors, information factors, and environmental factors.

Individual factors

In this study, a causal relationship between individual factors and willingness to use was identified, which is consistent with the performance expectancy factor in UTAUT theory; that is, whether individuals think ChatGPT can help in learning programming.

The interview results reveal that the higher medical students’ cognitive interest in ChatGPT, the more likely they are to actively try and continue to use it, resulting in a higher willingness to use ChatGPT in programming learning; conversely, students with low cognitive interest resist using ChatGPT, which is consistent with the views of some scholars [27, 28]. The more positive emotional reactions (e.g., excitement, satisfaction) respondents mentioned, the more likely they were to use ChatGPT in their programming learning.

More than one-third of the respondents mentioned that using ChatGPT in programming learning enhanced their learning confidence or improved their programming ability; these benefits brought positive experiences to medical students, thereby increasing their willingness to use ChatGPT in programming learning.

Attitude towards ChatGPT is significantly affected by perceived risk, and use intention is directly affected by attitude, which is consistent with the view of a previous study [29]. Some respondents expressed concern that they would become psychologically dependent on ChatGPT, affecting their ability to think and solve programming problems, and ultimately impairing their future growth and development.

According to the individual factor analysis, ChatGPT shows good technology and functionality in medical students’ programming learning and meets some user expectations. However, the risks brought by ChatGPT make medical students hesitant and worried when using it, which educators should also pay attention to.

Technical factors

The research results show that technical factors are important influences on medical students’ willingness to use ChatGPT when learning programming, which is consistent with TAM theory.

In terms of technical advantages, compared with other artificial intelligence technologies, ChatGPT can provide users with a personalized learning experience through natural language interaction, offering targeted guidance and help according to an individual’s learning progress and difficulties. More than half of the respondents said that this personalization makes solving their own programming problems during programming learning more efficient. Participants’ evaluations of the practical benefits ChatGPT can bring in programming learning, which in turn affect their adoption of the technology, are in line with TAM theory.

Research shows that attitude towards ChatGPT is affected by price consciousness, and willingness to use is directly affected by attitude [30]. This study found that, among technical barriers, the problems and difficulties encountered when using ChatGPT, including limited access, limited usage frequency (after a certain number of free queries, payment or specific conditions are required to continue), and version fees (higher versions of ChatGPT must be paid for), are important factors affecting medical students' willingness to use it. In addition, some respondents reported that ChatGPT may lag when processing a large number of requests; its response speed sometimes fails to meet the need for immediate help and guidance while writing code, which poses difficulties and challenges for medical students using ChatGPT to assist their programming. These inconveniences and challenges can diminish participants' willingness to adopt the technology, which is consistent with TAM theory.

With the rapid development of artificial intelligence, ethical issues surrounding the technology have become increasingly prominent. Some respondents expressed concerns about data privacy and security, academic integrity, data bias, and other issues that ChatGPT may bring, which is consistent with existing research findings [28, 31, 32]. In particular, some respondents noted that in medical research, medical students often program to analyze patients' personal information and medical data; such data are extremely sensitive and may be at risk of leakage when ChatGPT is used to assist in programming.

The analysis of technical factors shows that when medical students use ChatGPT in programming learning, they should not only leverage its technical advantages but also attend to problems such as technical barriers and technology ethics. How to effectively prevent the technical risks ChatGPT brings and strengthen technology ethics, so as to promote the integration of artificial intelligence and medical education, deserves full consideration from developers and relevant institutions. Furthermore, a prior study noted that, owing to economic conditions, network infrastructure, and other issues, medical students in economically underdeveloped areas may face difficulties in conveniently accessing and using ChatGPT [32]. This inequality limits their learning opportunities and may exacerbate the unequal distribution of educational resources. Consequently, when promoting AIGC tools in medical education, special attention should be paid to the usage barriers faced by students from different regions, ensuring fair access to educational resources.

Information factors

Information quality is considered a key factor in attracting users to ChatGPT [33]. One study revealed that the credibility of information affects users' trust in chatbots, which in turn affects their willingness to use them [34]. This finding aligns with our respondents, who value the quality of ChatGPT's replies. To avoid serious consequences caused by errors in academic research and medical practice, medical students place particular importance on the quality of the information ChatGPT provides during learning.

Some respondents mentioned that getting ChatGPT to write code requires detailed instructions, and that the resulting code is sometimes complex, involving constructs they have not yet learned in the course. Others pointed out that they could not obtain results in the desired form (such as images of data analysis results). Writing prompts thus places specific demands on medical students new to programming, and the generated results are often presented in a single form, which limits students' access to effective programming information and affects their willingness to use ChatGPT. This finding corresponds to the effort expectancy factor in UTAUT theory, that is, the level of effort respondents must invest when using ChatGPT-assisted programming.

Given ChatGPT's current generation mode and content, medical educators must consider how to help medical students effectively obtain, analyze, evaluate, and use the generated results, so as to better guide students' learning and development.

Environmental factors

Studies have shown that social media and other digital platforms increase users' exposure to and curiosity about artificial intelligence [32], and social media and peer influence have been recorded as factors motivating respondents to use ChatGPT [35, 36]. Strong promotion of ChatGPT's functions and advantages by traditional and new media has made it one of the most prominent artificial intelligence technologies currently available. At the same time, positive evaluations and recommendations from the people around them further enhanced medical students' recognition and acceptance of ChatGPT-assisted programming. To some extent, this social influence stimulates medical students' willingness to use ChatGPT to assist programming. The influence respondents perceive from their surrounding group thus affects their willingness to use ChatGPT, which is consistent with the social influence factor in UTAUT theory.

According to the respondents, medical students show a positive intention to use ChatGPT in task situations such as expanding programming ideas, reviewing for programming examinations, and correcting and revising programming errors. In these scenarios, ChatGPT can provide targeted learning resources and solutions to help medical students complete programming tasks more efficiently.

The information environment, including social media and online forums, not only shapes medical students' cognitive framework for new technologies but also influences their willingness to use them through communication mechanisms such as opinion leaders and peer groups. Therefore, fully utilizing information dissemination channels such as social media and digital platforms for promotion may increase medical students' awareness and acceptance of ChatGPT. In addition, when schools, administrative departments, and teachers actively encourage its use, such guidance may also enhance students' willingness to adopt the technology. However, misleading information in media publicity, or negative evaluations of ChatGPT from close social relationships, may lead to misunderstandings or biases about its use. Medical students should therefore maintain rational thinking and make decisions based on their own needs and actual situations.

Measures and suggestions

This paper uses the grounded theory method to analyze the antecedents and outcomes of medical students' willingness to use ChatGPT in programming learning, and puts forward measures and suggestions based on the analysis results.

Guard against the risks brought by ChatGPT and emphasize ethics education

Current worries about applying ChatGPT in medical education mainly concern the possibility of academic misconduct by medical students, leakage of private medical data, and biased medical data in its feedback. Against this background, some education departments and universities have explicitly prohibited the use of ChatGPT. However, ChatGPT has inevitably affected the development of medical education, and formulating a sound response strategy is the issue educators and relevant institutions should focus on [37]. On the one hand, schools can establish usage scenarios and application guidelines for AIGC tools such as ChatGPT, clarify ethical boundaries, prohibit complete reliance on the tools without proper citation, and establish disciplinary mechanisms for violations to ensure the appropriate use of artificial intelligence in education. At the technical level, integrity detection software can help identify results generated directly by ChatGPT. On the other hand, integrity and ethics education should be strengthened to help medical students understand the importance of academic integrity and the seriousness of plagiarism and cheating, and to develop the skills and habits needed to use ChatGPT responsibly and ethically in programming learning.

Strengthen the cultivation of critical thinking and establish a ChatGPT application case base

In medical education, cultivating medical students' higher-order thinking skills (especially critical, interdisciplinary, and creative thinking) is necessary for developing the innovative, well-rounded medical talent future society requires. In instructional design, teachers can set more open-ended questions (such as argumentation questions that require expressing viewpoints, or empirical problems that require data collection and analysis) while reducing overly closed question-and-answer items, enabling medical students to experience the entire process from needs analysis and system design to coding, rather than using ChatGPT to generate answers directly. In evaluation, teachers can assess students' understanding of program design through activities such as sharing practical ideas and group discussions, comprehensively evaluating their knowledge and ability. Through open-ended questions and comprehensive assessment, medical students can develop independent analytical skills and critical thinking, reducing the likelihood of relying solely on AIGC tools to complete assignments. In addition, teachers can collect and organize successful applications of ChatGPT in programming education (its practical effects in assisting programming learning, cultivating innovative thinking, and improving learning efficiency) to provide references for teaching programming to medical students and to help teachers better understand and use ChatGPT for innovation in medical education.

Provide personalized learning guidance to improve core programming literacy

Driven by educational concepts that emphasize cultivating higher-order thinking, the rich information, knowledge, and resources in ChatGPT's corpus can promote innovation in teaching methods and content, helping medical students gradually develop higher-order thinking and autonomous learning ability. Teachers should actively guide medical students to identify the problems they encounter in programming learning and, through a "tool-aided, problem-driven" teaching method, make full use of ChatGPT's interactivity and intelligence, encouraging students to seek learning support from it, including debugging assistance, explanation of complex concepts, and customized solutions to programming tasks. Such personalized training can provide flexible and efficient learning experiences tailored to students' varying foundations. In addition, because ChatGPT has good multilingual code generation and debugging capabilities, basic programming teaching should focus more on cultivating medical students' computational thinking, artificial intelligence literacy, and interdisciplinary literacy, and reduce rote memorization of syntactic details. For instance, the large number of clinical and research cases ChatGPT can generate may help medical students gain a deeper understanding of the significance of programming for medical practice. With the professional advice and code examples ChatGPT provides, language barriers can also be reduced, enabling medical students to focus on leveraging information technology for technical practice and innovative research in the medical field.
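As a hypothetical illustration (not drawn from the study's interview data) of the kind of small debugging task where such tool-aided, problem-driven guidance applies, consider a beginner's script for summarizing patient blood-pressure readings. A common novice mistake is removing items from a list while iterating over it, which silently skips elements; the corrected pattern below, filtering into a new list, is the sort of fix ChatGPT-style debugging help typically suggests. The function name and data values are invented for the example.

```python
def mean_valid_systolic(readings):
    """Return the mean of systolic readings, ignoring sentinel values (<= 0).

    A beginner bug here is calling readings.remove(r) inside a loop over
    readings, which skips elements; building a filtered list avoids that.
    """
    valid = [r for r in readings if r > 0]   # keep only plausible readings
    if not valid:
        raise ValueError("no valid readings")
    return sum(valid) / len(valid)

# -1 and 0 mark missing or invalid measurements in this toy data set
readings = [120, -1, 135, 0, 118, 127]
print(mean_valid_systolic(readings))         # 125.0
```

The pedagogical point is the one made above: the student's effort goes into framing the problem (what counts as a valid reading, what statistic is needed) rather than memorizing syntax, with the tool supplying the idiomatic construct.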

Limitations

This study has certain limitations. On the one hand, the sample is relatively small and includes only first-year undergraduates taking the course, which may yield a biased sample that does not represent a wider range of regions and populations. Future research could expand the sample and include students from different grades and medical schools across regions to improve the accuracy of the findings. On the other hand, this study relied solely on interviews and focused exclusively on ChatGPT, without considering other AIGC tools, which may limit the comprehensiveness of the results. Subsequent studies could incorporate questionnaire data, combining quantitative and qualitative methods, and introduce other AIGC tools for comparative analysis.

Conclusion

This study constructs a theoretical framework of the factors influencing medical students' willingness to use ChatGPT in programming learning through grounded theory qualitative analysis. It deepens understanding of medical students' experience and cognition of ChatGPT and addresses a gap in the existing literature on their views of using ChatGPT in programming learning. The results show that students' willingness to use ChatGPT in programming learning falls into three types according to the subjective degree of use: active use, neutral use, and negative use. In addition, individual factors, technical factors, information factors, and environmental factors are four important dimensions affecting usage willingness. Based on this analysis, measures and suggestions are put forward, such as preventing risks and emphasizing ethics education, cultivating critical thinking and establishing a case base, and personalized teaching to improve core programming literacy, providing a reference for effectively integrating AIGC tools such as ChatGPT into medical education.

In future research, the theoretical framework proposed in this study, concerning medical students’ willingness to use ChatGPT in programming learning, can be further validated for its universality and applicability through quantitative studies. In addition, targeted case studies or experiments can be designed to further refine and validate key concepts and hypotheses in the theory, thereby improving the theoretical framework.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1 (11.1KB, xlsx)
Supplementary Material 2 (15.7KB, docx)

Acknowledgements

The authors would like to thank the students who participated in the interview.

Author contributions

Study conception and design: H.Q. and C.W. Data collection, manuscript writing and preparation: C.W. Data analysis: all authors. All authors have read and approved the final manuscript.

Funding

This work was supported by Peking University's AI-Enhanced Curriculum Development (2024AI28) and the Research Project on Computer Basic Education Teaching by the National Association for Computer Basic Education in Higher Education Institutions (2024-AFCEC-008).

Data availability

The data sets generated during or analyzed during this study are available from the corresponding author upon reasonable request.

Declarations

Ethical approval and consent to participate

The participants were medical students enrolled in a university computer course. Interviews were conducted face-to-face, and informed consent was obtained. Before each interview, the contents of the informed consent form and the interview outline were read to the participants, and all participants gave signed informed consent. The Institutional Review Board of Peking University approved the informed consent and the full research proposal (IRB00001052-24025).

Consent for publication

Not applicable. There are no details on individuals reported within the manuscript.

Conflict of interest

The authors declare that they have no competing interests.

Footnotes

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

  • 1.Mantas J, Ammenwerth E, Demiris G, Hasman A, Haux R, Hersh W, Hovenga E, Lun KC, Marin H, Martin-Sanchez F, Wright G, IMIA Recommendations on Education Task Force. Recommendations of the International Medical Informatics Association (IMIA) on education in biomedical and health informatics: first revision. Methods Inf Med. 2010;49(2):105–20. [DOI] [PubMed] [Google Scholar]
  • 2.Zatz MM. Bioinformatics training in the USA. Brief Bioinform. 2002;3(4):353–60. [DOI] [PubMed] [Google Scholar]
  • 3.Garg AX, Adhikari NK, McDonald H, Rosas-Arellano MP, Devereaux PJ, Beyene J, Sam J, Haynes RB. Effects of computerized clinical decision support systems on practitioner performance and patient outcomes: a systematic review. JAMA. 2005;293(10):1223–38. [DOI] [PubMed]
  • 4.Chaudhry B, Wang J, Wu S, Maglione M, Mojica W, Roth E, Morton SC, Shekelle PG. Systematic review: impact of health information technology on quality, efficiency, and costs of medical care. Ann Intern Med. 2006;144(10):742–52. [DOI] [PubMed] [Google Scholar]
  • 5.Hersh WR, Hickam DH, Severance SM, Dana TL, Pyle Krages K, Helfand M. Diagnosis, access and outcomes: update of a systematic review of telemedicine services. J Telemed Telecare. 2006;12 Suppl 2:S3–31. [DOI] [PubMed] [Google Scholar]
  • 6.Meo SA, Al-Masri AA, Alotaibi M, Meo MZS, Meo MOS. ChatGPT Knowledge evaluation in Basic and Clinical Medical sciences: multiple choice question examination-based performance. Healthc (Basel). 2023;11(14):2046. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 7.Surapaneni KM, Rajajagadeesan A, Goudhaman L, Lakshmanan S, Sundaramoorthi S, Ravi D, Rajendiran K, Swaminathan P. Evaluating ChatGPT as a self-learning tool in medical biochemistry: a performance assessment in undergraduate medical university examination. Biochem Mol Biol Educ. 2023. Epub ahead of print. [DOI] [PubMed]
  • 8.Baglivo F, De Angelis L, Casigliani V, Arzilli G, Privitera GP, Rizzo C. Exploring the possible use of AI chatbots in Public Health Education: feasibility study. JMIR Med Educ. 2023;9:e51421. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 9.Al Qurashi AA, Albalawi IAS, Halawani IR, Asaad AH, Al Dwehji AMO, Almusa HA, Alharbi RI, Alobaidi HA, Alarki SMKZ, Aljindan FK. Can a machine Ace the Test? Assessing GPT-4.0’s Precision in plastic surgery Board examinations. Plast Reconstr Surg Global Open. 2023;11(12):e5448. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10.Dhanvijay AKD, Pinjar MJ, Dhokane N, Sorte SR, Kumari A, Mondal H. Performance of large Language models (ChatGPT, Bing Search, and Google Bard) in solving Case vignettes in Physiology. Cureus. 2023;15(8):e42972. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 11.Totlis T, Natsis K, Filos D, Ediaroglou V, Mantzou N, Duparc F, Piagkou M. The potential role of ChatGPT and artificial intelligence in anatomy education: a conversation with ChatGPT. Surg Radiol Anat. 2023;45(10):1321–9. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12.Gupta R, Park JB, Herzog I, Yosufi N, Mangan A, Firouzbakht PK, Mailey BA. Applying GPT-4 to the plastic surgery inservice training examination. J Plast Reconstr Aesthetic Surg. 2023;87:78–82. [DOI] [PubMed] [Google Scholar]
  • 13.Tsang R. Practical applications of ChatGPT in Undergraduate Medical Education. J Med Educ Curric Dev. 2023;10:23821205231178449. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 14.Wang X, Sanders HM, Liu Y, Seang K, Tran BX, Atanasov AG, Qiu Y, Tang S, Car J, Wang YX, Wong TY, Tham YC, Chung KC. ChatGPT: promise and challenges for deployment in low- and middle-income countries. Lancet Reg Health Western Pac. 2023;41:100905. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 15.Mondal H, Marndi G, Behera JK, Mondal S. ChatGPT for teachers: practical examples for utilizing artificial intelligence for educational purposes. Indian J Vasc Endovasc Surg. 2023;10(3):200–5.
  • 16.Riedel M, Kaefinger K, Stuehrenberg A, Ritter V, Amann N, Graf A, Recker F, Klein E, Kiechle M, Riedel F, Meyer B. ChatGPT’s performance in German OB/GYN exams - paving the way for AI-enhanced medical education and clinical practice. Front Med (Lausanne). 2023;10:1296615. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 17.Abd-Alrazaq A, AlSaad R, Alhuwail D, Ahmed A, Healy PM, Latifi S, Aziz S, Damseh R, Alabed Alrazak S, Sheikh J. Large Language models in Medical Education: opportunities, challenges, and future directions. JMIR Med Educ. 2023;9:e48291. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 18.Davis FD. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. 1989;13(3):319–40. [Google Scholar]
  • 19.Cambra-Fierro JJ, Blasco MF, López-Pérez MEE, et al. ChatGPT adoption and its influence on faculty well-being: an empirical research in higher education. Educ Inf Technol; 2024.
  • 20.Khan MR, Siddiqui F, Khan MA, Rasool Y. Student’s online learning experience during Covid-19: Integrating behavioral and technological factors. Global Conference on Education & Research (GLOCER 2021), University of South Florida, USA.
  • 21.Ali O, Murray PA, Momin M, Al-Anzi FS. The knowledge and innovation challenges of ChatGPT: a scoping review. Technol Soc. 2023;75:102402.
  • 22.Venkatesh V, Morris MG, Davis GB, Davis FD. User Acceptance of Information Technology: toward a unified view. MIS Q. 2003;27(3):425–78. [Google Scholar]
  • 23.Menon D, Shilpa K. Chatting with ChatGPT: analyzing the factors influencing users' intention to use the Open AI's ChatGPT using the UTAUT model. Heliyon. 2023;9(11):e20962. [DOI] [PMC free article] [PubMed]
  • 24.Glaser B, Strauss AL. The discovery of grounded theory: strategy for qualitative research. Nurs Res. 1968;17(4):377–80. [Google Scholar]
  • 25.Glaser BG, Strauss AL. The Discovery of grounded theory: strategies for qualitative research. Aldine Publishing Company; 1967.
  • 26.Corbin JM, Strauss A. Grounded theory research: procedures, canons, and evaluative criteria. Qual Sociol. 1990;13(1):3–21. [Google Scholar]
  • 27.Temsah MH, Aljamaan F, Malki KH, Alhasan K, Altamimi I, Aljarbou R, Bazuhair F, Alsubaihin A, Abdulmajeed N, Alshahrani FS, Temsah R, Alshahrani T, Al-Eyadhy L, Alkhateeb SM, Saddik B, Halwani R, Jamal A, Al-Tawfiq JA, Al-Eyadhy A. ChatGPT and the Future of Digital Health: a study on Healthcare workers’ perceptions and expectations. Healthc (Basel). 2023;11(13):1812. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 28.Abu Hammour K, Alhamad H, Al-Ashwal FY, Halboup A, Abu Farha R, Abu Hammour A. ChatGPT in pharmacy practice: a cross-sectional exploration of Jordanian pharmacists’ perception, practice, and concerns. J Pharm Policy Pract. 2023;16(1):115. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 29.Sallam M, Salim NA, Barakat M, Al-Mahzoum K, Al-Tammemi AB, Malaeb D, Hallit R, Hallit S. Assessing Health students’ attitudes and usage of ChatGPT in Jordan: Validation Study. JMIR Med Educ. 2023;9:e48254. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 30.Choudhury A, Shamszare H. Investigating the impact of user trust on the adoption and use of ChatGPT: Survey Analysis. J Med Internet Res. 2023;25:e47184. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 31.Mosleh R, Jarrar Q, Jarrar Y, Tazkarji M, Hawash M. Medicine and Pharmacy Students’ knowledge, attitudes, and practice regarding Artificial Intelligence Programs: Jordan and West Bank of Palestine. Adv Med Educ Pract. 2023;14:1391–400. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 32.Oluwadiya KS, Adeoti AO, Agodirin SO, Nottidge TE, Usman MI, Gali MB, Onyemaechi NO, Ramat AM, Adedire A, Zakari LY. Exploring artificial intelligence in the Nigerian medical educational space: an online cross-sectional study of perceptions, risks and benefits among students and lecturers from ten universities. Niger Postgrad Med J. 2023;30(4):285–92. [DOI] [PubMed]
  • 33.Menon D, Shilpa K. Chatting with ChatGPT: analyzing the factors influencing users’ intention to use the Open AI’s ChatGPT using the UTAUT model. Heliyon. 2023;9(11):e20962. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 34.Yen C, Chiang MC. Trust me, if you can: a study on the factors that influence consumers' purchase intention triggered by chatbots based on brain image evidence and self-reported assessments. Behav Inf Technol. 2020;24(11):1177–94. [Google Scholar]
  • 35.Mogaji E, Balakrishnan J, Nwoba AC, Nguyen NP. Emerging-market consumers’ interactions with banking chatbots. Telematics Inform. 2021;65:101711. [Google Scholar]
  • 36.Terblanche N, Kidd M. Adoption factors and moderating effects of age and gender that influence the intention to use a non-directive reflective coaching chatbot. Sage Open. 2022;12(2):21582440221096136. [Google Scholar]
  • 37.Sun GH, Hoelscher SH. The ChatGPT storm and what faculty can do. Nurse Educ. 2023;48(3):119–24. [DOI] [PubMed] [Google Scholar]
