Abstract
Background
Artificial intelligence (AI) tools show great potential in the creation of health education materials, yet the factors influencing their adoption and user experience remain underexplored.
Objective
This study aims to investigate the factors associated with medical students’ experience in using AI tools to create health education materials on the basis of an extended unified theory of acceptance and use of technology (UTAUT) model that incorporates content-related perceptions.
Methods
A cross-sectional survey was conducted among students at a medical university in Chongqing, China, from October 17 to 30, 2024. A total of 691 valid responses were analysed. The extended UTAUT model includes performance expectancy, effort expectancy, social influence, facilitating conditions, and four content perception variables: perceived scientificity, understandability, creativity, and misinformation of AI-generated content. Hierarchical logistic regression analysis was conducted, and predictors were entered into three blocks: (1) demographics, (2) core UTAUT constructs, and (3) extended content perceptions. Content analysis was used to explore thematic differences.
Results
Among the 691 participants, 314 (45.4%) had experience using AI tools to create health education materials. Hierarchical regression revealed that clinical medicine majors had more than double the odds of such experience (OR = 2.096, P < 0.001), as did users of paid AI tools (OR = 2.789, P < 0.001) in Model 1. The core UTAUT constructs significantly improved explanatory power, with social influence (OR = 1.268, P = 0.001) and facilitating conditions (OR = 1.561, P < 0.001) as key drivers in Model 2. In contrast, perceptions of generated content quality did not significantly predict usage experience, whereas a lower educational level was significantly associated with higher odds of AI tool use (OR = 0.732, P = 0.03) in Model 3. Content analysis showed that experienced users emphasized content verification and rational use, whereas nonusers expressed more caution and a stronger need for training. Both groups agreed that AI should serve as an assisting tool in creating health education materials.
Conclusion
In this cohort, social influence and facilitating conditions may be more strongly associated with usage experience than perceptions of content quality are. Enhancing facilitating conditions, social support, and targeted training may promote more effective use of AI in health education.
Supplementary Information
The online version contains supplementary material available at 10.1186/s12909-025-08499-4.
Keywords: Artificial intelligence, Health education, Material creation, UTAUT, Influencing factors, Cross-sectional study
Introduction
With the rapid development of artificial intelligence (AI) technology, its applications have become increasingly widespread across various fields [1–3], particularly demonstrating immense potential in health education and medical information dissemination [4, 5]. AI tools can not only efficiently generate diverse health education materials—including text, images, audio, and video—but also customize them according to the characteristics of the target audience [6–8]. For example, AI chatbots, particularly ChatGPT, are increasingly employed as information sources [9]. Studies have shown that AI-generated health education materials can achieve acceptable accuracy, readability, and understandability, which are crucial for effective patient education [10–12].
However, the adoption and practical application of AI tools in health education remain inconsistent. Variations in users’ perceptions, skills, AI literacy [13], and contextual factors [14] may influence whether and how these tools are utilized [15]. As future medical professionals, medical students’ acceptance, attitudes, and usage experience will directly shape how AI tools are applied going forward [16, 17]. Therefore, understanding medical students’ experiences with, and the factors influencing, the use of AI tools for creating health education materials is crucial for optimizing these tools and enhancing the quality of health education.
The unified theory of acceptance and use of technology (UTAUT) provides a robust theoretical framework for investigating technology adoption behaviors [18]. It postulates that performance expectancy (PE), effort expectancy (EE), social influence (SI), and facilitating conditions (FC) are key factors influencing individuals’ intentions to utilize technology [19]. Nevertheless, the specific mechanisms of action and relative importance of these factors in diverse application scenarios require further exploration [20].
However, in the specific context of creating health education materials, prevailing technology acceptance models such as UTAUT present inherent limitations. They effectively assess perceptions of the tool itself but largely overlook user evaluations of the quality of the generated output. AI-generated health materials need to be scientifically accurate, understandable, creative, and free from misinformation [21, 22]. Therefore, to address this gap, we extend the UTAUT model by incorporating content-related perceptions, such as perceived scientificity [22], understandability, creativity [23], and the potential for misinformation.
Despite the clear relevance of content perceptions, few empirical studies have applied the UTAUT model to explore the use of AI tools for creating health education materials, especially among medical students. This study aims to investigate the factors associated with medical students’ experience of using AI tools to create health education materials on the basis of the extended UTAUT framework.
Methods
Study design
This cross-sectional survey was conducted among university students at a medical university in Chongqing, China, from October 17 to October 30, 2024. We used convenience sampling to recruit participants. To ensure a diverse sample, we treated academic level (undergraduate, master’s, and doctoral) and major (clinical medicine versus nonclinical medicine) as key stratification factors. The inclusion criteria were currently enrolled students who voluntarily participated and signed the electronic informed consent form. The exclusion criteria were having already graduated from the institution, incomplete questionnaires, and questionnaires with evident information errors. The Questionnaire Star survey link was distributed to students by their counsellors; because individual invitations could not be traced, an exact response rate could not be calculated. Before the link was distributed, students were informed about the purpose of the study, and those who agreed to participate signed the informed consent form on the Questionnaire Star platform. All collected data were kept strictly confidential and anonymous.
Extended UTAUT framework
The unified theory of acceptance and use of technology, proposed by Venkatesh et al. [24], provides a solid theoretical basis for understanding technology adoption behaviors [23, 25]. It postulates that four core constructs—PE, EE, SI, and FC—are key determinants of individuals’ intentions to use technology, which in turn affect actual usage behavior. PE refers to the degree to which individuals believe that using a technology will improve their job performance or task efficiency; EE represents the ease of using a technology; SI captures the impact of significant others on an individual’s decision to use a technology; and FC refers to the availability of resources and support systems that facilitate its use. Additionally, we incorporated several content-related perception variables to capture nuanced aspects of AI-generated content for creating health education materials: perceived scientificity of content generated (PSCG), perceived understandability of content generated (PUCG), perceived creativity of content generated (PCCG), and perceived misinformation of content generated (PMCG) [5, 17, 22]. By integrating these extended content perception variables into the UTAUT framework, our study provides a comprehensive understanding of the factors influencing medical students’ adoption and usage experience of AI tools for creating health education materials.
Hypotheses
On the basis of the extended UTAUT model presented in Fig. 1, we formulate 8 hypotheses regarding the factors associated with medical students’ experience of using AI tools to create health education materials. The hypotheses are listed below:
H1 PE is positively associated with experience of using AI tools to create health education materials.
H2 EE is positively associated with experience of using AI tools to create health education materials.
H3 SI is positively associated with experience of using AI tools to create health education materials.
H4 FC is positively associated with experience of using AI tools to create health education materials.
H5 PSCG is positively associated with experience of using AI tools to create health education materials.
H6 PUCG is positively associated with experience of using AI tools to create health education materials.
H7 PCCG is positively associated with experience of using AI tools to create health education materials.
H8 PMCG is negatively associated with experience of using AI tools to create health education materials.
Fig. 1.

The hypothesized extended UTAUT framework for AI tool experience in health education material creation. It illustrates the hypothesized relationships between the core UTAUT constructs (PE, EE, SI, FC), the extended constructs (PSCG, PUCG, PCCG, PMCG), and the experience of using AI tools for creating health education materials. Solid lines represent positive hypotheses, while the dashed line indicates a negative hypothesis. PE: performance expectancy; EE: effort expectancy; SI: social influence; FC: facilitating conditions; PSCG: perceived scientificity of content generated; PUCG: perceived understandability of content generated; PCCG: perceived creativity of content generated; PMCG: perceived misinformation of content generated
Questionnaire
The questionnaire was developed through a structured process to ensure content validity. Initial items for the UTAUT constructs (PE, EE, SI, and FC) and the four content perception variables (PSCG, PUCG, PCCG, and PMCG) were generated on the basis of a comprehensive review of the relevant literature [14, 18, 20, 26]. The draft questionnaire then underwent iterative revisions through in-depth, face-to-face discussions among the research team’s experts to confirm its relevance and clarity for the target context, and a pilot test with a small group of medical students (n = 10) was conducted to refine the wording and assess practicality. The final questionnaire consists of three parts: basic demographic characteristics, the extended UTAUT-related scales, and an open-ended question. The first part gathered students’ general information, including gender, age range, educational level, major, use of paid AI tools, and participation in AI tool training. In the second part, PE, EE, SI, and FC were measured via eight items (two per construct) developed by the research team, and four items measured the extended variables PSCG, PUCG, PCCG, and PMCG. The internal consistency (Cronbach’s alpha) of the scales was as follows: overall scale = 0.71; PE = 0.531; EE = 0.778; SI = 0.996; FC = 0.649; and content perception construct = 0.713. Participants rated each of the 12 items on a 5-point Likert scale (1 = strongly disagree to 5 = strongly agree), unless stated otherwise. Scores for multi-item constructs were calculated by summing the item scores. The full list of survey items for each construct is provided in Supplementary Material 1. The third part posed an open-ended question about medical students’ perspectives on using AI tools to develop health education materials.
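Internal-consistency values like those above can be computed directly from item-level responses. The sketch below uses hypothetical two-item Likert data (the study’s raw responses are not reproduced here) and the standard Cronbach’s alpha formula:

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha for a respondents-by-items score matrix."""
    k = len(items[0])                                    # number of items
    item_vars = [variance(col) for col in zip(*items)]   # per-item variance
    total_var = variance([sum(row) for row in items])    # variance of summed scores
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Hypothetical 5-point Likert responses for a two-item construct
scores = [[4, 5], [3, 4], [5, 5], [2, 3], [4, 4], [3, 3]]
print(round(cronbach_alpha(scores), 3))  # 0.914
```

Alpha near 1 (as reported for SI) indicates near-identical item responses, whereas values below ~0.6 (as for PE) suggest the two items may tap different facets of the construct.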
Statistical analysis
The data were analysed via SPSS version 22.0 (IBM Corp). To evaluate the differences among groups, the independent samples t test or chi-square test was utilized. We conducted a hierarchical logistic regression analysis. The variables were entered into three blocks: (1) demographics; (2) core UTAUT constructs (PE, EE, SI, and FC); and (3) extended constructs (PSCG, PUCG, PCCG, and PMCG). P < 0.05 was considered statistically significant. A qualitative content analysis was conducted to compare student attitudes, concerns, and suggestions. Chinese text responses were preprocessed to prepare for analysis. High-frequency keywords were identified and normalized by merging similar terms via domain knowledge. Two researchers independently performed thematic categorization, resolving differences through discussion. Microsoft Excel was used for data management and keyword frequency calculations.
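The blocked entry strategy described above can be illustrated with a small simulation. This is a sketch on hypothetical simulated data (not the study’s dataset), using a plain gradient-ascent logistic fit in place of SPSS: adding a block of predictors that carries signal should raise the model log-likelihood, mirroring the Model 1 → Model 2 comparison, and exponentiating a coefficient yields its odds ratio.

```python
import math
import random

def predict(w, xi):
    """Predicted probability under a logistic model with intercept w[0]."""
    z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
    return 1 / (1 + math.exp(-z))

def fit_logit(X, y, lr=0.05, epochs=3000):
    """Gradient-ascent logistic regression; returns (weights, log-likelihood)."""
    n, p = len(X), len(X[0])
    w = [0.0] * (p + 1)
    for _ in range(epochs):
        grad = [0.0] * (p + 1)
        for xi, yi in zip(X, y):
            err = yi - predict(w, xi)        # score contribution of one respondent
            grad[0] += err
            for j, xj in enumerate(xi):
                grad[j + 1] += err * xj
        w = [wj + lr * g / n for wj, g in zip(w, grad)]
    ll = sum(yi * math.log(predict(w, xi)) + (1 - yi) * math.log(1 - predict(w, xi))
             for xi, yi in zip(X, y))
    return w, ll

# Hypothetical data: block 1 = major (0/1); block 2 adds a centred SI score
random.seed(42)
major = [random.randint(0, 1) for _ in range(200)]
si = [random.gauss(0, 1.5) for _ in range(200)]
y = [1 if random.random() < 1 / (1 + math.exp(-(-0.5 + 0.7 * m + 0.8 * s))) else 0
     for m, s in zip(major, si)]

_, ll1 = fit_logit([[m] for m in major], y)                   # Model 1: demographics only
w2, ll2 = fit_logit([[m, s] for m, s in zip(major, si)], y)   # Model 2: + UTAUT block

print(ll2 > ll1)                  # True: the added block improves the log-likelihood
print(round(math.exp(w2[2]), 2))  # odds ratio for the SI score (exp of its coefficient)
```

In practice the block comparison would be formalized with a likelihood-ratio or Wald test, as SPSS reports for each step.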
Results
Participant characteristics
A total of 770 questionnaires were collected; 79 submissions were excluded as invalid because of abnormally short completion times, leaving 691 valid responses. The general characteristics of the study sample are presented in Table 1. Among the 691 respondents, 314 (45.4%) had used AI tools to create health education materials, whereas 377 (54.6%) had not. The chi-square test revealed significant differences between the two groups in age range (χ² = 9.907, P = 0.01), educational level (χ² = 26.885, P < 0.001), and major (χ² = 30.736, P < 0.001). There were also significant differences in whether participants had used paid AI tools (χ² = 36.748, P < 0.001) and whether they had attended training sessions on AI tools (χ² = 14.122, P < 0.001).
Table 1.
Comparison of general characteristics according to experience of AI tools for creating health education materials (N = 691)
| Variables | All participants (N = 691) | Yes (n = 314) | No (n = 377) | χ² | P value |
|---|---|---|---|---|---|
| Gender, n (%) | | | | | |
| Male | 389 (56.30) | 174 (55.41) | 215 (57.03) | 0.182 | 0.67 |
| Female | 302 (43.70) | 140 (44.59) | 162 (42.97) | | |
| Age range (years), n (%) | | | | | |
| 18–25 | 453 (65.56) | 225 (71.66) | 228 (60.48) | 9.907 | 0.01 |
| 26–30 | 135 (19.54) | 48 (15.29) | 87 (23.08) | | |
| ≥ 31 | 103 (14.90) | 41 (13.05) | 62 (16.44) | | |
| Education level, n (%) | | | | | |
| Undergraduate | 331 (47.90) | 182 (57.96) | 149 (39.52) | 26.885 | < 0.001 |
| Master’s students | 166 (24.02) | 52 (16.56) | 114 (30.24) | | |
| Doctoral students | 194 (28.08) | 80 (25.48) | 114 (30.24) | | |
| Major, n (%) | | | | | |
| Clinical medicine | 376 (54.41) | 207 (65.92) | 169 (44.83) | 30.736 | < 0.001 |
| Nonclinical medicine | 315 (45.59) | 107 (34.08) | 208 (55.17) | | |
| Use paid AI tools, n (%) | | | | | |
| Yes | 154 (22.29) | 103 (32.80) | 51 (13.53) | 36.748 | < 0.001 |
| No | 537 (77.71) | 211 (67.20) | 326 (86.47) | | |
| AI tools training, n (%) | | | | | |
| Yes | 67 (9.70) | 45 (14.33) | 22 (5.84) | 14.122 | < 0.001 |
| No | 624 (90.30) | 269 (85.67) | 355 (94.16) | | |
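As a quick arithmetic check, the chi-square statistics in Table 1 can be reproduced from the cell counts with the standard 2 × 2 formula (a sketch without continuity correction, shown here for the paid-AI-tools comparison):

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square for a 2x2 table [[a, b], [c, d]], no continuity correction."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# "Use paid AI tools" cell counts from Table 1:
# rows = experienced / not experienced; cols = paid yes / paid no
chi2 = chi_square_2x2(103, 211, 51, 326)
print(round(chi2, 3))  # 36.748, matching the reported value
```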
Descriptive statistics of the study variables
Table 2 presents the descriptive statistics of the study variables according to experience of using AI tools to create health education materials. Among the UTAUT-related variables, participants with such experience reported higher performance expectancy (P < 0.001), social influence (P < 0.001), and facilitating conditions (P < 0.001) than those without; no significant difference was observed for effort expectancy (P = 0.11). The experienced group also scored higher on perceived scientificity, understandability, creativity, and misinformation of AI-generated content (P < 0.001, < 0.001, < 0.001, and 0.02, respectively).
Table 2.
Scores on UTAUT and content perception constructs, stratified by experience with AI tools for the creation of health education materials (N = 691)
| Variables | All participants (N = 691) | Yes (n = 314) | No (n = 377) | t value | P value |
|---|---|---|---|---|---|
| Performance expectancy (score range: 2–10) | 8.13 (1.13) | 8.43 (1.00) | 7.88 (1.17) | 6.630 | < 0.001 |
| Effort expectancy (score range: 2–10) | 6.82 (1.14) | 6.90 (1.12) | 6.76 (1.15) | 1.610 | 0.11 |
| Social influence (score range: 2–10) | 7.23 (1.48) | 7.69 (1.41) | 6.85 (1.44) | 7.690 | < 0.001 |
| Facilitating conditions (score range: 2–10) | 6.64 (0.81) | 6.88 (0.84) | 6.44 (0.73) | 7.500 | < 0.001 |
| PSCGa (score range: 1–5) | 3.75 (0.78) | 3.92 (0.74) | 3.62 (0.79) | 5.115 | < 0.001 |
| PUCGb (score range: 1–5) | 3.85 (0.75) | 4.00 (0.70) | 3.72 (0.78) | 4.911 | < 0.001 |
| PCCGc (score range: 1–5) | 3.70 (0.82) | 3.85 (0.79) | 3.58 (0.83) | 4.288 | < 0.001 |
| PMCGd (score range: 1–5) | 3.79 (0.81) | 3.87 (0.68) | 3.72 (0.89) | 2.416 | 0.02 |
aPSCG: perceived scientificity of content generated
bPUCG: perceived understandability of content generated
cPCCG: perceived creativity of content generated
dPMCG: perceived misinformation of content generated
Factors associated with the experience of AI tools for creating health education materials
Hierarchical logistic regression revealed that clinical medicine major (OR = 2.096, P < 0.001) and the use of paid AI tools (OR = 2.789, P < 0.001) were strong predictors in Model 1 (Table 3). These odds ratios indicate that the odds of having AI tool experience were 109.6% greater for clinical medicine students than for nonclinical students. Similarly, the odds for users of paid AI tools were 178.9% higher than those for nonpaying users. The addition of core UTAUT constructs in Model 2 significantly improved the model fit. In this model, social influence (OR = 1.268, P = 0.001) and facilitating conditions (OR = 1.561, P < 0.001) were significant positive predictors. While performance expectancy was marginally significant in Model 2 (OR = 1.202, P = 0.04), it did not retain significance in Model 3. Additionally, a lower educational level was significantly associated with higher odds of AI tool use in both Model 2 (OR = 0.742, P = 0.03) and Model 3 (OR = 0.732, P = 0.03), indicating that undergraduate students were more likely to have used AI tools than graduate students. Notably, including content perception variables (PSCG, PUCG, PCCG, and PMCG) in Model 3 did not improve the model. None of these content variables were significant, and earlier predictors remained stable.
Table 3.
Hierarchical logistic regression analysis predicting experience with AI tools for creating health education materials (N = 691)
| Variables | Model 1: Demographics | Model 2: + UTAUT | Model 3: + Content Perception |
|---|---|---|---|
| | ORa (95% CI), P value | OR (95% CI), P value | OR (95% CI), P value |
| Age range | 0.987 (0.740–1.316), 0.93 | 1.040 (0.769–1.405), 0.80 | 1.054 (0.779–1.427), 0.73 |
| Education level | 0.816 (0.629–1.057), 0.12 | 0.742 (0.564–0.976), 0.03 | 0.732 (0.555–0.965), 0.03 |
| Major | 2.096 (1.490–2.947), < 0.001 | 2.032 (1.422–2.903), < 0.001 | 1.993 (1.391–2.855), < 0.001 |
| Use paid AI tools | 2.789 (1.878–4.147), < 0.001 | 1.916 (1.259–2.915), < 0.001 | 1.918 (1.259–2.921), 0.01 |
| AI tools training | 2.289 (1.294–4.050), 0.04 | 1.551 (0.850–2.830), 0.15 | 1.569 (0.859–2.866), 0.14 |
| Performance expectancy | | 1.202 (1.005–1.438), 0.04 | 1.186 (0.961–1.463), 0.11 |
| Effort expectancy | | 1.121 (0.963–1.306), 0.14 | 1.128 (0.965–1.318), 0.13 |
| Social influence | | 1.268 (1.104–1.457), 0.001 | 1.262 (1.089–1.464), 0.01 |
| Facilitating conditions | | 1.561 (1.241–1.963), < 0.001 | 1.559 (1.239–1.961), < 0.001 |
| PSCGb | | | 1.090 (0.783–1.517), 0.61 |
| PUCGc | | | 0.994 (0.691–1.428), 0.97 |
| PCCGd | | | 0.942 (0.704–1.262), 0.69 |
| PMCGe | | | 1.101 (0.872–1.389), 0.42 |
aOR: odds ratio
bPSCG: perceived scientificity of content generated
cPUCG: perceived understandability of content generated
dPCCG: perceived creativity of content generated
ePMCG: perceived misinformation of content generated
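The percentage interpretations given in the text (e.g., 109.6% greater odds for clinical medicine students) follow directly from the odds ratios in Table 3; a small sketch of the conversion:

```python
def pct_change_in_odds(odds_ratio):
    """Percentage change in the odds implied by an odds ratio."""
    return (odds_ratio - 1) * 100

print(round(pct_change_in_odds(2.096), 1))  # 109.6 -> 109.6% higher odds (clinical major)
print(round(pct_change_in_odds(2.789), 1))  # 178.9 -> 178.9% higher odds (paid AI tools)
print(round(pct_change_in_odds(0.732), 1))  # -26.8 -> lower odds per step up in education level
```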
Comparison of keywords and their frequencies regarding views on the use of AI tools during the creation of health education materials
A total of 187 responses to the open-ended question were obtained: 85 from students who had not used AI tools to assist in creating health education materials and 102 from students who had. Table 4 presents the keywords and their frequencies regarding views on the use of AI tools during the creation of health education materials in the two groups. After synonyms were merged and words without substantial meaning were removed, seven keyword categories were identified.
Table 4.
Comparison of keywords and their frequencies regarding views on the use of AI tools during health education material creation (N = 187)
| Keyword Category | AI creation experience (n = 102) | No experience in AI creation (n = 85) |
|---|---|---|
| Accuracy/Authenticity | Accurate(8), Authentic/Real(4), Verify(3), Error(3), Distortion(2) | Authenticity(4), Correctness(3), Rigor(2), Verify(2), Distortion(2), Error(2) |
| Dependency/Attitude | Assistant/Tool(7), Do not depend(6), Moderate(5), Reasonable(4), Own thinking(3) | Dependence(4), Moderate use(3), Reasonable use(3), Cautious(2), Own judgment(2) |
| Training/Education | Learn(4), Courses(3), Training(2) | Training(6), Courses(5), Learn(3), Popularize(2) |
| Professionalism | Professional(5), Scientific(3), Clinical(2) | Professional(4), Scientific(3), Review/Audit(2) |
| Technical Improvement | Enhance(6), Strengthen(4), Intelligent(3), Model(3), Database(2) | Improve(3), Enhance(2), Lagging/Outdated(2) |
| Negative Attitude | Not applicable | Ban(1), Pollution(1), Immature(1) |
| Supportive Attitude | Support(4), Good(4), Convenient(2), Efficient(2) | Support(2), Good(2), Convenient(1) |
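The keyword-merging step described in the Methods (identifying high-frequency terms and collapsing synonyms into one category) can be sketched as follows. The responses and the synonym map here are hypothetical illustrations, not the study’s actual Chinese-language data:

```python
from collections import Counter

# Hypothetical free-text answers (translated); the synonym map mirrors the
# "merge similar terms" step used in the content analysis
responses = [
    "AI is a convenient and efficient tool, but we must verify its accuracy",
    "content should be accurate and professional; do not depend on AI",
    "we need more training and courses before using AI tools",
]
synonyms = {"accurate": "accuracy", "verify": "accuracy"}  # merge similar terms

counts = Counter()
for text in responses:
    for word in text.replace(",", " ").replace(";", " ").split():
        word = word.lower().strip(".")
        counts[synonyms.get(word, word)] += 1   # map synonyms onto one key

print(counts["accuracy"])  # 3: "accuracy" + "accurate" + "verify" merged
```

For Chinese responses, a segmentation step (e.g., a word-segmentation library) would precede the counting; the merging logic is otherwise the same.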
Comparative analysis revealed a clear divergence in attitude orientation between the two groups. The group without AI creation experience was more cautious and conservative, with statements such as “AI should be banned to preserve originality”. In contrast, the experienced group was more supportive and rational, noting that “AI is a very convenient and efficient tool, but it still requires further learning and improvement”. Both groups highly valued information accuracy and authenticity: nonusers expressed concerns about “ensuring authenticity”, whereas the experienced group drew on practical experience with output inspection, emphasizing that “we still need to check the AI-generated content ourselves for any issues”. With respect to training needs, the inexperienced group advocated more structured guidance, stating, “We hope to see more training sessions on relevant AI technologies”, whereas the experienced group emphasized self-improvement and technical practice, such as “master the method of communicating with AI”. Finally, both groups agreed that AI should serve as an assistant tool: the inexperienced group cautioned that “AI should not be expected to completely replace the creative process”, whereas the experienced group emphasized that “AI can only be used as assistance”.
Discussion
Principal findings
This study applied an extended UTAUT model to investigate the factors associated with medical students’ experience of using AI tools to create health education materials. Among the 691 respondents, 45.4% had used AI tools for this purpose, indicating a relatively high practical application rate among medical students in Chongqing, China. This rate is higher than the actual AI tool usage rate (17.8%) reported among Chinese nurses [27] but broadly consistent with a cross-sectional study in which 54.3% of medical students reported prior experience using medical AI [26]. An increasing number of studies indicate that AI tools have great potential in patient education for various diseases, such as prostate cancer [12], colorectal cancer [28], and breast cancer [29]. Unlike traditional manual updates, AI tools can provide the latest information, make professional medical terms easy to understand, and enhance patient engagement [30]; they may also be used to rapidly generate lay research summaries, leaflets, and other patient education materials [9, 31]. Related studies have shown that AI technology significantly improves the accuracy and readability of health education materials, which may be an important reason for medical students to use AI tools [11, 12, 22, 30, 32, 33].
Our findings indicate that a lower educational level, a clinical medicine major, and the use of paid AI tools are associated with using AI tools to create health education materials. The negative association with educational level suggests that advanced students may be more skeptical of AI-generated content: their rigorous research training fosters critical appraisal skills and greater caution, and they may perceive AI tools as lacking the academic nuance and rigor they require, which could explain the lower adoption rates among graduate students. In contrast, undergraduates may prioritize efficiency over critical evaluation [16, 34]. Students majoring in clinical disciplines may be better positioned to recognize the value of AI tools in patient health education [9, 35]. Paid AI tools were correlated with higher usage rates, likely because they offer superior functions and are more likely to produce satisfactory health education materials [36].
Our results showed that the adoption of AI tools for creating health education materials was driven mainly by SI and FC in Model 3 [37], whereas the core perceptions of PE and EE were not significant predictors [38, 39]. This pattern can be understood through the lens of Rogers’ diffusion of innovations theory [40]: in the early stages of technology adoption within a social system, potential users are influenced more by the opinions of their peers and superiors (SI) and by the availability of institutional support (FC) than by their own assessments of the tool’s utility or ease of use. Specifically, the significant role of SI indicates that teachers and classmates may play a crucial role in motivating students to integrate AI tools into their health education practices [41]. When schools offer free AI tools and training courses [42], students are more likely to adopt these tools [16, 37–39]. These findings suggest that the relevance of UTAUT constructs depends on contextual factors such as the adoption stage and social pressure.
We found that content-perception variables did not significantly predict usage experience. This may be because judgments of content quality are formed after initial use, whereas our cross-sectional design captured intentions prior to adoption. According to the elaboration likelihood model [43], novice users may rely more on peripheral cues—such as SI and FC—than on systematic evaluations of content quality. The strong predictive power of SI and FC in our model supports this interpretation. Methodologically, the use of single-item measures for complex constructs, although conceptually valid, may have limited sensitivity and reliability, potentially obscuring true effects.
With respect to the open-ended question, students who had used AI tools were more inclined to offer rational support [44] and proposed more specific technical suggestions and usage strategies, whereas those who had not paid more attention to basic training and usage risks [44]. Both groups believed that AI is an assisting tool in the creation of health education materials [45], but the experienced group placed more emphasis on the collaborative relationship between humans and AI [46] and worried about losing empathy because of overreliance on technology [47].
Limitations
This study has several limitations. First, the generalizability of our findings is limited by the convenience sampling strategy and recruitment from a single institution; future research should use multicenter, randomized sampling to improve representativeness. Second, the self-developed scale requires further validation. The internal consistency of key constructs such as PE and FC was limited, underscoring the need for future work to add items and provide robust evidence of construct validity through techniques such as confirmatory factor analysis or structural equation modeling. Third, although dual-coder consensus enhances methodological rigor, the qualitative coding process remains inherently interpretative. Fourth, our binary measure of AI tool experience lacks granularity. Finally, self-reported data are subject to social desirability bias.
Conclusions
Social influence and facilitating conditions may be more strongly associated with medical students’ experience of using AI tools than perceptions of content quality are. Enhancing facilitating conditions, social support, and targeted training may promote more effective use of AI in health education.
Supplementary Information
Supplementary Material 1: Survey questionnaire on medical students’ application of AI tools in the creation of health education materials.
Acknowledgements
We sincerely thank the counselors from all grades of the medical university for their assistance in this online questionnaire survey.
Abbreviations
- AI
Artificial intelligence
- UTAUT
Unified theory of acceptance and use of technology
- PE
Performance expectancy
- EE
Effort expectancy
- SI
Social influence
- FC
Facilitating conditions
- OR
Odds ratio
- PSCG
Perceived scientificity of content generated
- PUCG
Perceived understandability of content generated
- PCCG
Perceived creativity of content generated
- PMCG
Perceived misinformation of content generated
Authors’ contributions
Chuanfen Zheng, Yingbin Zhang, and Ji-an Chen conceptualized and designed the study. Lu Lu, Fengju Li, Ting Luo, Honghui Rong, and Ling Zhang devised the methodology and designed the questionnaire. The investigation was conducted by Zhaohan Qin, Sitian Xiong, and Jiashun Zhang. Ting Luo and Enyu Lei were accountable for the data collection and verification. Chuanfen Zheng analysed the data and drafted the main body of the manuscript. Ji-an Chen oversaw the study and provided crucial revisions to the manuscript.
Funding
This research was financially supported by the Science and Technology Communication and Popularization Project of Chongqing, China (Grant No. cstc2023kpzx) and the Science Popularization Project of Chongqing Social Sciences Planning, China (Grant No. 2025KP069).
Data availability
The datasets analysed during the present research are available from the corresponding author on reasonable request.
Declarations
Ethics approval and consent to participate
The authors state that the study was carried out in compliance with the ethical standards established in the 1964 Declaration of Helsinki and subsequent amendments. Online informed consent was obtained from all participants at the beginning of the survey.
Consent for publication
Not applicable.
Competing interests
The authors declare no competing interests.
Footnotes
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
- 1. Kulkarni PA, Singh H. Artificial intelligence in clinical diagnosis: opportunities, challenges, and hype. JAMA. 2023;330(4):317–8.
- 2. Zhang C, Xu J, Tang R, Yang J, Wang W, Yu X, Shi S. Novel research and future prospects of artificial intelligence in cancer diagnosis and treatment. J Hematol Oncol. 2023;16(1):114.
- 3. Nutbeam D, Milat AJ. Artificial intelligence and public health: prospects, hype and challenges. Public Health Res Pract. 2025;35:PU24001.
- 4. Aggarwal A, Tam CC, Wu D, Li X, Qiao S. Artificial intelligence-based chatbots for promoting health behavioral changes: systematic review. J Med Internet Res. 2023;25:e40789.
- 5. Cheng M, Zhang Q, Liang H, Wang Y, Qin J, Gong L, Wang S, Li L, Xiao X. Comparison of artificial intelligence-generated and physician-generated patient education materials on early diabetic kidney disease. Front Endocrinol (Lausanne). 2025;16:1559265.
- 6. Pan A, Musheyev D, Bockelman D, Loeb S, Kabarriti AE. Assessment of artificial intelligence chatbot responses to top searched queries about cancer. JAMA Oncol. 2023;9(10):1437–40.
- 7. Gupta N, Khatri K, Malik Y, Lakhani A, Kanwal A, Aggarwal S, Dahuja A. Exploring prospects, hurdles, and road ahead for generative artificial intelligence in orthopedic education and training. BMC Med Educ. 2024;24(1):1544.
- 8. Zaretsky J, Kim JM, Baskharoun S, Zhao Y, Austrian J, Aphinyanaphongs Y, Gupta R, Blecker SB, Feldman J. Generative artificial intelligence to transform inpatient discharge summaries to patient-friendly language and format. JAMA Netw Open. 2024;7(3):e240357.
- 9. Paluszek O, Loeb S. Artificial intelligence and patient education. Curr Opin Urol. 2025;35(3):219–23.
- 10. Pradhan F, Fiedler A, Samson K, Olivera-Martinez M, Manatsathit W, Peeraphatdit T. Artificial intelligence compared with human-derived patient educational materials on cirrhosis. Hepatol Commun. 2024;8(3):e0367.
- 11. Kirchner GJ, Kim RY, Weddle JB, Bible JE. Can artificial intelligence improve the readability of patient education materials? Clin Orthop Relat Res. 2023;481(11):2260–7.
- 12. Gibson D, Jackson S, Shanmugasundaram R, Seth I, Siu A, Ahmadi N, Kam J, Mehan N, Thanigasalam R, Jeffery N, et al. Evaluating the efficacy of ChatGPT as a patient education tool in prostate cancer: multimetric assessment. J Med Internet Res. 2024;26:e55939.
- 13. Simms RC. Generative artificial intelligence (AI) literacy in nursing education: a crucial call to action. Nurse Educ Today. 2025;146:106544.
- 14. Lambert SI, Madi M, Sopka S, Lenes A, Stange H, Buszello CP, Stephan A. An integrative review on the acceptance of artificial intelligence among healthcare professionals in hospitals. NPJ Digit Med. 2023;6(1):111.
- 15. Boscardin CK, Gin B, Golde PB, Hauer KE. ChatGPT and generative artificial intelligence for medical education: potential impact and opportunity. Acad Med. 2024;99(1):22–7.
- 16. Sami A, Tanveer F, Sajwani K, Kiran N, Javed MA, Ozsahin DU, Muhammad K, Waheed Y. Medical students’ attitudes toward AI in education: perception, effectiveness, and its credibility. BMC Med Educ. 2025;25(1):82.
- 17. Duan S, Liu C, Rong T, Zhao Y, Liu B. Integrating AI in medical education: a comprehensive study of medical students’ attitudes, concerns, and behavioral intentions. BMC Med Educ. 2025;25(1):599.
- 18. Breitwieser M, Zirknitzer S, Poslusny K, Freude T, Scholsching J, Bodenschatz K, Wagner A, Hergan K, Schaffert M, Metzger R, et al. AI in fracture detection: a cross-disciplinary analysis of physician acceptance using the UTAUT model. Diagnostics (Basel). 2025;15(16):2117.
- 19. Chen CC, Wu J, Crandall RE. Obstacles to the adoption of radio frequency identification technology in the emergency rooms of hospitals. Int J Electron Healthc. 2007;3(2):193–207.
- 20. Dwivedi YK, Rana NP, Tamilmani K, Raman R. A meta-analysis based modified unified theory of acceptance and use of technology (meta-UTAUT): a review of emerging literature. Curr Opin Psychol. 2020;36:13–8.
- 21. Zhou M, Pan Y, Zhang Y, Song X, Zhou Y. Evaluating AI-generated patient education materials for spinal surgeries: comparative analysis of readability and DISCERN quality across ChatGPT and DeepSeek models. Int J Med Inform. 2025;198:105871.
- 22. Dağcı M, Dost A, Çam F. Potential impacts of AI-generated videos on nursing care. Nurse Educ. 2025;50(5):E254–8.
- 23. Rana MM, Siddiqee MS, Sakib MN, Ahamed MR. Assessing AI adoption in developing country academia: a trust and privacy-augmented UTAUT framework. Heliyon. 2024;10(18):e37569.
- 24. Venkatesh V, Morris MG, Davis GB, Davis FD. User acceptance of information technology: toward a unified view. MIS Q. 2003;27(3):425–78.
- 25. Alvi I. College students’ reception of social networking tools for learning in India: an extended UTAUT model. Smart Learn Environ. 2021;8(1):19.
- 26. Li Q, Qin Y. AI in medical education: medical student perception, curriculum recommendations and design suggestions. BMC Med Educ. 2023;23(1):852.
- 27. Dai Q, Li M, Shi S, Yang M, Wang Z, Liao J, Li Z, Liu Y, Deng J, Tao L. Structural equation modeling for influencing factors on behavioral intention to adopt medical AI among Chinese nurses: a nationwide cross-sectional study. BMC Nurs. 2025;24(1):1084.
- 28. Siu AHY, Gibson DP, Chiu C, Kwok A, Irwin M, Christie A, Koh CE, Keshava A, Reece M, Suen M, et al. ChatGPT as a patient education tool in colorectal cancer: an in-depth assessment of efficacy, quality and readability. Colorectal Dis. 2025;27(1):e17267.
- 29. Wang MJ, Rastegar A, Kung TA. Readability analysis of breast cancer resources shared on X: implications for patient education and the potential of AI. Breast Cancer Res Treat. 2025;214(2):121–30.
- 30. AlSammarraie A, Househ M. The use of large language models in generating patient education materials: a scoping review. Acta Inform Med. 2025;33(1):4–10.
- 31. Aydin S, Karabacak M, Vlachos V, Margetis K. Large language models in patient education: a scoping review of applications in medicine. Front Med (Lausanne). 2024;11:1477898.
- 32. Chen D, Parsa R, Hope A, Hannon B, Mak E, Eng L, Liu FF, Fallah-Rad N, Heesters AM, Raman S. Physician and artificial intelligence chatbot responses to cancer questions from social media. JAMA Oncol. 2024;10(7):956–60.
- 33. Dihan QA, Brown AD, Chauhan MZ, Alzein AF, Abdelnaem SE, Kelso SD, Rahal DA, Park R, Ashraf M, Azzam A, et al. Leveraging large language models to improve patient education on dry eye disease. Eye (Lond). 2025;39(6):1115–22.
- 34. Srinivasan M, Venugopal A, Venkatesan L, Kumar R. Navigating the pedagogical landscape: exploring the implications of AI and chatbots in nursing education. JMIR Nurs. 2024;7:e52105.
- 35. Tao W, Yang J, Qu X. Utilization of, perceptions on, and intention to use AI chatbots among medical students in China: national cross-sectional study. JMIR Med Educ. 2024;10:e57132.
- 36. Ding N, Chen M, Hu L. Examining the impact of big five personality traits on generation Z designers’ subscription to paid AI drawing tools using SEM and fsQCA. Sci Rep. 2025;15(1):17587.
- 37. Kwak Y, Seo YH, Ahn JW. Nursing students’ intent to use AI-based healthcare technology: path analysis using the unified theory of acceptance and use of technology. Nurse Educ Today. 2022;119:105541.
- 38. Jain R, Garg N, Khera SN. Adoption of AI-enabled tools in social development organizations in India: an extension of UTAUT model. Front Psychol. 2022;13:893691.
- 39. Grosjean J, Dufour F, Benis A, Januel JM, Staccini P, Darmoni SJ. Digital health education for the future: the SaNuRN (Santé Numérique Rouen-Nice) consortium’s journey. JMIR Med Educ. 2024;10:e53997.
- 40. Rogers EM. Diffusion of innovations. 5th ed. New York: Free Press; 2003.
- 41. Saatçi G, Korkut S, Ünsal A. The effect of the use of artificial intelligence in the preparation of patient education materials by nursing students on the understandability, actionability and quality of the material: a randomized controlled trial. Nurse Educ Pract. 2024;81:104186.
- 42. Hamedani Z, Moradi M, Kalroozi F, Manafi Anari A, Jalalifar E, Ansari A, Aski BH, Nezamzadeh M, Karim B. Evaluation of acceptance, attitude, and knowledge towards artificial intelligence and its application from the point of view of physicians and nurses: a provincial survey study in Iran: a cross-sectional descriptive-analytical study. Health Sci Rep. 2023;6(9):e1543.
- 43. Petty RE, Cacioppo JT, Schumann D. Central and peripheral routes to advertising effectiveness: the moderating role of involvement. J Consum Res. 1983;10(2):135–46.
- 44. Tangadulrat P, Sono S, Tangtrakulwanich B. Using ChatGPT for clinical practice and medical education: cross-sectional survey of medical students’ and physicians’ perceptions. JMIR Med Educ. 2023;9:e50658.
- 45. Pawelczyk J, Kraus M, Eckl L, Nehrer S, Aurich M, Izadpanah K, Siebenlist S, Rupp MC. Attitude of aspiring orthopaedic surgeons towards artificial intelligence: a multinational cross-sectional survey study. Arch Orthop Trauma Surg. 2024;144(8):3541–52.
- 46. Korteling JEH, van de Boer-Visschedijk GC, Blankendaal RAM, Boonekamp RC, Eikelboom AR. Human- versus artificial intelligence. Front Artif Intell. 2021;4:622364.
- 47. Rincón EHH, Jimenez D, Aguilar LAC, Flórez JMP, Tapia ÁER, Peñuela CLJ. Mapping the use of artificial intelligence in medical education: a scoping review. BMC Med Educ. 2025;25(1):526.
Supplementary Materials
Supplementary Material 1: Survey questionnaire on medical students’ application of AI tools in the creation of health education materials.
