BMC Medical Education. 2026 Mar 30;26:737. doi: 10.1186/s12909-026-09067-0

AI-assisted academic writing in medical postgraduate education: a cross-sectional study of L2 challenges, affective barriers, and instructional implications

Yinpeng Ren 1,#, Ran Gao 2,#, Shuguang Zhang 1, Hua Kang 1
PMCID: PMC13154843  PMID: 41913183

Abstract

Second language (L2) academic writing has become increasingly critical for postgraduate students in non-English speaking contexts, particularly in specialized disciplines such as medicine. However, medical postgraduates often struggle with linguistic challenges, emotional barriers, and limited instructional support. With the rise of AI-based writing tools such as ChatGPT and Grammarly, new opportunities have emerged for supporting these learners—but concerns remain regarding cognitive dependence, ethical boundaries, and pedagogical integration. This study investigates the L2 academic writing experiences of 304 medical postgraduates in China, focusing on writing difficulties, emotional responses, feedback from supervisors, and the use of AI-assisted tools. Using a cross-sectional survey design, we analyzed students’ self-reported abilities, affective states, and writing behaviors. Results revealed that while most students had prior experience in English writing, they reported persistent challenges with discourse organization, tone control, and academic style. Writing anxiety, procrastination, and lack of emotional regulation were common. Supervisor feedback was seen as valuable but inconsistently delivered. AI tools such as ChatGPT and Grammarly were widely used for grammar correction and polishing, and generally perceived as helpful, though concerns about over-reliance emerged.

Findings highlight the need for a comprehensive pedagogical framework that integrates L2 writing instruction, affective support, and ethical use of AI to empower domain-specific postgraduate learners.

Supplementary Information

The online version contains supplementary material available at 10.1186/s12909-026-09067-0.

Keywords: Second language writing, Medical education, Writing anxiety, AI writing tools

Introduction

English academic writing has become a critical skill for postgraduate students worldwide, particularly in non-English-speaking countries where scientific communication increasingly demands English proficiency [1]. In the field of medical education, this requirement is especially pronounced. The ability to publish in high-impact, English-language journals is not only an academic expectation but also a key determinant of scholarly advancement, professional recognition, and successful participation in the global medical research community [2, 3]. For medical postgraduates in China, English writing is often a graduation requirement and a core component of their research training.

Despite its importance, English academic writing remains a major challenge for many medical postgraduates. Prior studies have highlighted persistent issues such as inadequate lexical resources, syntactic errors, and difficulties in adopting appropriate academic tone and genre conventions [4, 5]. Beyond linguistic and genre-related challenges, students frequently struggle with writing anxiety, procrastination, and low self-efficacy—especially amid clinical rotations and intense research demands [6, 7]. In response, many students are turning to AI-assisted writing tools—such as ChatGPT, DeepSeek, and Grammarly—to enhance language clarity and reduce writing stress. These technologies offer promising solutions to surface-level language issues and are increasingly integrated into the writing practices of second language (L2) learners [8, 9]. While AI tools may help reduce grammatical errors and improve fluency, their role in shaping students’ writing confidence, emotional engagement, and autonomy remains underexplored [10]. Especially in the context of high-stakes scientific communication, questions arise about ethical use, cognitive dependence, and the impact of AI on academic integrity [11].

Importantly, existing research has rarely examined writing difficulties, emotional factors, and AI strategies within a single integrative framework. Most studies isolate either the cognitive-linguistic challenges or the affective responses to writing, without addressing how these may interact with emerging technological aids. Given the increasing reliance on AI tools by L2 medical postgraduates and the lack of pedagogical infrastructure guiding their use, it is imperative to understand not only the patterns of AI adoption but also the emotional and instructional contexts in which these tools are embedded.

Therefore, this study aims to characterize academic English writing experiences among Chinese medical postgraduates using an integrative framework that links cognitive-linguistic challenges, affective responses, supervisory support, and AI-assisted writing practices. Specifically, we addressed the following research questions (RQ1–RQ5, Table S1). By linking pedagogical, emotional, and technological domains, this study seeks to provide empirical evidence for designing more effective academic writing support systems in medical education.

Method

Research design and ethical approval

This was a cross-sectional, questionnaire-based study designed to investigate the academic English writing experiences of Chinese medical postgraduate students. The study explored students’ self-perceived writing abilities, affective responses during the writing process, supervisory feedback patterns, and the use of AI-assisted writing tools. All participants provided informed electronic consent prior to completing the questionnaire. Participation was anonymous and voluntary, and no identifying information (such as names or IP addresses) was collected. The research adhered to the Declaration of Helsinki; clinical trial registration was not applicable. All study protocols were reviewed and approved by the Ethics Committee of Xuanwu Hospital, Capital Medical University (IRB No. KS2024040).

Participants and sampling

The participants in this study were current medical postgraduate students enrolled at Xuanwu Hospital of Capital Medical University and Peking University People’s Hospital. The cohort encompassed a range of professional disciplines, including surgery, internal medicine, medical laboratory science, and medical imaging. Participants were recruited using a convenience sampling strategy, with deliberate attention paid to ensuring diversity in terms of academic year, gender, and educational track. A minimum of 350 questionnaires were planned for distribution, with a target of collecting no fewer than 300 valid responses to ensure adequate statistical power.

Instruments

Data were collected via a self-developed structured questionnaire designed by the research team, based on a review of the L2 writing and AI-assisted writing literature and consultations with three experts in medical English instruction. The questionnaire comprised five sections: (1) Basic information: gender, age, educational stage, professional direction, and English test scores. (2) English writing experience and motivation: prior participation in English paper writing, SCI publication history, intention to submit to international journals in the future, frequency of writing engagement, and common writing task types. (3) Writing difficulties and emotional experience: five-point Likert items assessing affective dimensions such as writing anxiety, self-efficacy, and sense of achievement, together with ranking and multiple-choice items probing the main difficulties in writing. (4) Supervisor feedback and writing support: frequency of writing guidance received, feedback modalities, satisfaction with feedback content, and emotional responses triggered by the feedback process. (5) AI tool usage: types of AI tools used, frequency and purposes of use, attitudes towards use, and willingness to participate in AI writing courses.

A pilot study with 10 students was conducted to ensure clarity and usability of the questionnaire items, and minor wording adjustments were made based on their feedback.

Data collection

Data were collected online using a secure web-based survey platform. Participation was anonymous and voluntary. An electronic informed consent form was included at the beginning of the questionnaire. Survey links were distributed via graduate program groups and course communication channels over a four-week period in March–April 2025.

Data analysis

Quantitative data were analyzed using SPSS 26.0. Descriptive statistics (means, standard deviations, frequencies) were used to summarize key variables. To examine group differences (e.g., by writing experience or AI usage frequency), independent-samples t-tests and one-way ANOVA were conducted. If normality assumptions were violated, non-parametric tests such as the Mann–Whitney U test or Kruskal–Wallis H test were used. Open-ended responses were reviewed to support and contextualize quantitative findings.
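The decision rule described above—parametric tests when normality holds, non-parametric alternatives otherwise—can be sketched in Python with SciPy. This is a minimal illustration of the logic, not the authors' actual SPSS workflow; the group labels and Likert-style data below are hypothetical.

```python
# Sketch of the group-comparison logic: a Shapiro-Wilk normality check
# decides between parametric and non-parametric tests. Illustrative only;
# the example data are hypothetical 5-point Likert scores.
import numpy as np
from scipy import stats


def compare_two_groups(a, b, alpha=0.05):
    """Independent-samples t-test if both groups pass a normality check,
    otherwise the Mann-Whitney U test. Returns (test name, p-value)."""
    normal = (stats.shapiro(a).pvalue > alpha) and (stats.shapiro(b).pvalue > alpha)
    if normal:
        name, res = "t-test", stats.ttest_ind(a, b)
    else:
        name, res = "Mann-Whitney U", stats.mannwhitneyu(a, b)
    return name, res.pvalue


def compare_many_groups(groups, alpha=0.05):
    """One-way ANOVA if every group passes the normality check,
    otherwise the Kruskal-Wallis H test. Returns (test name, p-value)."""
    normal = all(stats.shapiro(g).pvalue > alpha for g in groups)
    if normal:
        name, res = "ANOVA", stats.f_oneway(*groups)
    else:
        name, res = "Kruskal-Wallis H", stats.kruskal(*groups)
    return name, res.pvalue


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical writing-anxiety scores for frequent vs. rare AI users
    frequent = rng.integers(1, 6, size=150).astype(float)
    rare = rng.integers(2, 6, size=120).astype(float)
    print(compare_two_groups(frequent, rare))
```

In practice Likert items rarely pass a normality test, so the non-parametric branch will usually be selected for this kind of survey data.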

Results

Participant characteristics

A total of 304 valid responses were collected from medical master’s students at two teaching hospitals in Beijing. The sample was 55.6% female and had a mean age of 24.93 years (SD = 0.98). Participants were mainly from internal medicine (44.1%) and surgery (39.1%), and 60.2% were in the second or third year of training. All respondents had passed CET-6, with 79.6% scoring above 500, indicating an upper-intermediate to advanced level of general English proficiency (Fig. 1).

Fig. 1.

Fig. 1

Participant characteristics (N = 304). (A) Gender; (B) Discipline; (C) Training year; (D) CET-6 score distribution

Academic English writing experience

All participants reported prior experience in academic English writing, and 54.6% had authored or co-authored SCI-indexed English papers. Formal university coursework was the most common source of L2 writing instruction, whereas 22.0% reported being self-taught (Fig. 2A). Students most frequently engaged in initial manuscript drafting and commonly contributed to cover letters, response-to-reviewer letters, language polishing, and reference formatting (Fig. 2B). Notably, AI writing tools were reported more often than reading academic literature as a means of acquiring writing knowledge (Fig. 2C).

Fig. 2.

Fig. 2

Academic English writing engagement and learning resources. (A) Sources of L2 writing instruction; (B) Writing task involvement; (C) Writing knowledge acquisition channels

Writing difficulties and emotional experiences

The primary barrier to completing English manuscripts was insufficient training/experience in L2 academic writing (n = 248, 81.6%), followed by limited time due to academic or clinical workload (n = 181, 59.5%). Regarding supervisory revisions, 241 (79.3%) reported gaining useful knowledge, whereas 34 (11.2%) reported reduced confidence and 14 (4.6%) reported anxiety/frustration due to excessive feedback. During writing, procrastination was most common (n = 127, 41.8%), followed by anxiety affecting sleep (n = 98, 32.2%); 32 (10.5%) reported avoiding English writing due to distress or low confidence. Only 29 (9.5%) reported actively regulating writing-related emotions, while 226 (74.3%) reported no such efforts. On a 5-point difficulty scale, the most challenging skill was adopting appropriate academic tone/style (M = 3.80, SD = 0.74), followed by expressing complex ideas (M = 3.53, SD = 0.71); correct citation was least difficult (M = 2.16, SD = 0.85).

Feedback from supervisors and writing support situation

Only 28 (9.2%) reported receiving frequent supervisory guidance (> 10 times/month). Most received feedback occasionally (185, 60.9% reported 5–10 times/month), while 63 (20.7%) reported 2–4 times/month and 28 (9.2%) reported < 2 times/month. Handwritten comments (61.5%) and face-to-face discussions (49.0%) were the most common feedback modalities, followed by email (30.3%). Overall, feedback was viewed positively: 274 (90.1%) described it as helpful, although 30 (9.9%) reported increased pressure due to strict or excessive corrections. Students most strongly desired submission-related guidance (n = 278, 91.4%), access to model texts/templates (n = 230, 75.6%), and explanations of writing logic/structure (n = 210, 69.1%).

Usage and attitude towards AI writing tools

AI-assisted writing tools were widely adopted. The most frequently used platforms were DeepSeek (n = 213, 70.1%), ChatGPT (n = 185, 60.9%), and Grammarly (n = 119, 39.1%); 60 (19.7%) reported using other tools. Common uses included language polishing (n = 214, 70.4%), grammar/lexical correction (n = 179, 58.9%), translation from Chinese to English (n = 149, 49.0%), and generating summaries or sentence templates (n = 158, 52.0%). Less common uses were outline generation (n = 92, 30.3%) and drafting responses to reviewers (n = 57, 18.8%). Interest in structured AI writing training was high: 251 (82.6%) wanted systematic training, and 166 (54.6%) expressed concerns regarding academic integrity/ethical boundaries; 138 (45.4%) wished to learn how to adapt AI use to different journal styles.

Discussion

Educational implications for medical postgraduate training

This study indicates that many medical postgraduates have prior exposure to academic English writing. All respondents passed CET-6, and more than half reported SCI English publication experience. However, self-assessments and reported difficulties suggest persistent challenges in higher-order academic writing [12]. The most difficult areas were controlling academic tone/style and accurately expressing ideas in English. These difficulties were particularly salient for advanced sections (e.g., Discussion, Abstract, and response-to-reviewer letters), reflecting gaps beyond grammar, such as discourse construction and academic logic.

Writing difficulties were shaped not only by language skills but also by workload, time constraints, limited experience, and negative emotions [13, 14]. Many students reported difficulty sustaining attention on writing amid clinical and research demands. Negative affect such as anxiety, procrastination, insomnia, and frustration was common, yet only a small proportion reported actively regulating these emotions. This pattern suggests a lack of affective support and structured coping strategies in postgraduate training. The growing adoption of AI tools may further complicate this landscape by providing short-term relief while introducing potential over-reliance.

The dual role of supervisors’ writing guidance: support and pressure coexist

This study further reveals that mentor guidance plays an indispensable dual role in the development of students’ writing [15]. Most students perceived supervisory feedback as helpful and reported learning language and structural strategies from it. Nevertheless, some students experienced reduced confidence or heightened pressure when facing extensive revisions. This tension may reflect feedback that focuses on correction but provides limited emotional scaffolding or strategy-oriented guidance. Students also expressed a clear demand for process-oriented support, such as submission experience sharing, writing templates, and explanations of structural logic. This indicates a gap between “language-level feedback” and broader writing mentorship needs.

Supervisory feedback is therefore not only a key channel for improving students’ language ability but also a potential influence on their writing emotions and motivation. Future mentoring systems should pay closer attention to the affective impact of feedback language and to the construction of personalized support strategies [16].

The dual role of AI-assisted writing: support and ethical challenges

Our findings reveal that AI writing tools are now central to the writing strategies of many medical postgraduates, serving both as cognitive scaffolds and emotional buffers. Consistent with the rapid development of artificial intelligence technology, medical postgraduates in this study had widely encountered and actively used a variety of AI writing tools, especially ChatGPT, DeepSeek, and Grammarly [11]. Students mainly used these tools for language polishing, grammar correction, sentence-pattern optimization, and summary generation, indicating that they treat AI as an effective means of compensating for language deficiencies and improving efficiency.

Students generally perceived AI tools as useful for improving efficiency and polishing language. At the same time, they expressed concerns about reduced autonomy and potential dependence. High interest in formal AI writing training suggests that students lack clear operational guidance and ethical boundaries for appropriate use. Together, these findings highlight the need to position AI as a scaffold within pedagogy rather than a substitute for developing academic writing competence. Future writing courses should therefore incorporate a module on the effective and compliant use of AI tools [17, 18]: alongside teaching students to improve the efficiency of their language expression, such a module should help them establish sound writing ethics and sustainable paths for skill development, avoiding excessive reliance on AI.

Limitations

This study has several limitations. First, participants were recruited via convenience sampling from two teaching hospitals, which may limit generalizability. Second, the data were self-reported and may be subject to recall and social desirability bias. Third, the cross-sectional design does not allow causal inference regarding relationships among writing challenges, affective factors, supervisory support, and AI tool use. Future studies should incorporate objective writing-performance measures and longitudinal designs to evaluate how AI use relates to competence development and well-being.

Conclusion

This cross-sectional survey mapped Chinese medical postgraduates’ L2 academic English writing experiences across writing challenges, affective barriers, supervisory support, and AI-assisted writing practices. Despite prior exposure to English writing, many students reported persistent difficulties in higher-order academic discourse, alongside common negative affect such as anxiety and procrastination. AI tools were widely adopted for language polishing and efficiency, yet concerns regarding over-reliance and academic integrity were also evident. These findings highlight the need for an integrated support framework that combines discipline-specific writing instruction, affective scaffolding, and structured guidance on the ethical and effective use of AI tools. Future research should incorporate objective writing-performance measures and longitudinal designs to clarify how AI use relates to competence development and well-being.

Supplementary information

Supplementary Material 1. (27.3KB, docx)
Supplementary Material 3. (15.1KB, docx)

Acknowledgements

We sincerely thank all the medical postgraduate students who participated in this research and wish them every success in their future careers.

Authors' contributions

YR and RG designed and conducted this survey research and mainly wrote this manuscript. YR and SZ carried out the data analysis. HK supervised the overall research and improved the manuscript.

Funding

This work was supported by the Clinical Project Fund of Xuanwu Hospital, Capital Medical University (NO.LCYJ202313).

Data availability

We are willing to share the data upon reasonable request and are prepared to work with any interested researchers on re-analysis of the data, particularly for systematic reviews using participant-level data. Please contact Dr. Ren at kakaryp@163.com for approval if necessary.

Declarations

Ethics approval and consent to participate

All study protocols were reviewed and approved by the Ethics Committee of Xuanwu Hospital, Capital Medical University (IRB No. KS2024040). The study was conducted in accordance with the Declaration of Helsinki. Informed electronic consent to participate was obtained from all participants prior to completing the online questionnaire. AI-based language tools were used during the drafting stage to assist with phrasing in English. All AI outputs were reviewed and validated by the authors, and no AI tool was involved in data analysis or interpretation.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Footnotes

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Yinpeng Ren and Ran Gao contributed equally to this work.

References

  • 1. Reynolds BL, Zhang X. Medical school students’ preferences for and perceptions of teacher written corrective feedback on English as a second language academic writing: an intrinsic case study. Behav Sci (Basel). 2022;13(1):13. [DOI] [PMC free article] [PubMed]
  • 2. Li Y, Casanave CP. Two first-year students’ strategies for writing from sources: patchwriting or plagiarism? J Second Lang Writing. 2012;21(2):165–80.
  • 3. Flowerdew J. Scholarly writers who use English as an additional language: what can Goffman’s “Stigma” tell us? J Engl Acad Purposes. 2008;7(2):77–86.
  • 4. Rasool U, et al. Pre-service EFL teachers’ perceptions of foreign language writing anxiety and some associated factors. Heliyon. 2023;9(2):e13405. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5. Sa’adah N, Ali F. Writing anxiety in English academic writing: a case study of EFL students’ perspectives. ETERNAL (English, Teaching, Learning, and Research Journal). 2022;8(1):18–33.
  • 6. Wu C, Zhang YW, Li AW. Peer feedback and Chinese medical students’ English academic writing development: a longitudinal intervention study. BMC Med Educ. 2023;23(1):578. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 7. Hyland K, Hyland F. Feedback on second language students’ writing. Lang Teach. 2006;39(2):83–101. [Google Scholar]
  • 8. Song C, Song Y. Enhancing academic writing skills and motivation: assessing the efficacy of ChatGPT in AI-assisted language learning for EFL students. Front Psychol. 2023;14:1260843. [DOI] [PMC free article] [PubMed]
  • 9. Kim J, et al. Exploring students’ perspectives on generative AI-assisted academic writing. Educ Inf Technol. 2025;30(1):1265–300.
  • 10. Baskara F. Integrating ChatGPT into EFL writing instruction: benefits and challenges. Int J Educ Learn. 2023;5(1):44–55.
  • 11. Khojasteh L, Kafipour R, Pakdel F, Mukundan J. Empowering medical students with AI writing co-pilots: design and validation of AI self-assessment toolkit. BMC Med Educ. 2025;25(1):159. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12. Slomp DH. Challenges in assessing the development of writing ability: theories, constructs and methods. Assessing Writing. 2012;17(2):81–91.
  • 13. Rizwan M, Naas A. Factors affecting undergraduates’ difficulties in writing thesis. Int J Res Publication Rev. 2022;2582:7421.
  • 14. Jani JS, Mellinger MS. Beyond writing to learn: factors influencing students’ writing outcomes. J Soc Work Educ. 2015;51(1):136–52.
  • 15. Zhao J, Zheng X. Mentorship and academic writing: experiences of L2 English teachers in Chinese universities. Asia-Pacific Educ Res. 2025:1–10.
  • 16. Dikilitaş K, Mumford SE. Supporting the writing up of teacher research: peer and mentor roles. ELT J. 2016;70(4):371–81. [Google Scholar]
  • 17. Yun G, Lee KM, Choi HH. Empowering student learning through artificial intelligence: a bibliometric analysis. J Educ Comput Res. 2025;62(8):2042–75. [Google Scholar]
  • 18. Alaa A, et al. Empowering future physicians: design, implementation, and evaluation of an artificial intelligence course for undergraduate medical students. J Health Prof Educ Innov. 2024;1(4):17–29.


