Abstract
Background
Clinical rotations are fundamental for medical students, but Chinese medical schools currently lack locally designed and valid instruments for comprehensive competency evaluation. The purpose of this study was to establish and internally validate a tool for assessing the competency of undergraduate medical students and compare the scores before and after clinical rotations to identify existing problems and improve teaching methods.
Methods
Combining theoretical analysis and empirical grounding, we developed a five-scale competency assessment questionnaire. Senior medical students completed the questionnaire for pilot testing. Exploratory factor analysis was employed to assess construct validity, and Cronbach’s alpha and Pearson correlation analysis were used for reliability testing. Descriptive statistics were used to summarize the current status assessments of competency. Independent samples t-tests were conducted to compare scores between pre- and post-clinical-rotation groups, genders, and program types.
Results
The final instrument comprised five dimensions: Humane accomplishment, Knowledge acquisition, Health systems science, Clinical practice skills, and Clinical fundamentals. Reliability analysis reported an overall Cronbach’s alpha > 0.92. Current status analysis identified humane accomplishment and health systems science as two competency deficiencies during clinical rotations.
Conclusions
The resulting competency assessment questionnaire is a trustworthy instrument. Humane accomplishment and health systems science were identified as two competency deficiencies in undergraduate medical education. Targeted teaching reforms are needed to strengthen these competencies.
Supplementary Information
The online version contains supplementary material available at 10.1186/s12909-025-07735-1.
Keywords: Medical education research, Competency-based learning, Clinical education
Background
To adapt to new changes in societal health demands, the global medical education paradigm is gradually shifting from a knowledge-based, process-oriented model to a competency-based approach. In 2010, The Lancet published a report on the outlook of medical education in the 21st century [1], which advocated that medical schools embark on competency-based reforms in the cultivation of medical talent. Competency is a relatively young term in pedagogy and has been a topic of debate since the 1950s [2]. Weinert [3] defined competency as the set of cognitive abilities and skills that individuals can learn in order to solve problems. Currently, there is no globally unified standard for the competency of clinical physicians; instead, each country is developing competency models tailored to its national context. The Accreditation Council for Graduate Medical Education (ACGME) in the United States constructed a six-dimensional competency model for all professional physicians: Patient Care, Medical Knowledge, Systems-based Practice, Practice-based Learning and Improvement, Professionalism, and Interpersonal Communication Skills [4]. The CanMEDS physician competency framework was proposed by the Royal College of Physicians and Surgeons of Canada. It defined seven roles of a physician: Medical Expert, Communicator, Collaborator, Manager, Health Advocate, Scholar, and Professional [5]. These requirements were designed to ensure that physicians develop the knowledge, skills, and attitudes required to enter practice and pursue further development.
China has gradually established a distinctive clinical medical education system, with the 5-year undergraduate program as the main body and the 8-year program as an exploratory track offered only by select medical schools that meet the Ministry of Education’s qualification criteria [6]. Under China’s 3-tier medical degree system (bachelor, master, and doctor of medicine), progression typically requires completion of the prior degree. The 8-year program, however, admits senior high school graduates directly and integrates training to award a doctor of medicine upon completion [7]. The 5-year curriculum covers basic and clinical sciences over four years, followed by clinical rotations in the fifth year, leading to a bachelor’s degree. The 8-year program comprises five years of bilingual (Chinese-English) modular basic and clinical courses, a sixth year of systematic clinical rotations, and two final years dedicated to scientific research. Upon meeting the degree requirements, students are granted both bachelor’s and doctoral degrees [8].
Clinical rotations are fundamental for medical students, but Chinese medical schools currently lack locally designed and valid instruments for comprehensive competency evaluation. In graduate medical education, residents have used validated questionnaires to evaluate their rotation workload, learning environment, and stress levels [9–12]. Other studies have developed measurement tools to assess residents’ performance in radiation oncology [13], general surgery [14], and pathology [15]. In China, clinical department rotations are an integral step in undergraduate medical students’ transition from theoretical learning to clinical practice. However, few existing research findings or specialized tools concerning clinical rotations apply to undergraduate medical students.
Therefore, we aimed to develop an effective tool to evaluate the competency of undergraduate medical students and to compare competencies before and after clinical rotations, in order to identify existing problems and improve teaching methods.
Methods
Instrument development
Step 1. Theoretical and empirical grounding
To identify core domains in this area, we undertook a literature review of domestic and international medical education standards and competency criteria, together with focus group discussions (FGDs) and individual interviews with medical students and teaching faculty. Combining modern competency theories with classical methods of competency model construction, we used text analysis to extract key terms and summarize them into key assumptions. We conducted six individual interviews and three semi-structured FGDs to ensure content validity. Participation in the interviews was voluntary and anonymous. Each focus group consisted of three medical school faculty members with extensive teaching experience, and face-to-face interviews were conducted with six medical students (two third-year, two fourth-year, two fifth-year). Our main purpose was to compare, through interview text analysis, the competency differences between high performers and ordinary medical students (“high performers” were defined as students ranked in the top 10% of their grade on the comprehensive competency assessment score for the current academic year), so as to support the construction of a competency model for medical graduates. In the FGDs, we therefore asked participants to discuss the differences between excellent and ordinary clinical medicine graduates. In the individual interviews, students were asked to describe in detail three successful events and three failure events in their work. We asked respondents to recall specific incidents in their undergraduate medical education, including academic learning, clinical clerkships, and doctor-patient communication. Specifically, we asked about: three successful events in which you overcame challenges through personal effort, achieving or exceeding expected standards in academic or clinical settings; and three failure events in which you encountered setbacks that prevented you from meeting goals in similar contexts.
After the interviews, the recordings were transcribed and the text data were subjected to qualitative content analysis. The behaviors and competency characteristics of the interviewees in the described events were then extracted, and the interview results of the excellent and ordinary groups were compared and statistically analyzed. The major presumptions drawn from the theoretical findings were substantially supported by the themes extracted from the qualitative data. Researchers MZ Yang and C Yuan operationalized the results into measurable elements. The final model was characterized by five sub-themes: Humane accomplishment, Knowledge acquisition, Health systems science, Clinical practice skills, and Clinical fundamentals.
Step 2. Item development
Competency assessment tools previously developed for medical education [4, 5, 16, 17] informed the 32 initial items we formulated. Participants from the earlier qualitative analysis stage assessed the items’ validity to ensure they accurately reflected the earlier discussions. They rated the importance of each item on a five-point Likert scale and discussed whether items should be removed or restated. Student feedback led to the removal of overlapping items and minor modification of headings in the questionnaire. The final 24-item questionnaire used a five-point Likert response scale ranging from “very unimportant” to “very important” (Importance Assessment) or from “performance deficient” to “performance excellent” (Current Status Assessment). The middle position was labeled neutral [18].
Step 3. Internal validation
The final online questionnaire comprises two parts: part one collects demographic characteristics (grade, major, and gender of the respondents); part two is the 24-item competency assessment questionnaire (CAQ). In the questionnaire instructions, we informed participants of the study objectives and procedures, emphasizing the voluntary nature of participation and the confidentiality of responses. All responses were anonymous and contained no identifiable information. The questionnaire was distributed to senior medical students at our medical school. All questionnaires were issued and collected via Wenjuanxing, a popular online survey platform in China. Researchers MZ Yang and C Yuan sent the link to the official online groups of the target grades.
Quality control
We implemented IP address restrictions in the online questionnaire to prevent duplicate submissions from the same participant, and a duplicate question was included to test the validity of responses. We established five criteria to identify careless responding [19]: (1) response time under 1 minute; (2) deviation of more than 1 point between the importance or current status ratings of the duplicate questions; (3) strings of invariant responses of length ≥ 10; (4) a systematic pattern across the entire questionnaire, such as 1, 2, 3, 1, 2, 3; and (5) complete consistency between the importance and current status assessments. Responses meeting any of these criteria were considered invalid. Participants involved in item development and model validation came from the same cohort; to minimize response bias, students who contributed to item development did not take part in the subsequent questionnaire survey.
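As an illustration of how such a screen can be automated, the sketch below implements three of the five criteria in plain Python. The function and parameter names are ours, not part of the study, and a response is represented here simply as its completion time, its list of Likert answers, and the pair of answers to the duplicate question.

```python
def longest_invariant_run(answers):
    """Length of the longest run of identical consecutive Likert answers."""
    best = run = 1
    for prev, cur in zip(answers, answers[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

def is_careless(response_time_s, answers, duplicate_pair):
    """Flag a response under three of the study's five criteria:
    completion in under 1 minute, a duplicate-question deviation of
    more than 1 point, or an invariant string of 10+ responses."""
    if response_time_s < 60:
        return True
    if abs(duplicate_pair[0] - duplicate_pair[1]) > 1:
        return True
    return longest_invariant_run(answers) >= 10
```

The remaining two criteria (systematic patterns such as 1, 2, 3, 1, 2, 3 and complete agreement between the importance and current status answers) would require comparing full answer vectors and are omitted here for brevity.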
Statistical analysis
IBM SPSS Statistics V27 was used for descriptive and inferential statistics. Exploratory factor analysis (EFA) with principal component analysis (PCA) and varimax rotation was employed to analyze the results of the Importance Assessment, exploring relationships among items and assessing the construct validity of the questionnaire [20]. The threshold for factor loading selection in the PCA was set at 0.5. Reliability was determined using Cronbach’s alpha and Pearson correlation analysis [20]. Descriptive statistics were used to present the results of the Current Status Assessment. Students who enrolled in the 5-year and 8-year programs in 2018, and those who enrolled in the 5-year program in 2019, were categorized as post-clinical rotation participants; all others were classified as pre-clinical rotation participants. Independent samples t-tests were conducted to compare scores between pre- and post-clinical-rotation groups, genders, and program types [21]. A P value below 0.05 was considered statistically significant. For the duplicate question, the answer given in the first response was treated as the final answer in the statistical analysis of that item.
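The eigenvalue-greater-than-1 (Kaiser) retention rule used in the EFA can be reproduced outside SPSS with a few lines of numpy; the sketch below is illustrative only and runs on synthetic data, not the study data.

```python
import numpy as np

def kaiser_factor_count(X):
    """Number of factors with eigenvalue > 1, computed from the
    correlation matrix of an (n_respondents x n_items) score matrix."""
    R = np.corrcoef(X, rowvar=False)     # item-by-item correlation matrix
    eigenvalues = np.linalg.eigvalsh(R)  # real eigenvalues, ascending order
    return int((eigenvalues > 1).sum())

# Synthetic responses: two latent factors, each driving three observed items.
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 2))
X = np.repeat(latent, 3, axis=1) + 0.3 * rng.normal(size=(200, 6))
print(kaiser_factor_count(X))
```

With this construction, two eigenvalues of the correlation matrix clearly exceed 1, so the Kaiser rule recovers the two planted factors.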
Results
We distributed the questionnaire link to 818 students via online groups, received 148 responses, removed 37 careless responses, and ultimately collected 111 valid responses (response rate = 13.6%). Among these 111 participants, there were 54 males and 57 females (48.6% and 51.4%, respectively). There were 84 participants from the 5-year program and 27 from the 8-year program (75.7% and 24.3%, respectively). Of these participants, 36 were in the pre-clinical rotation stage, and 75 were in the post-clinical rotation stage.
Construct validity
For results of importance assessment, the Kaiser Meyer Olkin (KMO) test and Bartlett’s test of sphericity indicated that factor analysis was appropriate for this data set (Table 1).
Table 1.
KMO and Bartlett’s test
| Test | Statistic | Value |
|---|---|---|
| Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy | | 0.881 |
| Bartlett’s test of sphericity | Approx. chi-square | 1534.969 |
| | Degrees of freedom | 276 |
| | Significance | <0.001 |
Five factors with initial eigenvalues > 1 were identified by EFA, together accounting for more than 65% of the total variance (Supplementary Material 1, Table S1). The slope and inflection point of the scree plot (Fig. 1) also supported extracting five factors. Overall, the factor analysis results were satisfactory, supporting the five-factor structure.
Fig. 1.
Scree plot demonstrating the number of extracted factor components
The rotated factor loadings are shown in detail in Supplementary Material 1, Table S2. Q17, Q18, Q19, Q22, Q20, and Q23 loaded most strongly on the first factor, interpreted as “Humane accomplishment”; Q13, Q14, and Q12 loaded on the second factor, interpreted as “Knowledge acquisition”; Q5, Q4, Q3, and Q2 loaded on the third factor, interpreted as “Health systems science”; Q9, Q8, Q11, and Q21 loaded on the fourth factor, interpreted as “Clinical practice skills”; and Q6, Q1, and Q7 loaded on the fifth factor, interpreted as “Clinical fundamentals”.

F1 Humane accomplishment refers to the set of competencies related to empathy, communication skills, professionalism, and ethical decision-making in the context of patient care. This dimension emphasizes the humanistic qualities that enable medical professionals to establish trust, provide compassionate care, and respect the dignity of patients.

F2 Knowledge acquisition encompasses the ability to acquire, integrate, and apply medical knowledge from various sources, including textbooks, research articles, and clinical experience. It involves critical thinking, evidence-based practice, and the continuous updating of knowledge to keep abreast of the latest medical advances.

F3 Health systems science integrates biomedical literacy, ethical-legal competence, and systems-level understanding. Competencies in this area include the ability to navigate healthcare resources, collaborate with interprofessional teams, and contribute to improving the quality and efficiency of healthcare delivery.

F4 Clinical practice skills represent the practical application of medical knowledge and skills in direct patient care settings. This dimension includes procedures, diagnosis, treatment planning, and patient management, emphasizing hands-on clinical proficiency.

F5 Clinical fundamentals encompass a broad range of concrete skills and attributes that form the foundation of medical practice. This dimension prioritizes tangible clinical competencies essential for safe and standardized patient care.
Due to low factor loadings (< 0.5), Q24 and Q16 were excluded from F1, Q10 from F4, and Q15 from F5. Upon review, Q24, Q16, and Q15 were considered vague and generalized, and their exclusion was agreed. Q10 represents a critical clinical skill, but undergraduate education typically emphasizes disease diagnosis and treatment rather than a comprehensive understanding of disease prevention and chronic disease management, particularly during classroom and clinical training. Therefore, despite its lower factor loading, Q10 was retained in F4 because of its strong relevance to clinical practice.
Reliability
The internal consistency of the questionnaire was interpreted as good. Reliability analysis reported an overall Cronbach’s alpha > 0.92 for the 21-item CAQ. Cronbach’s alpha of the five dimensions ranged between 0.746 and 0.892 (Table 2). Pearson correlation analysis revealed significant correlations among dimensions of the questionnaire (see Supplementary Material 1, Table S3 and S4).
Table 2.
Cronbach’s alpha of 21-item CAQ
| Dimension | Item | Importance -Cronbach alpha | Current Status-Cronbach alpha |
|---|---|---|---|
| Overall | 21 | 0.926 | 0.933 |
| F1 Humane accomplishment | 6 | 0.892 | 0.892 |
| F2 Knowledge acquisition | 3 | 0.819 | 0.746 |
| F3 Health systems science | 4 | 0.788 | 0.800 |
| F4 Clinical practice skills | 5 | 0.819 | 0.880 |
| F5 Clinical fundamentals | 3 | 0.797 | 0.808 |
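For reference, the Cronbach’s alpha values in Table 2 follow the standard formula α = k/(k − 1) · (1 − Σ item variances / total-score variance). A minimal numpy sketch (ours, not the study’s SPSS procedure):

```python
import numpy as np

def cronbach_alpha(X):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix,
    using sample variances (ddof=1)."""
    X = np.asarray(X, dtype=float)
    k = X.shape[1]
    item_variances = X.var(axis=0, ddof=1)
    total_variance = X.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)
```

Perfectly correlated items yield an alpha of exactly 1, while uncorrelated items drive it toward 0, which is why values above 0.7, as in Table 2, are conventionally read as acceptable internal consistency.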
Internal validation results
Competency assessment scores were banded by dimension as follows (lower bounds inclusive): Excellent, 4.5–5 points; Good, 3.5–4.5 points; Qualified, 2.5–3.5 points; Unqualified, 1–2.5 points. Overall, three of the five dimensions (F1: Humane accomplishment, F4: Clinical practice skills, F5: Clinical fundamentals) reached the Good level, while two (F2: Knowledge acquisition, F3: Health systems science) reached the Qualified level. The highest score was in F5 (3.86 ± 0.66) and the lowest in F3 (3.31 ± 0.71). Specific scores for the items within each dimension are detailed in Table 3.
Table 3.
Current status assessment scores of each item in 21-item CAQ
| Dimension | Mean ± SD | Item | Mean ± SD |
|---|---|---|---|
| F1 Humane accomplishment | 3.54 ± 0.79 | Q17 | 3.58 ± 0.987 |
| | | Q18 | 3.32 ± 1.053 |
| | | Q19 | 3.18 ± 1.037 |
| | | Q20 | 3.64 ± 0.922 |
| | | Q22 | 4.02 ± 0.904 |
| | | Q23 | 3.49 ± 0.962 |
| F2 Knowledge acquisition | 3.46 ± 0.71 | Q12 | 3.52 ± 0.872 |
| | | Q13 | 3.30 ± 0.870 |
| | | Q14 | 3.55 ± 0.861 |
| F3 Health systems science | 3.31 ± 0.71 | Q2 | 3.54 ± 0.784 |
| | | Q3 | 3.29 ± 0.802 |
| | | Q4 | 3.33 ± 1.003 |
| | | Q5 | 3.08 ± 0.992 |
| F4 Clinical practice skills | 3.52 ± 0.74 | Q8 | 3.44 ± 0.931 |
| | | Q9 | 3.60 ± 0.966 |
| | | Q10 | 3.40 ± 0.917 |
| | | Q11 | 3.27 ± 0.934 |
| | | Q21 | 3.86 ± 0.769 |
| F5 Clinical fundamentals | 3.86 ± 0.66 | Q1 | 3.75 ± 0.732 |
| | | Q6 | 3.98 ± 0.798 |
| | | Q7 | 3.86 ± 0.780 |
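The band cut-offs used in the scoring above (lower bounds inclusive) map directly to a simple classifier; the function name below is a hypothetical illustration, not part of the study's materials.

```python
def competency_band(score):
    """Map a mean dimension score on the 1-5 scale to the study's bands;
    each lower bound is inclusive (e.g. exactly 4.5 counts as Excellent)."""
    if score >= 4.5:
        return "Excellent"
    if score >= 3.5:
        return "Good"
    if score >= 2.5:
        return "Qualified"
    return "Unqualified"
```

Under this mapping, F5 (3.86) falls in the Good band and F3 (3.31) in the Qualified band, matching the levels reported above.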
Table 4 shows the competency scores of the various dimensions before and after rotations. A significant difference was found only in the F1: Humane accomplishment score, where self-assessment scores decreased after rotations. Overall, students perceived their ability in F3: Health systems science to be the weakest both before and after rotations. Comparative analyses were also conducted across genders, study programs, and three academic years. No significant differences were found by gender or study program. However, significant differences were observed among academic years in F1: Humane accomplishment, F4: Clinical practice skills, and overall scores. Compared with the 2018 and 2020 cohorts, the 2019 cohort rated their competency dimensions lower, particularly in F1 and F4. Across all three academic years, students again perceived their ability in F3 Health systems science as the weakest. Students from the 2018 and 2019 cohorts rated themselves highest in F5 Clinical fundamentals, whereas those from the 2020 cohort felt most proficient in F1 Humane accomplishment.
Table 4.
Independent samples t-test for scores pre and post clinical rotations
| Dimension | Pre rotation | Post rotation | t value | P value |
|---|---|---|---|---|
| F1 Humane accomplishment | 3.76 ± 0.82 | 3.43 ± 0.75 | 2.096 | 0.038 |
| F2 Knowledge acquisition | 3.52 ± 0.72 | 3.43 ± 0.70 | 0.640 | 0.524 |
| F3 Health systems science | 3.31 ± 0.88 | 3.31 ± 0.62 | 0.015 | 0.988 |
| F4 Clinical practice skills | 3.66 ± 0.81 | 3.45 ± 0.71 | 1.437 | 0.154 |
| F5 Clinical fundamentals | 3.84 ± 0.74 | 3.88 ± 0.61 | −0.247 | 0.805 |
| Overall | 3.63 ± 0.69 | 3.47 ± 0.54 | 1.295 | 0.198 |
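Comparisons of the kind reported in Table 4 can be reproduced with scipy’s independent-samples t-test. The sketch below draws synthetic scores shaped to resemble the F1 summary statistics; it does not use the study data, so its t and P values will differ from Table 4.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
pre = rng.normal(loc=3.76, scale=0.82, size=36)    # pre-rotation, n = 36
post = rng.normal(loc=3.43, scale=0.75, size=75)   # post-rotation, n = 75

# Pooled-variance (Student) independent-samples t-test.
t_stat, p_value = stats.ttest_ind(pre, post)
print(f"t = {t_stat:.3f}, significant at 0.05: {p_value < 0.05}")
```

Passing `equal_var=False` to `stats.ttest_ind` would give Welch’s t-test instead, which is preferable when the two groups’ variances differ markedly.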
Discussion
Clinical experience is fundamental to all stages of medical education, but there are strong imperatives to optimize teaching methods [22]. This study assessed competency before and after clinical rotations in undergraduate medical education. The CAQ we developed represents a five-dimensional model with strong internal consistency and reliability, capturing competency levels in five key areas. Our findings showed that senior undergraduate medical students’ competency levels ranged between qualified and good, with F5 Clinical fundamentals rated highest and F3 Health systems science rated lowest. After clinical rotations, students perceived a deficiency in F1 Humane accomplishment, and their competency in F3 Health systems science showed no improvement. Gender and academic program did not significantly influence competency levels, although differences existed among academic years. These internal findings reveal that medical students show deficiencies in humanistic qualities during clinical rotations, highlighting a lack of training in non-clinical professional competencies in both classroom teaching and clinical rotations.
The Humane accomplishment module of this study primarily assessed medical students’ communication skills (including doctor-patient and interprofessional communication), social and humanistic knowledge, and societal responsibility. Interestingly, although clinical rotations are intended to give medical students opportunities to interact with patients and medical teams [23], students perceived a decline in these abilities after rotations. Our findings are consistent with several previous studies. Hook et al. [24] followed students longitudinally over four years of medical school and found that pre-clinical medical students performed better on measures of interpersonal communication. An early study by Helfer [25] likewise suggested that, during medical history taking, first-year students paid more attention to humanistic care and asked fewer leading questions than fourth-year students. One hypothesis is the “hidden curriculum” [26, 27], which proposes that exposure to unethical and unprofessional behaviors is a primary cause of moral decline among medical students [28, 29]: overworked interns and residents serve as negative role models during clinical apprenticeships, making students more likely to adopt such practices in real-world practice than pre-clinical students without clinical experience. Another explanation is that humane accomplishment education often remains isolated as a separate subject, detached from its application context, rather than being integrated into core skills teaching [30]. In response to existing issues with bedside teaching, researchers have proposed strategies to promote humanistic education, including involving patients in discussions, having teachers model humanism, preserving learner autonomy, and directly observing and giving feedback to learners at the bedside [31, 32]. We strongly recommend that medical school faculty adopt these measures during clinical internships to enhance the cultivation of medical students’ humanistic qualities.
The Health systems science module of this study mainly focused on non-clinical competency of medical students, such as public health, health policy, health care delivery, and interdisciplinary care. While these skills may be marginalized compared to abilities like history taking and physical examination, they are crucial for ensuring ethical medical practices and adapting to societal needs [33]. In our medical school, we aim to introduce these concepts to students early on. For instance, we organized pre-internship training sessions for each cohort of students when they started clinical rotations. During these sessions, professors would introduce the latest healthcare laws, regulations, and hospital management systems to them. However, according to the results of this pilot study, our efforts to develop these skills in medical students were inadequate. In China, students usually begin specialized learning from their third year, with limited integration of basic science, clinical science and health systems science [34]. Moreover, during clinical rotations, undergraduate interns lack sufficient opportunities for hands-on practice and comprehensive patient management [35, 36]. Consequently, there is a deficiency in cultivating non-clinical medical practice capabilities. The addition of health systems science is best supported by changes in teaching modalities [37], which allows for the incorporation of health systems science topics related to the basic science and clinical aspects being taught within both classroom teaching and clinical rotations.
Several aspects of our study require caution when interpreting the results. First, the validation results are internal findings, and because of the homogeneity of the sample source, the external validity of the conclusions may be limited; future research should conduct external validation across diverse populations and settings to enhance the generalizability of the questionnaire. Second, we did not collect qualitative data that would help explain the results; the observed differences could only be interpreted through previous research and theory, limiting our understanding of the actual issues in clinical rotation education. Open-ended questions or focus group interviews following the quantitative research would help address this. In addition, the small group size during instrument development may have limited the diversity of faculty viewpoints, particularly regarding regional curriculum variations and interdisciplinary clinical practice. Future studies should recruit larger FGD samples to enhance data richness, especially when integrating multi-institutional faculty.
Conclusions
The resulting competency assessment questionnaire was internally validated as a trustworthy instrument. Humane accomplishment and health systems science were identified as two competency deficiencies in undergraduate medical education. Targeted teaching reforms are needed to strengthen these competencies.
Acknowledgements
We would like to thank the students participating in this study and West China School of Medicine for giving us the platform.
Abbreviations
- FGD
Focus group discussion
- CAQ
Competency assessment questionnaire
- EFA
Exploratory factor analysis
- PCA
Principal component analysis
- KMO
Kaiser Meyer Olkin
Authors' contributions
Y.M.: Investigation; conceptualization; writing—original draft; writing—review and editing; project administration; methodology; formal analysis. Y.C.: Conceptualization; investigation; writing—review and editing; software; methodology; supervision; formal analysis. J.Z.: Supervision; project administration; writing—review and editing; conceptualization. All authors reviewed the manuscript.
Funding
This work was supported by the Postdoctoral Fellowship Program (Grade C) of China Postdoctoral Science Foundation under Grant (no.GZC20231828).
Data availability
The datasets used and analysed during the current study are available from the corresponding author on reasonable request.
Declarations
Ethics approval and consent to participate
This study was approved by the ethics committee of West China Hospital and all methods were carried out in accordance with the Declaration of Helsinki. Informed consents were obtained from all subjects participating in the study and they were aware of confidentiality, anonymity and data security.
Consent for publication
Not applicable.
Competing interests
The authors declare no competing interests.
Footnotes
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Mengzhu Yang and Chi Yuan have contributed equally to this work and share the first authorship.
References
- 1.Frenk J, Chen L, Bhutta ZA, Cohen J, Crisp N, Evans T, et al. Health professionals for a new century: transforming education to strengthen health systems in an interdependent world. Lancet (London England). 2010;376(9756):1923–58. [DOI] [PubMed] [Google Scholar]
- 2.Kiesler N. Approaching the concept of competency. In: Kiesler N, editor. Modeling programming competency: A qualitative analysis. Cham: Springer International Publishing; 2024. pp. 17–36. [Google Scholar]
- 3.Weinert FE. Concept of competence: A conceptual clarification. Defining and selecting key competencies. Ashland, OH, US: Hogrefe & Huber; 2001. pp. 45–65. [Google Scholar]
- 4.Education ACfGM. ACGME common program requirements. 2017. [DOI] [PMC free article] [PubMed]
- 5.Frank J. A history of CanMEDS - chapter from Royal College of Physicians of Canada 75th Anniversary history2004.
- 6.Schwarz MR, Wojtczak A, Zhou T. Medical education in China’s leading medical schools. Med Teach. 2004;26(3):215–22. [DOI] [PubMed] [Google Scholar]
- 7.Hsieh C-R, Tang C. The multi-tiered medical education system and its influence on the health care market-China’s flexner report. Hum Resour Health. 2019;17(1):50. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 8.Liu X, Feng J, Liu C, Chu R, Lv M, Zhong N, et al. Medical education systems in China: development, status, and evaluation. Acad Med. 2023;98(1):43–9. [DOI] [PubMed] [Google Scholar]
- 9.Bellini L, Shea JA, Asch, DAJJogim. A new instrument for residency program evaluation. 1997;12(11):707–10. [DOI] [PMC free article] [PubMed]
- 10.Pope BA, Carney PA, Brooks MC, Rice DR, Albright AA, Halvorson SAC. Resident assessment of clinician educators according to core ACGME competencies. J Gen Intern Med. 2024;39(3):377–84. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 11.Schultz K, McGregor T, Pincock R, Nichols K, Jain S, Pariag J. Discrepancies between preceptor and resident performance assessment: using an electronic formative assessment tool to improve residents’ Self-Assessment skills. Acad Medicine: J Association Am Med Colleges. 2022;97(5):669–73. [DOI] [PubMed] [Google Scholar]
- 12.Seelig CB, DuPre CT, Adelman HMJSmj. Development and validation of a scaled questionnaire for evaluation of residency programs. 1995;88(7):745–50. [DOI] [PubMed]
- 13.Nissen C, Ying J, Kalantari F, Patel M, Prabhu AV, Kesaria A, et al. A prospective study measuring resident and faculty contour concordance: A potential tool for quantitative assessment of residents’ performance in contouring and target delineation in radiation oncology residency. J Am Coll Radiol. 2024;21(3):464–72. [DOI] [PubMed] [Google Scholar]
- 14.Kirton OC, Antonetti M, Morejon O, Dobkin E, Angelica MD, Reilly PJ, et al. Measuring service-specific performance and educational value within a general surgery residency: the power of a prospective, anonymous, Web-based rotation evaluation system in the optimization of resident satisfaction. Surgery. 2001;130(2):289–95. [DOI] [PubMed] [Google Scholar]
- 15.Felicelli C, Gama AP, Chornenkyy Y, Maniar K, Blanco LZ, Novo JE. A novel 6-day cycle surgical pathology rotation improves resident satisfaction and maintains accreditation Council for graduate medical education (ACGME) milestone performance. Acad Pathol. 2023;10(3):100088. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 16.Epstein RM, Hundert EM. Defining and assessing professional competence. JAMA. 2002;287(2):226–35. [DOI] [PubMed] [Google Scholar]
- 17.Sun Baozhi L, Jianguo WQ. Construction and application of competency model for Chinese clinicians. Construction and application of competency model for Chinese clinicians; 2015.
- 18.Health measurement scales: a practical guide to their development and use (5th edition). Aust N Z J Public Health. 2016;40(3):294–5.
- 19.Ward MK, Meade AW. Dealing with careless responding in survey data: prevention, identification, and recommended best practices. Annu Rev Psychol. 2023;74:577–96.
- 20.Strand P, Sjöborg K, Stalmeijer R, Wichmann-Hansen G, Jakobsson U, Edgren G. Development and psychometric evaluation of the undergraduate clinical education environment measure (UCEEM). Med Teach. 2013;35(12):1014–26.
- 21.Kim H, Lee K, Lee YH, Park Y, Park Y, Yu Y, et al. The effectiveness of a mobile phone-based physical activity program for treating depression, stress, psychological well-being, and quality of life among adults: quantitative study. JMIR Mhealth Uhealth. 2023;11:e46286.
- 22.Conn JJ, Lake FR, McColl GJ, Bilszta JLC, Woodward-Kron R. Clinical teaching and learning: from theory and research to application. Med J Aust. 2012;196(8):527.
- 23.Spencer J. Learning and teaching in the clinical environment. BMJ. 2003;326(7389):591–4.
- 24.Hook KM, Pfeiffer CA. Impact of a new curriculum on medical students’ interpersonal and interviewing skills. Med Educ. 2007;41(2):154–9.
- 25.Helfer RE. An objective comparison of the pediatric interviewing skills of freshman and senior medical students. Pediatrics. 1970;45(4):623–7.
- 26.Haidet P, Kelly PA, Chou C. Characterizing the patient-centeredness of hidden curricula in medical schools: development and validation of a new measure. Acad Med. 2005;80(1):44–50.
- 27.Alimoglu MK, Alparslan D, Daloglu M, Mamakli S, Ozgonul L. Does clinical training period support patient-centeredness perceptions of medical students? Med Educ Online. 2019;24(1):1603525.
- 28.Baldwin D Jr, Daugherty SR, Rowley BD. Unethical and unprofessional conduct observed by residents during their first year of training. Acad Med. 1998;73(11):1195–200.
- 29.Terndrup C. A student’s perspective on medical ethics education. J Relig Health. 2013;52:1073–8.
- 30.Silverman J. Teaching clinical communication: a mainstream activity or just a minority sport? Patient Educ Couns. 2009;76(3):361–7.
- 31.Ramani S, Orlander JD. Human dimensions in bedside teaching: focus group discussions of teachers and learners. Teach Learn Med. 2013;25(4):312–8.
- 32.Weissmann PF, Branch WT, Gracey CF, Haidet P, Frankel RM. Role modeling humanistic behavior: learning bedside manner from the experts. Acad Med. 2006;81(7):661–7.
- 33.Mehta SH, Shah NR. Integrating public and population health into medical education curricula: opportunities and challenges for reform. Acad Med. 2023;98(12).
- 34.Liu Li. Analysis of public health education for undergraduate clinical medicine students oriented by post competency. Chin Continuing Med Educ. 2023;15(02):35–8.
- 35.King DB, Dickinson JA, Boulton M-R, Toumpas C. Clinical skills textbooks fail evidence-based examination. Evid Based Med. 2005;10(5):131–2.
- 36.Blaschke A-L, Rubisch HPK, Schindler A-K, Berberat PO, Gartmeier M. How is modern bedside teaching structured? A video analysis of learning content, social and spatial structures. BMC Med Educ. 2022;22(1):790.
- 37.Borkan JM, Hammoud MM, Nelson E, Oyler J, Lawson L, Starr SR, et al. Health systems science education: the new post-Flexner professionalism for the 21st century. Med Teach. 2021;43(sup2):S25–S31.
Data Availability Statement
The datasets used and analysed during the current study are available from the corresponding author on reasonable request.

