BMC Medical Education. 2026 Jan 29;26:329. doi: 10.1186/s12909-026-08669-y

Enhancing clinical training evaluation: leveraging artificial intelligence algorithms for effective online practicum assessment

Yu Zhu 1, Cheng-Mao Zhou 1
PMCID: PMC12924455  PMID: 41612345

Abstract

Objective

This study aimed to develop an online assessment framework for medical students’ clinical apprenticeships, built on a combination of multiple AI algorithms and course data, with the overarching objective of providing medical students with real-time feedback and personalized learning support.

Methods

Thirteen machine learning and deep learning algorithms (Decision Tree, Random Forest, Gradient Boosting Decision Tree, XGBoost, LightGBM, CatBoost, Linear SVC, Gaussian Naive Bayes, RNN, CNN, Probabilistic Bayesian Neural Network, Neural Decision Tree, and Neural Decision Forest) were trained on survey data from 1,777 undergraduate medical students. The data were split 70/30 into training and test sets, models were trained with 5-fold cross-validation, and performance was evaluated using accuracy, precision, recall, F1 score, and AUC.

Results

The results for the test group in predicting satisfaction with online clinical apprenticeships were as follows. In terms of accuracy, all algorithms achieved over 70% except LGBM and GradientBoosting. In terms of precision, the top five algorithms were GradientBoosting (0.917), LGBM (0.880), CNN (0.865), LinearSVC (0.851), and CatBoost (0.833). In terms of recall, the top three were Gnb (0.415), PBNN (0.410), and RNN (0.410). In terms of F1 score, the top four were PBNN (0.536), NeuralDecisionTree (0.520), Gnb (0.514), and NeuralDecisionForest (0.509). In terms of AUC, most algorithms displayed high performance, with the exception of RNN. Our web-based platform was implemented using the LinearSVC algorithm and is accessible via a designated web page (https://zhouchengmao-streamlit-app-5-lsvc-a-st-app-lsvc-apsoocpc-gxze7n.streamlit.app/), where users may engage with its functionalities.

Conclusions

Considering the results of the training and test groups together, Linear SVC demonstrated favorable performance across multiple evaluation metrics and can therefore be regarded as one of the best-performing algorithms.

Supplementary Information

The online version contains supplementary material available at 10.1186/s12909-026-08669-y.

Keywords: Machine learning, Medical students, Linear SVC, Predicting, Web

Introduction

As a pivotal phase in the career of medical students, clinical apprenticeship assumes a critical role in the transition from theoretical knowledge to practical competence. In light of the swift advancement of technology, the pursuit of online education in the domain of medical instruction has witnessed a notable surge. This inclination has also extended to online clinical apprenticeships, which possess the capacity to replicate authentic scenarios and data, thereby furnishing students with a secure and tailored platform for practical application [1]. By virtue of online clinical apprenticeships, students are afforded the flexibility to learn at their own convenience and from any location, while concurrently gaining access to precise predictions and evaluations, thereby fostering the refinement of their practical skills and personal growth [2].

Furthermore, artificial intelligence (AI) algorithms play a critical role in medical education and effectively improve the learning and training of medical students by simulating genuine clinical scenarios and offering personalized educational resources [3]. Specifically, the multifaceted contributions of AI in medical education encompass the following aspects. (1) Real-time feedback and individualized learning: AI technology facilitates the provision of instantaneous feedback and tailored learning materials, catering to the learning progress and aptitudes of students [4]. (2) Simulation of clinical scenarios: AI technology proficiently emulates authentic clinical scenarios, affording medical students the opportunity to engage in online clinical apprenticeships [5]. (3) Automated diagnosis and decision support: by analyzing patient data, these algorithms proficiently assist medical students in decision-making, enabling them to diagnose and manage diverse cases with heightened accuracy [6–8].

Nevertheless, there is little pertinent research on AI-driven online clinical apprenticeships. Consequently, this study set out to develop a research framework encompassing an online assessment system tailored to medical students’ clinical apprenticeships. The framework is built on a combination of multiple AI algorithms and course data, with the overarching objective of furnishing medical students with real-time feedback and personalized learning support.

Methodology

Dataset information

This study comprised a total of 1,777 undergraduate medical students enrolled in online clinical clerkships. Inclusion criteria were as follows: (1) participants were undergraduate medical students enrolled in traditional face-to-face clinical clerkships who subsequently transitioned to online clerkships, and (2) all survey questions were completed. In Japan, undergraduate medical education spans six years, with the initial three and a half to four years dedicated to preclinical education and the final two to two and a half years dedicated to clinical clerkships. Clinical clerkships afford medical students the opportunity to acquire practical clinical skills. The questionnaire administered to participants solicited their demographic details. For medical students who had transitioned from face-to-face to online clinical clerkships, specific questions were posed regarding the efficacy of online clerkships compared with face-to-face clerkships, as well as the frequency of their exposure to various experiences during the online clinical clerkship. In this study, the ‘efficacy’ of an online clerkship is defined as the improvement in clinical competence (such as knowledge acquisition and skill proficiency) and satisfaction with the learning experience obtained by students through the online clerkship. Satisfaction was quantified through Kirkpatrick Model Level 1 (Reaction) assessment. Knowledge was assessed via the questionnaire, mainly covering quiz performance, assignment submission, student oral presentations, and evaluation of clinical skills practice.

The investigation focused on assessing the degree of exposure that students had to various factors associated with the effectiveness of online clinical placements. The selection of factors for inclusion in the survey was guided by existing literature and recommendations pertinent to medical education. The factors examined in this study encompassed the following: gender, year in medical school, time per lecture, lecture frequency per week, opportunity to take quizzes, opportunity to submit assignments, opportunity to give oral presentations, opportunity to observe physicians’ practice, opportunity to practice clinical skills, opportunity to participate in interprofessional meetings, opportunity to interact with physicians, and frequency of technical Internet-related problems. To evaluate the effectiveness of online clinical apprenticeships, the satisfaction level of participants was assessed utilizing the Kirkpatrick Model, specifically focusing on Level 1 Response Assessment [9].
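To make the outcome variable concrete, the sketch below shows one common way to turn a Kirkpatrick Level 1 (Reaction) satisfaction rating into a binary classification target. The 5-point Likert scale, the cut-off at 4, and all column names are illustrative assumptions, not the authors' actual instrument:

```python
import pandas as pd

# Hypothetical encoding: a 5-point Likert "Reaction" rating is binarized
# (satisfied = 4-5, not satisfied = 1-3). Column names are assumptions.
def build_satisfaction_label(df: pd.DataFrame, likert_col: str = "satisfaction_likert") -> pd.Series:
    """Map 1-5 Likert responses to a binary satisfaction target."""
    return (df[likert_col] >= 4).astype(int)

survey = pd.DataFrame({
    "gender": ["F", "M", "F"],
    "year_in_school": [5, 6, 5],
    "satisfaction_likert": [5, 2, 4],  # Kirkpatrick Level 1 rating
})
survey["satisfied"] = build_satisfaction_label(survey)
```

A binary target like this is what classification metrics such as precision, recall, and AUC are computed against in the Results section.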

This study employed the following steps of AI algorithms to conduct the research:

  1. Data collection: The learning data of medical students, encompassing their learning behaviors, question responses, and assessments of knowledge levels, were collected as detailed in the preceding sections.

  2. Data analysis: AI algorithms were utilized to analyze the learning data and extract insights into students’ learning patterns and proficiency levels. Specifically, CatBoost (CATB) was employed for feature importance analysis, assigning a ranking to the significance of each variable.

  3. Model building: (1) Dataset division: The dataset was divided into training and testing sets using a predetermined percentage: 70% of the data were allocated for training the model, while the remaining 30% were used to assess the model’s generalization performance. (2) Tool selection: The Python programming language was chosen, along with relevant machine learning and deep learning libraries such as scikit-learn and TensorFlow, to construct the prediction models: Decision Tree, Random Forest, Gradient Boosting Decision Tree (GBDT), XGBoost (XGB), LightGBM (LGBM), CatBoost, Linear SVC (Linear Support Vector Classification), Gaussian Naive Bayes (Gnb), Recurrent Neural Network (RNN), Convolutional Neural Network (CNN), Probabilistic Bayesian Neural Network (PBNN), Neural Decision Tree, and Neural Decision Forest. Python provided extensive functionality for data processing and model training, while these libraries facilitated the implementation of the various algorithms. (3) Algorithm selection: A combination of machine learning and deep learning algorithms was employed to develop accurate and reliable predictive models. (4) Model building and evaluation: The selected algorithms were trained on the training dataset using 5-fold cross-validation: the training data were divided into five subsets, and in each iteration one subset served as the validation set while the remaining four were used for training, so that each subset was validated exactly once. Repeating this process yielded balanced and stable performance estimates. (5) Data processing: The raw data were normalized, and standardization was applied to ensure consistency among variables and eliminate potential biases. (6) Missing value handling: Missing values were imputed using scikit-learn’s SimpleImputer.

  4. Learning outcome model evaluation: Multiple metrics were used for a comprehensive evaluation of the predictive models: ROC curves to assess classifier performance, accuracy to evaluate overall classification correctness, precision and recall to evaluate classification effectiveness, and the F1 score, which combines precision and recall, as a measure of comprehensive performance.

  5. Continuous improvement and optimization: Based on the learning data, feedback information and evaluation results, the system was continuously improved and optimized.

  6. Building a prediction webpage: Based on the trained model, we used Python and other related technology stacks to build a simple and easy-to-use online clinical clerkship prediction webpage. Through the user interface, medical students could enter their clerkship records and related metrics and get predictions and suggestions for clinical clerkship outcomes. After online operation of the webpage, we would monitor user feedback and data to improve and maintain its functionality as needed.
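The modeling steps above can be sketched end to end with scikit-learn. This is a minimal illustration, not the authors' actual code: the dataset is synthetic (the real survey data are not reproduced here), and only the Linear SVC model from the thirteen listed is shown. It covers the 70/30 split, SimpleImputer for missing values, standardization, 5-fold cross-validation, and the five evaluation metrics:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.impute import SimpleImputer
from sklearn.metrics import (accuracy_score, f1_score, precision_score,
                             recall_score, roc_auc_score)
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

# Synthetic stand-in for the 1,777-student survey dataset
X, y = make_classification(n_samples=1777, n_features=12, random_state=42)
X[::50, 0] = np.nan  # simulate occasional missing answers

# Step 3(1): 70/30 train/test split
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

# Steps 3(5)-(6): imputation and standardization, then the Linear SVC model
model = make_pipeline(SimpleImputer(strategy="mean"),
                      StandardScaler(),
                      LinearSVC(dual=False))

# Step 3(4): 5-fold cross-validation on the training set
cv_acc = cross_val_score(model, X_tr, y_tr, cv=5, scoring="accuracy")

# Step 4: evaluate on the held-out test set with the paper's five metrics
model.fit(X_tr, y_tr)
y_pred = model.predict(X_te)
scores = model.decision_function(X_te)
metrics = {
    "accuracy": accuracy_score(y_te, y_pred),
    "precision": precision_score(y_te, y_pred),
    "recall": recall_score(y_te, y_pred),
    "f1": f1_score(y_te, y_pred),
    "auc": roc_auc_score(y_te, scores),
}
```

Wrapping the imputer, scaler, and classifier in one pipeline ensures the preprocessing is refit inside each cross-validation fold, avoiding leakage from validation folds into training.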

General statistical methods

The R statistical software was used for data analysis. Count data were expressed as percentages and compared between groups with the χ² test; continuous variables conforming to a normal distribution were compared with the t-test. P < 0.05 was considered to indicate a statistically significant difference.
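The study used R, but the same two tests can be illustrated in Python with SciPy. The group values and the contingency counts below are made up for demonstration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# t-test: a normally distributed continuous variable across two groups
group_a = rng.normal(loc=75, scale=10, size=100)
group_b = rng.normal(loc=80, scale=10, size=100)
t_stat, t_p = stats.ttest_ind(group_a, group_b)

# chi-squared test: count data, e.g. satisfied vs. not satisfied by group
contingency = np.array([[60, 40],   # group A: satisfied / not satisfied
                        [45, 55]])  # group B: satisfied / not satisfied
chi2, chi_p, dof, expected = stats.chi2_contingency(contingency)
significant = chi_p < 0.05  # threshold used in the study
```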

Results

Correlation analysis of variables related to satisfaction with online clinical apprenticeships revealed three key associations: a statistically significant negative correlation (r = -0.170) between opportunity to submit assignments and year in medical school, indicating that satisfaction with assignment submission decreased as students advanced to higher grades; a positive correlation (r = 0.489) between opportunity to practice clinical skills and opportunity to observe physicians’ practice, which was linked to increased student satisfaction; and a positive correlation (r = 0.536) between time per lecture and weekly lecture frequency, associated with higher satisfaction levels. Notably, both clinical skill practice and physician observation opportunities demonstrated direct positive relationships with overall satisfaction (Fig. 1).
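A pairwise correlation matrix like the one in Fig. 1 can be computed directly from the encoded survey responses with pandas. The column names and values below are illustrative assumptions, constructed so that the signs mirror the reported associations:

```python
import pandas as pd

# Hypothetical encoded survey responses (names and values are illustrative)
df = pd.DataFrame({
    "year_in_school":         [3, 4, 5, 6, 5, 4],
    "assignment_opportunity": [5, 4, 3, 2, 3, 4],  # decreases with year
    "skill_practice":         [2, 3, 4, 5, 4, 3],
    "observe_physicians":     [2, 3, 4, 5, 5, 3],  # moves with skill_practice
})

# Pearson r between every pair of variables, as visualized in Fig. 1
corr = df.corr(method="pearson")
r = corr.loc["skill_practice", "observe_physicians"]
```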

Fig. 1.

Fig. 1

Correlation between variables

The outcomes of weighted feature engineering conducted through the CATB algorithm indicated that opportunity to interact with physicians, opportunity to observe physicians’ practice, opportunity to participate in interprofessional meetings, opportunity to take quizzes, and opportunity to practice clinical skills were the top five major influencing factors for online clinical apprenticeship satisfaction (Fig. 2).
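The paper ranks features with CatBoost (CATB). The sketch below substitutes scikit-learn's RandomForestClassifier as a stand-in, since it exposes the same kind of impurity-based `feature_importances_` attribute; the data are synthetic and the feature names merely echo the survey factors:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Stand-in for CatBoost feature importance: a random forest's
# feature_importances_ gives the same style of ranking. Names are illustrative.
feature_names = [
    "interact_with_physicians", "observe_physicians",
    "interprofessional_meetings", "take_quizzes",
    "practice_clinical_skills", "lecture_frequency",
]
X, y = make_classification(n_samples=500, n_features=6, n_informative=4,
                           random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
ranked = sorted(zip(feature_names, clf.feature_importances_),
                key=lambda kv: kv[1], reverse=True)  # highest importance first
```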

Fig. 2.

Fig. 2

Variable importance of features

The results for the training group in predicting satisfaction with online clinical apprenticeships were as follows. In terms of accuracy, all algorithms achieved over 70% except LGBM and GradientBoosting. In terms of precision, the best algorithms were GradientBoosting (0.836), LGBM (0.826), RNN (0.776), and CatBoost (0.748). In terms of recall, the top three were RNN (0.522), PBNN (0.456), and Gnb (0.376). In terms of F1 score, the best algorithms were RNN (0.624), PBNN (0.526), and Gnb (0.465). In terms of AUC, all algorithms exhibited high values except DecisionTree (Table 1 and Fig. 3).

Table 1.

Forecast results for training group

Model name Accuracy Precision Recall F1 score AUC
Decision Tree 0.707 0.736 0.224 0.343 0.657
Random Forest 0.721 0.699 0.322 0.441 0.762
Gradient Boosting 0.691 0.836 0.12 0.21 0.702
XGB 0.719 0.735 0.28 0.405 0.717
LGBM 0.694 0.826 0.134 0.231 0.706
CatBoost 0.709 0.748 0.224 0.344 0.703
Linear SVC 0.722 0.714 0.311 0.433 0.719
Gnb 0.704 0.608 0.376 0.465 0.705
RNN 0.785 0.776 0.522 0.624 0.835
CNN 0.704 0.656 0.282 0.395 0.703
PBNN 0.719 0.622 0.456 0.526 0.71
Neural Decision Tree 0.723 0.687 0.351 0.464 0.72
Neural Decision Forest 0.724 0.705 0.332 0.451 0.728

Abbreviations: Decision Tree, Random Forest, Gradient Boosting Decision Tree (GBDT), XGBoost (XGB), LightGBM (LGBM), CatBoost, Linear SVC (Linear Support Vector Classification), Gaussian Naive Bayes (Gnb), Recurrent Neural Network (RNN), Convolutional Neural Network (CNN), Probabilistic Bayesian Neural Network (PBNN), Neural Decision Tree and Neural Decision Forest

Fig. 3.

Fig. 3

Different machine learning and deep learning algorithms predict the efficacy of online clinical apprenticeships. Abbreviations: Decision Tree, Random Forest, Gradient Boosting Decision Tree (GBDT), XGBoost (XGB), LightGBM (LGBM), CatBoost, Linear SVC (Linear Support Vector Classification), Gaussian Naive Bayes (Gnb), Recurrent Neural Network (RNN), Convolutional Neural Network (CNN), Probabilistic Bayesian Neural Network (PBNN), Neural Decision Tree and Neural Decision Forest

The results for the test group in predicting satisfaction with online clinical apprenticeships were as follows. In terms of accuracy, all algorithms achieved over 70% except LGBM and GradientBoosting. In terms of precision, the top five algorithms were GradientBoosting (0.917), LGBM (0.880), CNN (0.865), LinearSVC (0.851), and CatBoost (0.833). In terms of recall, the top three were Gnb (0.415), PBNN (0.410), and RNN (0.410). In terms of F1 score, the top four were PBNN (0.536), NeuralDecisionTree (0.520), Gnb (0.514), and NeuralDecisionForest (0.509). In terms of AUC, most algorithms displayed high performance, with the exception of RNN (Table 2 and Fig. 4).

Table 2.

Forecast results for testing group

Model name Accuracy Precision Recall F1 score AUC
Decision Tree 0.704 0.755 0.202 0.319 0.707
Random Forest 0.727 0.753 0.301 0.43 0.742
Gradient Boosting 0.695 0.917 0.12 0.213 0.74
XGB 0.723 0.761 0.279 0.408 0.743
LGBM 0.693 0.88 0.12 0.212 0.744
CatBoost 0.725 0.833 0.246 0.38 0.758
Linear SVC 0.755 0.851 0.344 0.49 0.753
Gnb 0.73 0.673 0.415 0.514 0.752
RNN 0.713 0.625 0.41 0.495 0.687
CNN 0.758 0.865 0.35 0.498 0.725
PBNN 0.757 0.773 0.41 0.536 0.726
Neural Decision Tree 0.751 0.766 0.393 0.52 0.754
Neural Decision Forest 0.751 0.784 0.377 0.509 0.754

Abbreviations: Decision Tree, Random Forest, Gradient Boosting Decision Tree (GBDT), XGBoost (XGB), LightGBM (LGBM), CatBoost, Linear SVC (Linear Support Vector Classification), Gaussian Naive Bayes (Gnb), Recurrent Neural Network (RNN), Convolutional Neural Network (CNN), Probabilistic Bayesian Neural Network (PBNN), Neural Decision Tree and Neural Decision Forest

Fig. 4.

Fig. 4

Different machine learning and deep learning algorithms predict the efficacy of online clinical apprenticeships. Abbreviations: Decision Tree, Random Forest, Gradient Boosting Decision Tree (GBDT), XGBoost (XGB), LightGBM (LGBM), CatBoost, Linear SVC (Linear Support Vector Classification), Gaussian Naive Bayes (Gnb), Recurrent Neural Network (RNN), Convolutional Neural Network (CNN), Probabilistic Bayesian Neural Network (PBNN), Neural Decision Tree and Neural Decision Forest

Considering the results of the training and test groups together, Linear SVC demonstrated favorable performance across multiple evaluation metrics and can therefore be regarded as one of the best-performing algorithms.

Our web-based platform, designed for the purpose of evaluating the satisfaction of participants in online clinical apprenticeships, has been successfully implemented through the utilization of the LinearSVC algorithm. This system is conveniently accessible via a designated web page (https://zhouchengmao-streamlit-app-5-lsvc-a-st-app-lsvc-apsoocpc-gxze7n.streamlit.app/), where users may engage with its functionalities. Upon accessing the web interface, users are prompted to provide the necessary information by means of a data entry form. Subsequently, by selecting the “Prediction” button, users can initiate the satisfaction prediction process. Leveraging the power of the LinearSVC algorithm and its associated model, the system processes the input data to generate a prediction. Following the completion of this prediction, users gain access to the pertinent outcomes of their online clinical practicum satisfaction assessment. It is critical to emphasize that this system is intended solely for informational and personal use, and the ultimate determination of assessment outcomes remains contingent upon the judgment and decision-making of the respective medical education institution or relevant standards.
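The server-side step behind such a form can be sketched as follows. This is a minimal illustration, not the deployed Streamlit app's code: the training data are synthetic, the field values are placeholders, and only the fitted-pipeline-to-prediction path is shown (the web UI wiring is omitted):

```python
from sklearn.datasets import make_classification
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

# Synthetic stand-in for the survey data used to fit the deployed model
X, y = make_classification(n_samples=300, n_features=12, random_state=1)
pipeline = make_pipeline(StandardScaler(), LinearSVC(dual=False)).fit(X, y)

def predict_satisfaction(form_values: list) -> str:
    """Map one completed entry form to a human-readable prediction."""
    label = pipeline.predict([form_values])[0]
    return "likely satisfied" if label == 1 else "likely not satisfied"

# One row of (placeholder) form inputs, as entered on the web page
result = predict_satisfaction(list(X[0]))
```

In a web front end, the form fields would be collected into `form_values` and the returned string displayed after the "Prediction" button is pressed.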

Discussion

The incorporation of online clinical apprenticeships as an emerging pedagogical approach holds profound significance in medical education. Their primary objective is to provide students with interactive experiences and exposure to the practical aspects of medicine, fostering the development of clinical aptitude and professionalism through the simulation of authentic clinical settings on online platforms [10]. First, online clinical apprenticeships offer a safeguarded and regulated milieu for learning. Furthermore, they offer a broader spectrum of learning prospects [11]. Nonetheless, challenges persist. Foremost among them is the task of ensuring the authenticity and effectiveness of such online apprenticeships. Additionally, the proliferation of technology and the stability of network connectivity are pivotal for the advancement of online clinical apprenticeships, yet certain geographical regions and student populations encounter impediments in accessing online platforms. The utilization of AI technology for predicting the efficacy of online clinical apprenticeships bears immense significance and is undergoing rapid development, unveiling vast prospects and opportunities for both educational endeavors and clinical practice. Our research underscores the utility of the LinearSVC algorithm in effectively evaluating online clinical apprenticeships.

Furthermore, the outcomes derived from the application of the CATB algorithm in weighted feature engineering have revealed that several key factors exert a significant influence on the satisfaction levels pertaining to online clinical apprenticeships. Notably, these factors encompass opportunity to interact with physicians, opportunity to observe physicians’ practice, opportunity to participate in interprofessional meetings, opportunity to take quizzes, and opportunity to practice clinical skills. Prior investigations have consistently demonstrated the positive impact of active engagement in clinical learning on medical students [12]. Moreover, interactions within the clinical setting have been identified as contributory elements to student satisfaction [13]. However, it is noteworthy that the correlation between the quantity of new inpatients and outpatients encountered during students’ clerkships and their overall satisfaction does not yield statistical significance [14]. Furthermore, studies have indicated that students exhibit higher levels of overall satisfaction and preference for online lectures compared to live lectures [15]. This observation holds true across various educational formats such as lectures and seminars, where online learning is generally acclaimed as a valuable alternative to traditional instructional methods [16]. Participation in seminars has been shown to enhance medical students’ satisfaction with their medical education [17]. Additionally, the administration of feedback quizzes, regardless of whether they are formative or summative, and whether they are conducted inside or outside the classroom, has been demonstrated to enhance learning outcomes, performance, and student satisfaction with the clinical curriculum [18]. 
Furthermore, factors contributing to lower ratings in clerkships encompass a dearth of opportunities for direct patient care involvement, indistinct goals and objectives, as well as students’ perceptions of inadequate respect [19]. Research has further corroborated that expanding opportunities for students to assume clinical independence significantly enhances satisfaction levels [20]. The findings of our study serve to validate these aforementioned insights.

Virtual clerkships are positioned as a supplement to in-person clerkships, intended for flexible learning, skill pre-training, or remote scenarios, rather than as a complete replacement. Their advantages (e.g., cross-regional resource sharing) and limitations (e.g., lack of real patient interaction) are framed in the context of medical education needs to clarify their role. The online tool (https://zhouchengmao-streamlit-app-5-lsvc-a-st-app-lsvc-apsoocpc-gxze7n.streamlit.app/) supports the following applications: (1) formative feedback: after students input their clerkship records, the system provides real-time prompts to predict their learning status and guide targeted improvement; (2) summative assessment: institutions can use the prediction results as a reference indicator when evaluating clerkship performance.

In AI-driven clinical research, model stability and generalizability are core indicators for evaluating clinical value. However, small-sample subgroups (e.g., gender) often experience reduced model performance due to limited sample sizes and heterogeneity, especially in clinical training assessments where reliable evaluation of different learners’ performance is required. Small-sample subgroups may impair stability by reducing statistical power, increasing overfitting risk, and amplifying feature distribution heterogeneity. In future studies, we will enhance model stability by expanding sample sizes, implementing stratified sampling, adopting robust statistical methods, integrating domain expertise, conducting multi-population validation, and ensuring transparent reporting, ultimately improving the reliability and fairness of this AI model in clinical training.

Nevertheless, challenges and limitations persist in utilizing AI for the prediction of satisfaction in online clinical apprenticeships. Firstly, the accuracy of prediction outcomes hinges upon the quality of the data and the efficacy of model training. Insufficient or unrepresentative data samples and inadequate model training can introduce biases into the prediction results. Secondly, it is still a formidable task for AI to comprehensively capture the intricate emotional and psychological factors that influence individual satisfaction. These factors wield considerable influence, yet elude capture by automated systems. Consequently, the integration of AI predictions with expert judgment remains indispensable in the interpretation and utilization of prediction results, so as to ensure a comprehensive and accurate assessment.

In summary, the utilization of AI technology for predicting online clinical apprenticeships holds immense potential in enhancing educational outcomes and satisfaction levels. By means of an intelligent learning process, medical students can gain a deeper understanding of their learning progress and knowledge gaps and then tailor their learning endeavors through the system’s feedback and available resources. Consequently, this approach contributes to enhanced learning effectiveness and clinical competence among medical students, thereby offering robust support for the cultivation of future medical professionals.

Supplementary Information

Supplementary Material 1. (15.4KB, docx)

Acknowledgements

None.

Abbreviations

AI

Artificial Intelligence

ROC

Receiver Operating Characteristic

AUC

Area Under the ROC Curve

LGBM

LightGBM

CNN

Convolutional Neural Network

RNN

Recurrent Neural Network

PBNN

Probabilistic Bayesian Neural Network

GBDT

Gradient Boosting Decision Tree

XGB

XGBoost

SVC

Support Vector Classification

Authors’ contributions

C.M.Z. and Y.Z. wrote the main manuscript text and prepared Figs. 1, 2, 3 and 4. Both authors reviewed the manuscript.

Funding

None.

Data availability

Raw data can be obtained from the BioStudies public database.

Declarations

Ethics approval and consent to participate

Publicly accessible databases may be used for secondary analysis without ethical approval and informed consent, as per Article 32 of China’s National Health Education Commission’s Document No. 2023/4, on research regarding human life sciences and medicine.

Competing interests

The authors declare no competing interests.

Footnotes

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Contributor Information

Yu Zhu, Email: zhuyu07090527@163.com.

Cheng-Mao Zhou, Email: zhouchengmao187@foxmail.com.

References

  • 1.Kasai H, Shikino K, Saito G, et al. Alternative approaches for clinical clerkship during the COVID-19 pandemic: online simulated clinical practice for inpatients and outpatients-A mixed method. BMC Med Educ. 2021;21(1):149. 10.1186/s12909-021-02586-y. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 2.Glaser K, Pazdernik V, Sackett D, Sheridan V. Effect of a required online graded curriculum in the clerkship years on medical student National standardized examination performance. J Osteopath Med. 2021;121(8):673–85. 10.1515/jom-2020-0298. [DOI] [PubMed] [Google Scholar]
  • 3.Civaner MM, Uncu Y, Bulut F, Chalil EG, Tatli A. Artificial intelligence in medical education: a cross-sectional needs assessment. BMC Med Educ. 2022;22(1):772. 10.1186/s12909-022-03852-3. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 4.Wei X, Sun S, Wu D, Zhou L. Personalized online learning resource recommendation based on artificial intelligence and educational psychology. Front Psychol. 2021;12:767837. 10.3389/fpsyg.2021.767837. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5.Redinger KE, Greene JD. Virtual emergency medicine clerkship curriculum during the COVID-19 pandemic: Development, Application, and outcomes. West J Emerg Med. 2021;22(3):792–8. 10.5811/westjem.2021.2.48430. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 6.Zhou CM, Wang Y, Xue Q, Yang JJ, Zhu Y. Predicting early postoperative PONV using multiple machine-learning- and deep-learning-algorithms. BMC Med Res Methodol. 2023;23(1):133. 10.1186/s12874-023-01955-z. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 7.Zhou CM, Wang Y, Yang JJ, Zhu Y. Predicting postoperative gastric cancer prognosis based on inflammatory factors and machine learning technology. BMC Med Inf Decis Mak. 2023;23(1):53. 10.1186/s12911-023-02150-2. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 8.Zhou CM, Wang Y, Xue Q, Zhu Y. Differentiation of bone metastasis in elderly patients with lung adenocarcinoma using multiple machine learning algorithms. Cancer Control. 2023;30:10732748231167958. 10.1177/10732748231167958. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 9.Colthart I, Bagnall G, Evans A, et al. The effectiveness of self-assessment on the identification of learner needs, learner activity, and impact on clinical practice: BEME guide 10. Med Teach. 2008;30(2):124–45. 10.1080/01421590701881699. [DOI] [PubMed] [Google Scholar]
  • 10.Wu SJ, Fan YF, Sun S, Chien CY, Wu YJ. Perceptions of medical students towards and effectiveness of online surgical curriculum: a systematic review. BMC Med Educ. 2021;21(1):571. 10.1186/s12909-021-03014-x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 11.Law JK, Thome PA, Lindeman B, Jackson DC, Lidor AO. Student use and perceptions of mobile technology in clinical clerkships - Guidance for curriculum design. Am J Surg. 2018;215(1):196–9. 10.1016/j.amjsurg.2017.01.038. [DOI] [PubMed] [Google Scholar]
  • 12.Hess G, Miles S, Bowker LK. Placement overlap with other students; effects on medical student learning experience. Teach Learn Med. 2022;34(4):368–78. 10.1080/10401334.2021.1946400. [DOI] [PubMed] [Google Scholar]
  • 13.Kippenbrock T, Emory J, Lee P, Odell E, Buron B, Morrison B. A National survey of nurse practitioners’ patient satisfaction outcomes. Nurs Outlook. 2019;67(6):707–12. 10.1016/j.outlook.2019.04.010. [DOI] [PubMed] [Google Scholar]
  • 14.Xu G, Wolfson P, Robeson M, Rodgers JF, Veloski JJ, Brigham TP. Students’ satisfaction and perceptions of attending physicians’ and residents’ teaching role. Am J Surg. 1998;176(1):46–8. 10.1016/s0002-9610(98)00108-1. [DOI] [PubMed] [Google Scholar]
  • 15.Yagi S, Fukuda D, Ise T, et al. Clinical clerkship students’ preferences and satisfaction regarding online lectures during the COVID-19 pandemic. BMC Med Educ. 2022;22(1):43. 10.1186/s12909-021-03096-7. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 16.Vražić D, Musić L, Barbarić M, Badovinac A, Plančak L, Puhar I. Dental students’ attitudes and perspectives regarding online learning during the COVID-19 pandemic: a Cross-sectional, Multi-university study. Acta Stomatol Croat. 2022;56(4):395–404. 10.15644/asc56/4/6. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 17.Sricharoen P, Yuksen C, Sittichanbuncha Y, Sawanyawisuth K. Teaching emergency medicine with workshops improved medical student satisfaction in emergency medicine education. Adv Med Educ Pract. 2015;6:77–81. 10.2147/AMEP.S72887. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 18.Hennig S, Staatz CE, Bond JA, Leung D, Singleton J. Quizzing for success: evaluation of the impact of feedback quizzes on the experiences and academic performance of undergraduate students in two clinical pharmacokinetics courses. Curr Pharm Teach Learn. 2019;11(7):742–9. 10.1016/j.cptl.2019.03.014. [DOI] [PubMed] [Google Scholar]
  • 19.Ekenze SO, Ugwumba FO, Obi UM, Ekenze OS. Undergraduate surgery clerkship and the choice of surgery as a career: perspective from a developing country. World J Surg. 2013;37(9):2094–100. 10.1007/s00268-013-2073-y. [DOI] [PubMed] [Google Scholar]
  • 20.Oberhelman S, Boswell C, Jensen T, et al. Student experiences and satisfaction with a novel clerkship patient scheduling. Med Educ Online. 2020;25(1):1742963. 10.1080/10872981.2020.1742963. [DOI] [PMC free article] [PubMed] [Google Scholar]


