Journal of Medical Education and Curricular Development. 2017 Mar 15;4:2382120516684829. doi: 10.1177/2382120516684829

Using Learner-Centered, Simulation-Based Training to Improve Medical Students’ Procedural Skills

Serkan Toy 1, Robert SF McKay 1, James L Walker 1, Scott Johnson 1, Jacob L Arnett 1
PMCID: PMC5736291; PMID: 29349329

Abstract

Purpose:

To evaluate the effectiveness of a learner-centered, simulation-based training developed to help medical students improve their procedural skills in intubation, arterial line placement, lumbar puncture, and central line insertion.

Method:

The study participants were second and third year medical students. Anesthesiology residents provided the training and evaluated students’ procedural skills. Two residents were present at each station to train the medical students, who rotated through all 4 stations. Pre/posttraining assessment of confidence, knowledge, and procedural skills was done using a survey, a multiple-choice test, and procedural checklists, respectively.

Results:

In total, 24 students were trained in six 4-hour sessions. Students reported feeling significantly more confident, after training, in performing all 4 procedures on a real patient (P < .001). Paired-samples t tests indicated statistically significant improvement in knowledge scores for intubation, t(23) = −2.92, P = .008, and arterial line placement, t(23) = −2.75, P = .01. Procedural performance scores for intubation (t(23) = −17.29, P < .001), arterial line placement (t(23) = −19.75, P < .001), lumbar puncture (t(23) = −16.27, P < .001), and central line insertion (t(23) = −17.25, P < .001) showed significant improvement. Intraclass correlation coefficients indicated high interrater reliability in checklist scores for all procedures.

Conclusions:

The simulation sessions allowed each medical student to receive individual attention from 2 residents for each procedure. Students’ written comments indicated that this training modality was well received. Results showed that medical students improved their self-confidence, knowledge, and skills in the aforementioned procedures.

Keywords: Simulation, learner-centered, procedural training, medical education, anesthesia

Introduction

Third year medical students face the challenging task of shifting gears from a curriculum heavy in basic sciences to applying this vast amount of medical knowledge to caring for patients. However, 3 decades of research into expertise development demonstrate that mastery of content knowledge does not guarantee successful application of this knowledge in the form of procedural skills.1–3

Based on the recommendations of the Association of American Medical Colleges, every graduating medical student should be able to perform a number of basic procedures, such as venipuncture, intravenous catheter insertion, nasogastric tube insertion, and Foley catheter insertion.4 However, medical students have had difficulty developing self-confidence and competency in procedural skills due to a lack of opportunities to practice within a safe and supervised environment.5,6

Educational researchers indicate that individuals’ performance in real-life settings depends on domain knowledge, which combines knowledge of facts and concepts (content knowledge) with knowledge of how to perform certain operations and procedures (procedural knowledge) in a specific domain such as medicine.7–11 Therefore, an educational program must provide medical students with the necessary content knowledge and procedural skills to perform as expected in diverse clinical settings.

Use of simulation-based training shows improvements in learners’ knowledge, skills, attitudes, and performance.12–15 Moreover, simulation-based training for medical education leads to effective learning as it provides “repetitive practice, ability to integrate into curriculum, ability to alter the degree of difficulty, ability to capture clinical variation, immediate feedback, and approximation of clinical practice.”16

Several studies indicate that medical students typically do not receive standardized hands-on training in advanced procedural skills17–19 (our target population is no exception). Among these skills are intubation, arterial line placement, central line insertion, and lumbar puncture.17 These are advanced and somewhat invasive procedures that can cause discomfort and/or complications for patients. In this study, we provided second and third year medical students with student-centered, simulation-based training on these advanced procedures.

The literature suggests that residents can make a significant contribution to medical student education by providing hands-on learning opportunities and constructive feedback in a safe learning environment and by influencing students’ subsequent career choice, professional growth, and clerkship performance.20–22 In this study, anesthesiology residents provided training and debriefing and also served as the judges of medical students’ procedural skills.

We hypothesized that simulation-based training would improve second and third year medical students’ confidence and content knowledge, as well as their performance of intubation, arterial line placement, central line insertion, and lumbar puncture.

Methods

We designed the simulation training to include 4 procedural skills: intubation, arterial line placement, lumbar puncture, and central line insertion. A total of 8 anesthesiology residents provided the simulation training and served as the judges of the students’ skills. Each resident received a detailed content outline with specific learning objectives and key teaching points for the procedures. Two train-the-trainer sessions were held for the residents, which included standardized teaching of each procedure using the simulation equipment. The residents also received instruction on how to use the assessment checklists and went through a calibration process to maximize interrater consistency: residents evaluated each other using the checklists, compared their scoring, and resolved any discrepancies.

Setting and participants

The University of Kansas Medical Center Institutional Review Board approved the study. The study participants were second and third year medical students at the University of Kansas School of Medicine—Wichita. This training was not part of the regular medical school curriculum, which currently consists of 2 phases. Phase I includes 12 learning modules where year 1 and year 2 medical students learn about core basic science disciplines as well as basic clinical skills (standardized patient encounters), preventive medicine, ethics, and behavioral sciences. Learning modules in this phase use problem-based learning where students engage in a clinical case pertinent to each module in a student-centered, small-group setting. During phase II, year 3 and year 4 medical students complete their required clerkships in core clinical areas and take additional clerkships and electives to help provide well-rounded clinical exposure.

Participation in this training was on a voluntary basis. Time constraints limited this experience to 24 medical students. More students volunteered than we could accommodate, so an equal number of students from each year were randomly selected. For each procedural skill, a station was set up with the appropriate simulation model and the other equipment necessary for that specific procedure.

Data sources/measurement tools

Pre/posttraining assessments of confidence, knowledge, and procedural skills were performed using a survey with a 5-point Likert scale, a 40-item multiple-choice test (the same items were used pre and post), and procedural checklists, respectively. In addition, we documented whether students had previously performed any of the procedures on a real patient and/or on a simulator/task trainer. Study participants filled out a postintervention satisfaction survey designed to measure the effectiveness of the simulation training from the student’s perspective. Finally, 2 open-ended questions captured the aspects of the training most valuable to students as well as their suggestions for improvement (Appendix 1 includes items from the pre/postintervention self-confidence and postintervention satisfaction surveys).

A content outline, including goals and objectives for the simulation training, provided the blueprint for the multiple-choice test (see Appendix 2 for sample test items). This helped ensure content validity for the knowledge test. Items were developed by 4 experienced anesthesiologists to measure pertinent information for each of the procedures. The test was pilot tested with 5 anesthesiology residents for clarity, accuracy, and difficulty level. One of the intubation items and 2 of the lumbar puncture items were found to rely heavily on rote memorization and were replaced with more clinically applicable critical-thinking questions.

Before and immediately after the training, demonstration of actual skills was measured using procedural skills checklists (see Appendix 3 for the procedural skills checklists). Checklists were comprehensive and included all pertinent critical steps for each of the procedures. There were a total of 25 tasks for intubation, 11 for arterial line placement, 15 for central line insertion, and 15 for lumbar puncture. The total number of completed tasks was calculated for each procedure for statistical analyses. Checklists were filled out by the anesthesiology residents supervising the simulated experience: the 2 residents at each station observed each student performing the procedure and independently filled out the checklists, and the average of their ratings was used for statistical analyses.

Statistical analyses

In this study, medical students served as their own controls for statistical analyses. Comparisons between medical students’ baseline and postintervention confidence ratings for the included procedures were made using the Wilcoxon signed rank test. Paired-samples t tests were used to analyze whether there was a statistically significant improvement in students’ knowledge and procedural skills over baseline scores. G*Power estimated that our sample of 24 would be sufficient to detect an effect size of Cohen d = .45 (α = .05) with 80% power for Wilcoxon signed rank tests and paired-samples t tests. For all tests, P < .05 was accepted as statistically significant. Intraclass correlation coefficients (ICCs) were used as a measure of interrater reliability. IBM SPSS Statistics 19 (IBM Corp, Armonk, NY, USA) was used for statistical analyses.
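To make the analysis plan concrete, here is a minimal sketch of the 2 paired pre/post comparisons in Python with SciPy rather than SPSS; the arrays are hypothetical placeholder data generated for illustration, not the study data.

```python
# Illustrative sketch of the paired pre/post comparisons described above,
# using SciPy in place of SPSS. All data here are hypothetical placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 24  # each student serves as his or her own control

# Hypothetical 5-point Likert confidence ratings for one procedure.
pre_confidence = rng.integers(1, 4, size=n)
post_confidence = np.minimum(pre_confidence + rng.integers(1, 3, size=n), 5)

# Wilcoxon signed rank test for the ordinal confidence ratings.
w_stat, w_p = stats.wilcoxon(pre_confidence, post_confidence)
print(f"Wilcoxon signed rank: W = {w_stat:.1f}, P = {w_p:.4f}")

# Hypothetical knowledge scores (out of 10) for one procedure.
pre_knowledge = rng.normal(5.5, 1.2, size=n)
post_knowledge = pre_knowledge + rng.normal(1.0, 1.5, size=n)

# Paired-samples t test for the interval-scaled knowledge/checklist scores.
t_stat, t_p = stats.ttest_rel(pre_knowledge, post_knowledge)
print(f"Paired t test: t({n - 1}) = {t_stat:.2f}, P = {t_p:.4f}")
```

Because each student serves as his or her own control, both tests operate on the within-student pre/post differences.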

Results

In total, 24 second and third year medical students (12 from each year) were trained in six 4-hour sessions. At the beginning of each session, one of the authors (S.T.) explained the study protocol and obtained medical students’ consent for participation. Students spent on average 40 minutes on the pre/post confidence questionnaires and knowledge tests. A total of 3 hours was devoted to the hands-on simulation-based training activities. Each student spent on average 40 minutes at each of the 4 stations: intubation, arterial line placement, lumbar puncture, and central line insertion. Baseline data for procedural skills were obtained while medical students attempted each procedure before receiving any training from residents. Two residents were present at each station to train 1 medical student at a time: one resident explained the critical steps while the other demonstrated how to perform the procedure. Students were then given, on average, 25 minutes to practice each procedure while receiving feedback from the residents. Each medical student rotated through all 4 stations. The posttest took place immediately after the training session.

Regarding prior experience, most students reported not having performed any of the procedures on a real patient: only 3 students (12.5%) had performed intubation, and 1 student (4.2%) had performed central line insertion on a real patient (see Table 1 for students’ self-reported prior experience). Although 17 students (70.8%) reported having a simulated experience in intubation and lumbar puncture, most had no exposure to arterial line placement or central line insertion.

Table 1.

Students’ self-reported prior experience on given procedures.

Have you ever performed…  On a real patient          On a simulator/task trainer
                          Yes (%)      No (%)        Yes (%)      No (%)
Intubation                3 (12.5)     21 (87.5)     17 (70.8)    7 (29.2)
Arterial line placement   0 (0)        24 (100)      1 (4.2)      23 (95.8)
Lumbar puncture           0 (0)        24 (100)      17 (70.8)    7 (29.2)
Central line insertion    1 (4.2)      23 (95.8)     2 (8.3)      22 (91.7)

Students’ self-reported confidence scores

Medical students felt significantly more confident, after training, in performing all 4 procedures on a task trainer/simulator as well as on a real patient, as indicated by Wilcoxon signed rank tests (P < .001) (see Tables 2 and 3 for mean, standard deviation, median, and Wilcoxon signed rank test results for pre- and postsurvey self-confidence scores).

Table 2.

Mean, standard deviation, median, and Wilcoxon signed rank test results for pre- and postsurvey scoresa regarding self-confidence for performing on a task trainer/simulator.

                         Pretest                      Posttest                     Z score   P value
                         Mean   Median   N    SD     Mean   Median   N    SD
Intubation               2.71   3.00     24   1.16   4.38   4.50     24   0.77    −3.97     <.001*
Arterial line placement  1.83   2.00     24   0.96   4.29   4.00     24   0.86    −4.18     <.001*
Lumbar puncture          2.63   3.00     24   1.10   4.33   4.00     24   0.76    −3.89     <.001*
Central line insertion   1.79   1.50     24   0.93   4.13   4.00     24   0.99    −4.19     <.001*
a Prompt: I am confident in my skills performing the following procedures on a task trainer/simulator (5-point Likert scale from 5 = strongly agree to 1 = strongly disagree).

* Denotes statistical significance.

Table 3.

Mean, standard deviation, median, and Wilcoxon signed rank test results for pre- and postsurvey scoresa regarding self-confidence for performing on a real patient.

                         Pretest                      Posttest                     Z score   P value
                         Mean   Median   N    SD     Mean   Median   N    SD
Intubation               1.71   1.50     24   0.86   3.33   3.50     24   0.96    −3.90     <.001*
Arterial line placement  1.13   1.00     24   0.34   3.08   3.00     24   1.02    −4.09     <.001*
Lumbar puncture          1.71   1.00     24   1.00   3.33   3.50     24   0.96    −3.90     <.001*
Central line insertion   1.17   1.00     24   0.38   2.92   3.00     24   1.02    −4.07     <.001*
a Prompt: I am confident in my skills performing the following procedures on a real patient (5-point Likert scale from 5 = strongly agree to 1 = strongly disagree).

* Denotes statistical significance.

Knowledge scores

Participating in the simulation-based training helped medical students improve their knowledge scores on all included procedures immediately after training, although the improvement did not reach significance for lumbar puncture or central line insertion. Paired-samples t tests indicated statistically significant improvement in knowledge scores for intubation (pre-mean = 5.50, SD = 1.22 vs post-mean = 6.58, SD = 1.14), t(23) = −2.92, P = .008, and arterial line placement (pre-mean = 5.71, SD = 1.20 vs post-mean = 6.67, SD = 1.24), t(23) = −2.75, P = .01 (see Table 4).

Table 4.

Mean, standard deviation, and t-test results for pre- and postknowledge test scores.a

                         Pretest              Posttest             t        P value
                         Mean   N    SD      Mean   N    SD
Intubation               5.50   24   1.22    6.58   24   1.14    −2.92    .008*
Arterial line placement  5.71   24   1.20    6.67   24   1.24    −2.75    .01*
Lumbar puncture          4.67   24   1.44    4.75   24   1.36    −.23     .82
Central line insertion   6.71   24   1.23    7.25   24   1.48    −1.80    .09
a Perfect score for each procedure is 10.

* Denotes statistical significance.

Procedural skills

Beyond the improvement in students’ self-reported confidence in performing the included procedures, we examined whether their actual hands-on performance also improved. Paired-samples t tests indicated statistically significant improvement in procedural performance scores for all 4 procedures: intubation (t(23) = −17.29, P < .001), arterial line placement (t(23) = −19.75, P < .001), lumbar puncture (t(23) = −16.27, P < .001), and central line insertion (t(23) = −17.25, P < .001) (see Table 5 for mean, standard deviation, and t-test results for pre- and postprocedural checklist scores).

Table 5.

Mean, standard deviation, and t-test results for pre- and postprocedural checklist scores.a

                         Pretest              Posttest             t         P value
                         Mean   N    SD      Mean    N    SD
Intubation               6.90   24   3.91    19.80   24   2.33    −17.29    <.001*
Arterial line placement  2.40   24   1.70    10.19   24   1.28    −19.75    <.001*
Lumbar puncture          4.85   24   2.22    12.56   24   1.08    −16.27    <.001*
Central line insertion   3.79   24   2.63    13.15   24   2.59    −17.25    <.001*
a Perfect scores for each procedure are as follows: intubation: 25, arterial line placement: 11, lumbar puncture: 15, and central line insertion: 20.

* Denotes statistical significance.

A high degree of agreement was found between the raters’ pretest and posttest checklist scores for all procedures. The highest average ICC was for pretest intubation scores, .928, with a 95% confidence interval from .834 to .969 (F(23,23) = 13.926, P < .001). The lowest average ICC, though still a high degree of agreement, was for posttest lumbar puncture scores, .757, with a 95% confidence interval from .437 to .895 (F(23,23) = 4.108, P = .001).
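The reported degrees of freedom, F(23,23), are consistent with an average-measures, two-way consistency ICC for 24 students each rated by 2 residents, for which ICC = 1 − 1/F (eg, 1 − 1/13.926 ≈ .928). Below is a minimal sketch of that computation from the ANOVA mean squares, using hypothetical ratings rather than the study data.

```python
# Minimal sketch of an average-measures (two-way, consistency) ICC for
# 24 students each scored by 2 residents. Ratings are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
n_students, n_raters = 24, 2

# Hypothetical checklist totals: rows = students, columns = the 2 raters.
true_skill = rng.normal(12, 3, size=(n_students, 1))
ratings = true_skill + rng.normal(0, 1, size=(n_students, n_raters))

grand_mean = ratings.mean()
row_means = ratings.mean(axis=1, keepdims=True)   # per-student means
col_means = ratings.mean(axis=0, keepdims=True)   # per-rater means

# Mean squares from the two-way (students x raters) layout.
ms_rows = n_raters * ((row_means - grand_mean) ** 2).sum() / (n_students - 1)
residual = ratings - row_means - col_means + grand_mean
ms_error = (residual ** 2).sum() / ((n_students - 1) * (n_raters - 1))

# Average-measures consistency ICC: (MSR - MSE) / MSR, i.e., 1 - 1/F.
f_stat = ms_rows / ms_error
icc_avg = 1 - 1 / f_stat
df1, df2 = n_students - 1, (n_students - 1) * (n_raters - 1)
print(f"F({df1},{df2}) = {f_stat:.3f}, average-measures ICC = {icc_avg:.3f}")
```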

We also stratified medical students by year to examine whether there were differences in knowledge and procedural scores based on the year of medical school. We did not find any major difference in pretest or posttest knowledge scores between year 2 and year 3 medical students (see Figure 1). Year 2 and year 3 students also received similar pretest and posttest checklist scores for all procedures (see Figure 2).

Figure 1. Medical students’ knowledge test scores, grouped by year: year 2 medical students (dark bars) vs year 3 medical students (light bars). As indicated by overlapping standard error bars, year of medical school did not have an effect on pretest or posttest knowledge test scores.

Figure 2. Medical students’ procedural skills performance score percentages, grouped by year: year 2 medical students (dark bars) vs year 3 medical students (light bars). As indicated by overlapping standard error bars, year of medical school did not have an effect on pretest or posttest procedural skills scores.

Overall medical student reaction to the simulation-based training

Most of the students (23 of 24) indicated that they were satisfied with the overall experience provided by the learner-centered, simulation-based training; 1 student felt neutral regarding his or her overall satisfaction. The frequencies (percentages) of medical student responses to the satisfaction survey items are shown in Table 6. Students felt that the learning environment was conducive to learning and that the training left them better prepared for clerkships. Responses also indicated that students would recommend this training to other medical students.

Table 6.

Medical student responses to satisfaction survey.

To what extent do you agree/disagree with the following          Strongly agree   Agree       Neutral    Disagree   Strongly disagree   N
I was satisfied with the overall learning experience provided    19 (79.2%)       4 (16.7%)   1 (4.2%)   0 (0%)     0 (0%)              24
Learning environment was conducive to learning                   14 (58.3%)       9 (37.5%)   0 (0%)     1 (4.2%)   0 (0%)              24
Thanks to this training, I feel better prepared for clerkships   15 (62.5%)       7 (29.2%)   2 (8.3%)   0 (0%)     0 (0%)              24
I would recommend this training to others                        19 (79.2%)       4 (16.7%)   1 (4.2%)   0 (0%)     0 (0%)              24

In addition to reinforcing these findings, students’ written feedback highlighted the value of learning from residents and of receiving one-on-one attention and individualized feedback on their performance (see Appendix 4 for written feedback samples organized by common themes). Students also suggested allowing more time for practice and offering multiple training sessions over a period of time. Some students felt that a formal didactic session on the included procedures would help them learn more about the subject matter.

Discussion

Our results showed that medical students improved their self-confidence, knowledge, and skills in the aforementioned procedures through individualized instruction in a simulated environment. Written comments indicated that this training modality was very well received by the students. This finding is in line with the literature on learner-centered educational modalities. Simply providing medical students with task trainers and expecting them to repeatedly practice clinical skills may not necessarily translate into competency. Deliberate practice focused on reaching a well-defined goal is needed to improve clinical skills.23 Some have suggested that in the absence of clear performance guidelines, trainees may fail to assess their own skills accurately and to identify areas for improvement.24 Residents in this study provided much-needed feedback for performance improvement.

One major lesson learned was that covering as many as 4 procedures in 1 training session limited the time spent on debriefing at the end of the hands-on skills training. This likely contributed to the smaller knowledge gains for some procedures. A recent meta-analysis of the effects of teaching advanced airway management using simulation indicates that other studies have found similar results.15

Moreover, as time is a major constraint for busy clinicians, this type of training may be hard to sustain as it requires extensive time commitment from students and instructors and is subject to scheduling conflicts.25 Future studies exploring alternative educational modalities seem warranted given the need for learner-centered, simulation-based training for procedural skills.

In this study, we used Kirkpatrick’s evaluation framework26 as a guide in evaluating the effectiveness of this training. This framework suggests that evaluation of training effectiveness should address (1) participants’ reaction to the training, (2) learning gain, (3) hands-on skill improvement, and (4) actual outcomes occurring as a result of the training. As demonstrated above, we addressed the first 3 steps of this framework. However, performance in a simulated experience may not transfer to actual patient care. Future studies should measure the impact of simulation-based training on patient outcomes using a longitudinal research design.

A significant limitation of this study is that it involved a relatively small group of students from a single institution. A multi-institutional study with a larger sample might produce further insights into learner-centered, simulation-based training for improving medical students’ procedural skills. In addition, the study design relied on the same resident pairs to provide baseline and posttraining scores for medical students’ procedural skills. This could introduce scoring bias because these residents also served as instructors. Ideally, we would have videotaped each student’s baseline and posttraining performance so that blinded raters could provide the scores. However, with 1 medical student at each of the 4 stations at any given time, it proved logistically challenging to capture each performance in a high-quality video covering all the angles necessary for an accurate procedural skills assessment. Given these constraints, real-time skills assessment was the most robust option.

Supplementary Material

Supplementary material
APPENDIX1.pdf (231.1KB, pdf)
Supplementary material
APPENDIX2.pdf (69.5KB, pdf)
Supplementary material
APPENDIX3.pdf (235.9KB, pdf)
Supplementary material
APPENDIX4.pdf (173.8KB, pdf)

Footnotes

Peer Review: Four peer reviewers contributed to the peer review report. Reviewers’ reports totaled 882 words, excluding any confidential comments to the academic editor.

Funding: The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This study was funded by the Medical Alumni Innovative Teaching Fund at the University of Kansas School of Medicine.

Declaration of Conflicting Interests: The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Author Contributions: Conceived and designed the study: ST, RSFM, JLW, SJ, JLA. Analyzed the data: ST. Wrote the first draft of the manuscript: ST. Contributed to the writing of the manuscript: ST, RSFM, JLW, SJ, JLA. Agree with manuscript results and conclusions: ST, RSFM, JLW, SJ, JLA. Made critical revisions and approved final version: ST, RSFM, JLW, SJ, JLA. All authors reviewed and approved the final manuscript.

References

1. Glaser R, Chi M. Overview. In: Chi M, Glaser R, Farr M, eds. The Nature of Expertise. Hillsdale, NJ: Lawrence Erlbaum; 1988:15–27.
2. Pressley M, Snyder BL, Cariglia-Bull T. How can good strategy use be taught to children? In: Cormier SM, Hagman JD, eds. Transfer of Learning: Contemporary Research and Applications. San Diego, CA: Harcourt Brace Jovanovich; 1987:81–120.
3. Spiro R, Feltovich PJ, Jacobson MJ, Coulson RL. Knowledge representation, content specification, and the development of skill in situation-specific knowledge assembly: some constructivist issues as they relate to cognitive flexibility theory and hypertext. Educ Technol. 1991;31:22–25.
4. Learning objectives for medical student education—guidelines for medical schools: report of the Medical School Objectives Project. Acad Med. 1999;74:13–18.
5. Engum SA. Do you know your students’ basic clinical skills exposure? Am J Surg. 2003;186:175–181.
6. Sanchez LD, DelaPena J, Kelly SP, Ban K, Pini R, Perna AM. Procedure lab used to improve confidence in the performance of rarely performed procedures. Eur J Emerg Med. 2006;13:29–31.
7. Glaser R. Education and thinking: the role of knowledge. Am Psychol. 1984;39:93–104.
8. Greeno JG, Collins AM, Resnick LB. Cognition and learning. In: Berliner DC, Calfee RC, eds. Handbook of Educational Psychology. New York, NY: Simon & Schuster; 1996:15–46.
9. Chi MTH, Feltovich PJ, Glaser R. Categorization and representation of physics problems by experts and novices. Cognitive Sci. 1981;5:121–152.
10. Rabinowitz M, Glaser R. Cognitive structure and process in highly competent performance. In: Horowitz FD, O’Brien M, eds. The Gifted and Talented: A Developmental Perspective. Washington, DC: American Psychological Association; 1985:75–98.
11. Perkins DN, Salomon G. Are cognitive skills context bound? Educ Res. 1989;18:16–25.
12. Hunt EA, Heine M, Hohenhaus SM, Luo X, Frush K. Simulated pediatric trauma team management: assessment of an educational intervention. Pediatr Emerg Care. 2007;23:796–804.
13. Hunt EA, Hohenhaus SM, Luo X, Frush K. Simulation of pediatric trauma stabilization in 35 North Carolina emergency departments: identification of targets for performance improvement. Pediatrics. 2006;117:641–648.
14. Sakawi Y, Vetter TR. Airway management and vascular access simulation during a medical student rotation. Clin Teach. 2011;8:48–51.
15. Kennedy CC, Cannon EK, Warner DO, Cook DA. Advanced airway management simulation training in medical education: a systematic review and meta-analysis. Crit Care Med. 2014;42:169–178.
16. Okuda Y, Bryson EO, DeMaria S, et al. The utility of simulation in medical education: what is the evidence? Mt Sinai J Med. 2009;76:330–343.
17. Berg KT, Mealey KJ, Weber DE, et al. Are medical students being taught invasive skills using simulation? Simul Healthc. 2013;8:72–77.
18. Dehmer JJ, Amos KD, Farrell TM, Meyer AA, Newton WP, Meyers MO. Competence and confidence with basic procedural skills: the experience and opinions of fourth-year medical students at a single institution. Acad Med. 2013;88:682–687.
19. Promes SB, Chudgar SM, Grochowski CO, et al. Gaps in procedural experience and competency in medical school graduates. Acad Emerg Med. 2009;16:S58–S62.
20. Talib N, Toy S, Moore K, Quaintance J, Knapp J, Sharma V. Can incorporating inpatient overnight work hours into a pediatric clerkship improve the clerkship experience for students? Acad Med. 2013;88:376–381.
21. Bing-You R, Edwards J. Residents as teachers. In: Distlehorst LH, Dunnington GL, Folse JR, eds. Teaching and Learning in Medical and Surgical Education. Mahwah, NJ: Lawrence Erlbaum; 2000:169–182.
22. Karani R, Fromme HB, Cayea D, Muller D, Schwartz A, Harris IB. How medical students learn from residents in the workplace: a qualitative study. Acad Med. 2014;89:490–496.
23. Ericsson KA, Krampe RT, Tesch-Römer C. The role of deliberate practice in the acquisition of expert performance. Psychol Rev. 1993;100:363–406.
24. Falchikov N, Boud D. Student self-assessment in higher education: a meta-analysis. Rev Educ Res. 1989;59:395–430.
25. Jowett N, LeBlanc V, Xeroulis G, MacRae H, Dubrowski A. Surgical skill acquisition with self-directed practice using computer-based video training. Am J Surg. 2007;193:237–242.
26. Kirkpatrick DL. Techniques for evaluating training. Train Dev J. 1979;33:78–92.

