Abstract
Purpose
Providing feedback to students in the emergency department during their emergency medicine clerkship can be challenging due to time constraints, the logistics of direct observation, and limited privacy. The authors aimed to evaluate the effectiveness of first-person video, captured via Google Glass™, in enhancing feedback quality in medical student education.
Material and methods
As a clerkship requirement, students asked patients and attending physicians to wear the Google Glass™ device to record patient encounters and patient presentations, respectively. Afterwards, during a private, one-on-one session, students reviewed the recordings with faculty, who provided formative and summative feedback. We introduced the intervention to 45 fourth-year medical students who completed their mandatory emergency medicine clerkships at a United States medical school during the 2015–2016 academic year.
Results
Students assessed their performances before and after the review sessions using standardized medical school evaluation forms. We compared students’ self-assessment scores to faculty assessment scores in 14 categories using descriptive statistics and symmetry tests. The overall mean scores, for each of the 14 categories, ranged between 3 and 4 (out of 5) on the self-assessment forms. When evaluating whether self-assessment scores shifted toward the faculty assessment scores, we found no significant change in any of the 14 categories. Although the result was not statistically significant, one fifth of students shifted their assessments of their clinical skills (history taking, performing physical exams, presenting cases, and developing differential diagnoses and plans) toward the faculty assessments after reviewing the video recordings.
Conclusion
First-person video recording nonetheless initiated the feedback process, allocated dedicated time and space for feedback, and may serve as a substitute for direct observation. Additional studies, with different outcomes and larger sample sizes, are needed to understand the effectiveness of first-person video in improving feedback quality.
Keywords: clerkship, emergency medicine, feedback, medical student education, first-person video
Introduction
The emergency medicine (EM) clerkship is a unique clinical experience required by most medical schools. The majority of the learning during the clerkship comes from observing, practicing, and receiving feedback. A key component of training in the EM clerkship is thorough feedback based on various patient interactions; this feedback is critical for students to refine their clinical skills.1 Nonetheless, studies suggest that feedback in EM occurs infrequently.2 Lack of time, space, and privacy in a busy emergency department (ED) environment limits opportunities for direct observation of students’ clinical skills.3 Only one third of clerkship directors met with medical students during the mid-portion of their rotation, and learners spent less than 1% of their time in the ED under direct observation.4 Without witnessing student–patient interactions, feedback is unlikely to be of value.
Previous studies illustrate the use of first-person video recording systems for teaching at various levels of training, from medical students to residents. The recording system is also widely utilized in various fields of medicine, including surgery, family medicine, disaster relief, diagnostics, nursing, autopsy and postmortem examinations, wound care, and different medical subspecialties.5–7 To date, we have found very few studies on the use of this technique for teaching in EM. For this study, we aimed to evaluate the effectiveness of first-person video recording, captured by Google Glass™, in enhancing feedback quality in medical student education during the EM clerkship. This study focuses on the use of Google Glass™ in the ED, a unique medical setting compared with previous Google Glass™-related studies. We hypothesized that reviewing Google Glass™ recordings with faculty feedback would provide students with insightful feedback on their clerkship performance, thereby aligning students’ self-evaluation scores more closely with faculty evaluation scores.
Material and methods
Study protocol
We conducted a cross-sectional study at a United States medical school during the 2015–2016 academic year. During their fourth-year EM clerkship, each student used the Google Glass™ device to record a patient encounter and a patient presentation in an urban, tertiary care, university-based ED. The clerkship director trained the students in the use of Google Glass™. The patient, assisted by the medical student, wore the Google Glass™ device and recorded the student during their encounter. Any ED patient willing to wear the Google Glass™ device to record the student’s patient encounter was included in the study; no exclusion criteria were specified. After completion of the patient encounter, the supervising attending or resident physician wore the Google Glass™ device and recorded the student’s patient presentation. As per the EM clerkship requirements, each student had to attend a 30-minute Google Glass™ feedback session and review their recordings with the clerkship director. Students were asked to assess their performances before and after the review session using standardized medical school evaluation forms (Table 1). The evaluation form was created by the division of medical education of the medical school, which has used this form in all required clerkships, including family medicine, surgery, internal medicine, and gynecology, for the past five years.
Table 1.
Core competency assessment^a | Assessment scores | ||||
---|---|---|---|---|---|
Knowledge: knowledge base of relevant basic and clinical science areas (k) | Problematic: not at expected level of proficiency in this area [1] | Adequate but below expected proficiency level [2] | At expected (Average) [3] | Above expected for level of training [4] | Clearly outstanding (top 5%–10% of all students) [5] |
Patient care: observed history and physical examination skills (pc1) | Problematic: not at expected level of proficiency in this area [1] | Adequate but below expected proficiency level [2] | At expected (Average) [3] | Above expected for level of training [4] | Clearly outstanding (top 5%–10% of all students) [5] |
Patient care: ability to present a patient case with appropriate coherence, organization, and length (pc2) | Problematic: not at expected level of proficiency in this area [1] | Adequate but below expected proficiency level [2] | At expected (Average) [3] | Above expected for level of training [4] | Clearly outstanding (top 5%–10% of all students) [5] |
Patient care: ability to create an appropriate and prioritized differential diagnosis (pc3) | Problematic: not at expected level of proficiency in this area [1] | Adequate but below expected proficiency level [2] | At expected (Average) [3] | Above expected for level of training [4] | Clearly outstanding (top 5%–10% of all students) [5] |
Patient care: ability to devise a rational plan appropriate to the differential diagnosis (pc4) | Problematic: not at expected level of proficiency in this area [1] | Adequate but below expected proficiency level [2] | At expected (Average) [3] | Above expected for level of training [4] | Clearly outstanding (top 5%–10% of all students) [5] |
Practice-based learning: motivation for learning and enthusiasm for teaching others (pbl1) | Problematic: not at expected level of proficiency in this area [1] | Adequate but below expected proficiency level [2] | At expected (Average) [3] | Above expected for level of training [4] | Clearly outstanding (top 5%–10% of all students) [5] |
Practice-based learning: informatics and critical appraisal skills (pbl2) | Problematic: not at expected level of proficiency in this area [1] | Adequate but below expected proficiency level [2] | At expected (Average) [3] | Above expected for level of training [4] | Clearly outstanding (top 5%–10% of all students) [5] |
Practice-based learning: self directed learning skills and likelihood of becoming an effective lifelong learner (pbl3) | Problematic: not at expected level of proficiency in this area [1] | Adequate but below expected proficiency level [2] | At expected (Average) [3] | Above expected for level of training [4] | Clearly outstanding (top 5%–10% of all students) [5] |
Interpersonal and communication skills: therapeutically and ethically sound patient relationships (ic1) | Problematic: not at expected level of proficiency in this area [1] | Adequate but below expected proficiency level [2] | At expected (Average) [3] | Above expected for level of training [4] | Clearly outstanding (top 5%–10% of all students) [5] |
Interpersonal and communication skills: use of open-ended and facilitative interviewing techniques (ic2) | Problematic: not at expected level of proficiency in this area [1] | Adequate but below expected proficiency level [2] | At expected (Average) [3] | Above expected for level of training [4] | Clearly outstanding (top 5%–10% of all students) [5] |
Professionalism: integrity, accountability, and teamwork (p1) | Problematic: not at expected level of proficiency in this area [1] | Adequate but below expected proficiency level [2] | At expected (Average) [3] | Above expected for level of training [4] | Clearly outstanding (top 5%–10% of all students) [5] |
Professionalism: humanistic qualities and respect for diversity (p2) | Problematic: not at expected level of proficiency in this area [1] | Adequate but below expected proficiency level [2] | At expected (Average) [3] | Above expected for level of training [4] | Clearly outstanding (top 5%–10% of all students) [5] |
Professionalism: sensitivity and responsiveness to patients’ culture, age, gender, and disabilities (p3) | Problematic: not at expected level of proficiency in this area [1] | Adequate but below expected proficiency level [2] | At expected (Average) [3] | Above expected for level of training [4] | Clearly outstanding (top 5%–10% of all students) [5] |
Systems-based practice: understanding of health systems, population health, and socioeconomic implications of care (sb1) | Problematic: not at expected level of proficiency in this area [1] | Adequate but below expected proficiency level [2] | At expected (Average) [3] | Above expected for level of training [4] | Clearly outstanding (top 5%–10% of all students) [5] |
Note: ^a Corresponding variables for each question in parentheses.
Abbreviations: k, knowledge; pc, patient care; pbl, practice-based learning; ic, interpersonal and communication skills; p, professionalism; sb, systems-based practice.
We collected students’ pre- and post-self-assessment forms and obtained standardized faculty assessment forms. The standardized faculty assessment forms were a collation of individual shift evaluations completed by multiple faculty evaluators; the clerkship director completed these forms and submitted them as the students’ grades for the clerkship. Each form consists of 14 categories that evaluate students on six core competencies: knowledge, patient care, practice-based learning, interpersonal and communication skills, professionalism, and systems-based practice, mirroring residency training objectives. The form reports scores on a Likert scale (from 1 to 5): problematic (not at expected level of proficiency in this area), adequate (below expected level), at expected level, above expected level, and clearly outstanding. The study was approved by the Institutional Review Board at the university as exempt.
Ethics statement
This study has been approved by the UC Irvine Institutional Review Board (UCI IRB) as Exempt from Federal regulations in accordance with 45 CFR 46.101. Informed consent was waived by the UCI IRB for this study.
Data analysis
We compared the pre-self-assessment, post-self-assessment, and faculty assessment scores using descriptive statistics. We conducted Stuart–Maxwell symmetry and marginal homogeneity tests to obtain the Stuart–Maxwell test statistic for each variable and to determine whether the pre- and post-self-assessment scores deviated from the faculty evaluation scores. A positive value indicates that the student’s post-self-assessment score moved closer to the faculty evaluation score than the pre-self-assessment score did; a negative value indicates that it moved further away; and a value of 0 indicates no change in either direction. A p-value of 0.05 or less was considered significant. Statistical analyses were performed using Stata Statistical Software: Release 14 (StataCorp LP, College Station, TX, USA).
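For readers who want to reproduce the directional measure described above, a minimal sketch follows. It is written in Python with pandas and statsmodels rather than the Stata commands the authors used; the sample scores and column names (pre, post, faculty) are hypothetical, and treating the Stuart–Maxwell marginal homogeneity test as a comparison of the paired pre- and post-session ratings is only one plausible reading of the analysis.

```python
# Illustrative sketch only: computes the directional change measure described
# in the Methods and a Stuart-Maxwell marginal homogeneity test for a single
# competency. The data and column names are hypothetical.
import pandas as pd
from statsmodels.stats.contingency_tables import SquareTable

# Hypothetical paired Likert scores (1-5) for one competency.
scores = pd.DataFrame({
    "pre":     [3, 4, 3, 5, 4, 3, 4, 2, 3, 4],   # pre-session self-assessment
    "post":    [4, 4, 3, 4, 4, 3, 4, 3, 3, 4],   # post-session self-assessment
    "faculty": [4, 3, 4, 4, 4, 4, 4, 3, 4, 4],   # faculty evaluation
})

# Directional change: positive if the post-session self-assessment sits closer
# to the faculty score than the pre-session one did, negative if further away,
# and 0 if the distance is unchanged.
change = (scores["pre"] - scores["faculty"]).abs() - (scores["post"] - scores["faculty"]).abs()
print(change.value_counts().sort_index())

# Stuart-Maxwell test of marginal homogeneity between the paired pre- and
# post-session ratings. The crosstab is forced to a square 5x5 table so that
# all Likert levels are represented even if unused in the sample.
levels = [1, 2, 3, 4, 5]
table = pd.crosstab(
    pd.Categorical(scores["pre"], categories=levels),
    pd.Categorical(scores["post"], categories=levels),
    dropna=False,
)
result = SquareTable(table.to_numpy()).homogeneity(method="stuart_maxwell")
print(f"chi2 = {result.statistic:.2f}, df = {result.df}, p = {result.pvalue:.3f}")
```

In this sketch, a p-value above 0.05 would be interpreted the same way as in Table 3: no significant shift of the self-assessment scores in either direction.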
Results
We reviewed a total of 135 assessment forms from 45 participants: 45 pre-session (students’ pre-self-assessments), 45 post-session (students’ post-self-assessments), and 45 faculty evaluations. The overall mean scores for all forms, for each of the 14 categories, ranged between “at expected level” (3) and “above expected level” (4) (Table 2 and Figure 1). Two missing scores in the practice-based learning category were accounted for in the statistical analyses.
Table 2.
Core competency variable | Student pre-self-assessment Mean (95% CI) [n]^a | Student post-self-assessment Mean (95% CI) [n] | Faculty assessment Mean (95% CI) [n] |
---|---|---|---|
Knowledge | 3.4 (3.20–3.60) | 3.4 (3.21–3.59) | 3.8 (3.57–3.99) |
k | [n=45] | [n=45] | [n=45] |
Patient care | 3.8 (3.63–4.06) | 3.6 (3.42–3.83) | 3.8 (3.60–4.05) |
pc1 | [n=45] | [n=45] | [n=45] |
Patient care | 3.6 (3.41–3.84) | 3.6 (3.33–3.78) | 3.8 (3.58–4.02) |
pc2 | [n=45] | [n=45] | [n=45] |
Patient care | 3.6 (3.36–3.75) | 3.6 (3.41–3.84) | 3.9 (3.70–4.17) |
pc3 | [n=45] | [n=45] | [n=45] |
Patient care | 3.4 (3.25–3.60) | 3.5 (3.27–3.67) | 3.9 (3.71–4.16) |
pc4 | [n=45] | [n=45] | [n=45] |
Practice-based learning | 4.1 (3.93–4.33) | 4.2 (3.94–4.41) | 3.9 (3.65–4.08) |
pbl1 | [n=45] | [n=45] | [n=45] |
Practice-based learning | 3.6 (3.33–3.83) | 3.6 (3.33–3.78) | 3.84 (3.63–4.05) |
pbl2 | [n=43] | [n=43] | [n=43] |
Practice-based learning | 4.1 (3.90–4.33) | 4.1 (3.90–4.33) | 3.8 (3.64–4.05) |
pbl3 | [n=45] | [n=45] | [n=45] |
Interpersonal and communication skills | 4.3 (4.04–4.49) | 4.1 (3.83–4.35) | 3.9 (3.66–4.11) |
ic1 | [n=45] | [n=45] | [n=45] |
Interpersonal and communication skills | 3.9 (3.65–4.08) | 3.8 (3.56–4.04) | 3.9 (3.66–4.11) |
ic2 | [n=45] | [n=45] | [n=45] |
Professionalism | 4.2 (4.02–4.38) | 4.1 (3.93–4.33) | 4.0 (3.75–4.20) |
p1 | [n=45] | [n=45] | [n=45] |
Professionalism | 4.2 (3.99–4.41) | 4.1 (3.92–4.34) | 3.9 (3.68–4.14) |
p2 | [n=45] | [n=45] | [n=45] |
Professionalism | 4.2 (3.99–4.41) | 4.1 (3.89–4.29) | 3.9 (3.71–4.16) |
p3 | [n=45] | [n=45] | [n=45] |
Systems-based practice | 3.6 (3.38–3.82) | 3.7 (3.44–3.89) | 3.6 (3.41–3.79) |
sb1 | [n=45] | [n=45] | [n=45] |
Note: ^a n: sample size.
Abbreviations: k, knowledge; pc, patient care; pbl, practice-based learning; ic, interpersonal and communication skills; p, professionalism; sb, systems-based practice.
The symmetry test analyses show that the majority of student scores did not change in either direction when comparing the pre- and post-self-assessment scores with the faculty evaluation scores. None of the results were statistically significant (Table 3).
Table 3.
Core competency variable | −1 [n]^a | 0 [n] | +1 [n] | +2 [n] | p-value (Prob>chi2) |
---|---|---|---|---|---|
Knowledge | 4.4% | 86.7% | 8.9% | – | 0.36 |
k | [n=2] | [n=39] | [n=4] | ||
Patient care | 20.0% | 64.4% | 15.6% | – | 0.31 |
pc1 | [n=9] | [n=29] | [n=7] | ||
Patient care | 17.8% | 62.2% | 20.0% | – | 0.42 |
pc2 | [n=8] | [n=28] | [n=9] | ||
Patient care | 20.0% | 71.1% | 8.9% | – | 0.24 |
pc3 | [n=9] | [n=32] | [n=4] | ||
Patient care | 11.1% | 73.3% | 15.6% | – | 0.80 |
pc4 | [n=5] | [n=33] | [n=7] | ||
Practice-based learning | 11.1% | 84.4% | 2.2% | 2.2% | 0.28 |
pbl1 | [n=5] | [n=38] | [n=1] | [n=1] | |
Practice-based learning | 9.3% | 79.1% | 11.6% | – | 0.93 |
pbl2 | [n=4] | [n=34] | [n=5] | ||
Practice-based learning | 6.7% | 86.7% | 6.7% | – | 0.54 |
pbl3 | [n=3] | [n=39] | [n=3] | ||
Interpersonal and communication skills | 15.6% | 77.8% | 6.7% | – | 0.51 |
ic1 | [n=7] | [n=35] | [n=3] | ||
Interpersonal and communication skills | 22.2% | 60.0% | 15.6% | 2.2% | 0.55 |
ic2 | [n=10] | [n=27] | [n=7] | [n=1] | |
Professionalism | 11.1% | 80.0% | 8.9% | – | 0.90 |
p1 | [n=5] | [n=36] | [n=4] | ||
Professionalism | 13.3% | 75.6% | 11.1% | – | 0.93 |
p2 | [n=6] | [n=34] | [n=5] | ||
Professionalism | 6.7% | 82.2% | 8.9% | 2.2% | 0.72 |
p3 | [n=3] | [n=37] | [n=4] | [n=1] | |
Systems-based practice | 8.9% | 80.0% | 11.1% | – | 0.93 |
sb1 | [n=4] | [n=36] | [n=5] |
Notes: ^a n: sample size; negative value (ie, −1): the student’s post-self-assessment score moved further from the faculty evaluation score; 0: no change; positive value (ie, +1, +2): the student’s post-self-assessment score moved closer to the faculty evaluation score.
Discussion
The use of first-person video recording in the feedback and assessment of medical students has increased considerably in the last decade. Educators apply this modality to enhance the learning experience, perform assessments, and teach procedures. The majority of available studies illustrate that first-person video is a useful learning aid in various settings, including operating rooms, primary care clinics, and standardized patient encounters, but not the ED.5–7 Videos reviewed by students, with expert feedback, can improve student performance.8 Our study confirmed the feasibility of using Google Glass™ with ED patients, but we did not find a significant impact of incorporating Google Glass™ into the mandatory feedback sessions on learners’ self-perceptions.9
Google Glass™ offered several theoretical advantages for giving feedback to learners. First, Google Glass™ provided context and refreshed learners’ memories of their patient encounters, which should enhance the effect of feedback.10 Reviewing video clips allowed the educator to reference specific clinical skills and highlight aspects that the student may have otherwise overlooked. Second, Google Glass™ created an opportunity for educators and learners to talk one-on-one in a designated space and time, a rare opportunity within a busy ED environment. The educator and learner each prepared for this feedback session, which, in turn, encouraged a positive learning environment. Third, Google Glass™ recordings allowed learners to see themselves from the patient’s point of view; this unique perspective re-emphasizes the importance of nonverbal communication skills and the student’s professionalism. Lastly, in addition to providing feedback, first-person review of student performance has significant potential to improve the summative evaluation process at the end of a clerkship: with the Google Glass™ videos, faculty can thoroughly review a student’s presentation skills and patient interactions at their convenience. By providing a more prepared environment for the feedback sessions, we expected students to be more receptive to constructive feedback from faculty.
Although the result was not statistically significant, one fifth of students shifted their assessments of their clinical skills (history taking, performing physical exams, presenting cases, and developing differential diagnoses and plans) toward the faculty assessments after reviewing the video recordings. This suggests that Google Glass™ could serve as an alternative to the directly observed history and physical examination skills typically required by the Liaison Committee on Medical Education.
By including a mandatory, first-person video component, our clerkship curriculum ensured that every student received at least one opportunity for meaningful feedback. As described in the ABC of Learning and Teaching in Medicine, “a good course ensures that regular feedback opportunities are built in, so that both teachers and learners come to expect and plan for them.”10
Limitations
Our study found that Google Glass™ video recordings had no statistically significant impact on student self-assessment scoring when compared with faculty evaluation scoring. It is unclear whether this result reflects the ineffectiveness of first-person video itself in altering student perceptions or the quality of the faculty–student review sessions. Faculty who provide feedback also play an important role in the success of this process, and unstructured or unconstructive feedback could have contributed to the minimal changes in our findings. Faculty should receive formal training on giving feedback to ensure the effectiveness of the review session.
We used the faculty evaluations as our gold standard with the understanding that they are an imperfect gold standard. Future studies should consider using assessments from patients in conjunction with faculty assessments, or additional Google Glass™ recordings after the initial feedback session, to better evaluate the effectiveness of this feedback process.
There are limitations in the study design, as this was a cross-sectional study with only 45 medical student records. A larger sample size would provide more accurate comparisons with greater generalizability. Additionally, comparing a “Google Glass™ video” group with a control “no Google Glass™ video” group could provide more information regarding how influential the Google Glass™ videos are when students complete the pre- and post-self-assessment forms. Furthermore, demographic information, such as participant gender and age, patients’ chief complaints, and attending physicians’ work experience, was not collected for this study, which limits the generalizability of these findings to other clerkships and medical schools.
There may also have been selection bias in choosing which patients recorded the students’ interactions: students may have asked patients with whom they had better rapport. We must also consider the Hawthorne effect: students may have performed differently because they knew they were being recorded by the Google Glass™ device and that faculty would review the videos later. As a result, students may have performed more thoroughly and professionally during the recordings.
Educators should be aware that self-evaluation does not correlate absolutely with clinical practice; there is no concrete evidence that self-evaluation scores predict how a student will perform in future clinical settings.
Conclusion
Although the study did not demonstrate statistically significant changes in students’ perspectives of their clerkship performance, reviewing first-person video recordings of medical students’ clinical interactions during mandatory feedback sessions could offer various advantages to both learners and educators. Future prospective studies, with larger sample sizes and different measurable outcomes, are needed.
Data sharing statement
The data that support the findings of this study are available from the corresponding author upon request.
Footnotes
Disclosure
The authors report no conflicts of interest in this work.
References
1. Bernard AW, Kman NE, Khandelwal S. Feedback in the emergency medicine clerkship. West J Emerg Med. 2011;12(4):537–542. doi:10.5811/westjem.2010.9.2014
2. Yarris LM, Linden JA, Gene Hern H, et al. Attending and resident satisfaction with feedback in the emergency department. Acad Emerg Med. 2009;16(Suppl 2):S76–S81. doi:10.1111/j.1553-2712.2009.00592.x
3. Fromme HB, Karani R, Downing SM. Direct observation in medical education: a review of the literature and evidence for validity. Mt Sinai J Med. 2009;76(4):365–371. doi:10.1002/msj.20123
4. Khandelwal S, Way DP, Wald DA, et al. State of undergraduate education in emergency medicine: a national survey of clerkship directors. Acad Emerg Med. 2014;21(1):92–95. doi:10.1111/acem.12290
5. Wei NJ, Dougherty B, Myers A, Badawy SM. Using Google Glass™ in surgical settings: systematic review. JMIR Mhealth Uhealth. 2018;6(3):e54. doi:10.2196/mhealth.9409
6. Youm J, Wiechmann W. Formative feedback from the first-person perspective using Google Glass™ in a family medicine objective structured clinical examination station in the United States. J Educ Eval Health Prof. 2018;15:5. doi:10.3352/jeehp.2018.15.5
7. Dougherty B, Badawy SM. Using Google Glass™ in nonsurgical medical settings: systematic review. JMIR Mhealth Uhealth. 2017;5(10):e159. doi:10.2196/mhealth.8671
8. Hammoud MM, Morgan HK, Edwards ME, Lyon JA, White C. Is video review of patient encounters an effective tool for medical student learning? A review of the literature. Adv Med Educ Pract. 2012;3:19–30. doi:10.2147/AMEP.S20219
9. Tully J, Dameff C, Kaib S, Moffitt M. Recording medical students’ encounters with standardized patients using Google Glass: providing end-of-life clinical education. Acad Med. 2015;90(3):314–316. doi:10.1097/ACM.0000000000000620
10. Cantillon P, Wood D, editors. ABC of Learning and Teaching in Medicine. 2nd ed. West Sussex, United Kingdom: John Wiley & Sons, Ltd; 2010.