Abstract
Background
Simulation in medical education provides students with opportunities to practice interviews, examinations, and diagnosis formulation related to complex conditions without risks to patients.
Aim
To examine how individual versus team participation affects learning outcomes and student perspectives when virtual patients (VPs) are used to teach cranial nerve (CN) evaluation.
Methods
Fifty-seven medical students were randomly assigned to complete simulation exercises either as individuals or as members of three-person teams. Students interviewed, examined, and diagnosed VPs with possible CN damage in the Neurological Exam Rehearsal Virtual Environment (NERVE). Knowledge of CN abnormalities was assessed pre- and post-simulation. Student perspectives of system usability were evaluated post-simulation.
Results
An aptitude-treatment interaction (ATI) effect was detected; among students with pre-test scores ≤50%, those in teams scored higher at post-test (83%) than those working individually (62%; p = 0.02). Post-simulation, students in teams reported greater confidence in their ability to diagnose CN abnormalities than did students working individually (p = 0.02; mean ratings = 4.0/5.0 and 3.4/5.0, respectively).
Conclusion
The ATI effect allows us to begin defining best practices for the integration of VP simulators into the medical curriculum. We are persuaded to implement future NERVE exercises with small teams of medical students.
Introduction
Medical student education involves a complex interplay between, and dependence upon, didactic learning and clinically relevant material. While medical educators can schedule the presentation of didactic content as desired, clinical exposure has historically depended largely upon a number of extraneous factors, including clinical volume at a particular practice site, referral patterns, and ultimately, some degree of serendipity. The Liaison Committee on Medical Education (LCME) recognizes this inconsistency in clinical exposure as a serious matter and, consequently, has established Standard ED-2, mandating that faculty define the types of patients and clinical conditions that students will encounter. Furthermore, if exposure to a live patient with a particular clinical condition is not available, it is specified that “the medical student should be able to remedy the gap by a simulated experience (e.g., a standardized patient experience, an online or paper case)” (LCME 2011, p. 7). This regulation is spurring the development of novel simulation systems capable of mimicking complex medical conditions not previously addressed by existing simulation modalities. Virtual patients (VPs) presented through a computer-based medium are of particular interest due to their capacity to display a variety of complex pathologies and clinical scenarios in a cost-effective and easily distributable format (Cook & Triola 2009).
Teaching the diagnosis of cranial nerve abnormalities
One content area in which VPs hold great promise for medical education relates to teaching the diagnosis of neurological abnormalities, such as cranial nerve (CN) disorders. Diagnosis of CN palsies is based primarily on interpreting abnormal findings in the context of the patient’s medical history (Kutschke 1996); however, students currently learn diagnosis primarily through lecture, textbook, and video-based didactics (Gelb et al. 2002). These methods may not sufficiently engage the critical thinking skills required for information synthesis and diagnosis formation. Supervised patient encounters are also utilized for teaching in this topic area, but such exposure can be difficult to provide, as CN palsies are relatively uncommon. Additionally, the abnormal findings displayed by patients with CN palsies are not addressed by existing manikin or physical simulators, and cannot be reproduced using standardized patients. The limited opportunities for practice and evaluation are associated with students’ reports of a low level of knowledge of the neurological exam and low confidence in their abilities in this domain (Schon et al. 2002; Moore & Chalk 2009).
To fill this gap, researchers created a computer-based VP system, the Neurological Exam Rehearsal Virtual Environment (NERVE; Kotranza et al. 2009), for teaching and practicing the assessment and diagnosis of CN abnormalities. NERVE supports conversations between the user and the VP; the VP can respond verbally or behaviorally to user-typed questions and instructions. For example, the VP can answer the question, “When did your double vision start?” and can follow the directive, “Look straight ahead” (Figure 1). The system presently recognizes over 1000 questions and can respond with over 200 pieces of information. In conjunction with the system’s communication feature and virtual examination tools (e.g., eye chart, ophthalmoscope), embedded clinical scenarios provide opportunities for students to practice conducting patient interviews, identifying the physical findings related to CN injuries, and synthesizing acquired information to formulate a diagnosis. NERVE was pilot tested with a cohort of medical students five months prior to this investigation, and system improvements were made based on the outcomes of the pilot study.
Optimizing the use of simulation in medical education
Understanding the curricular implications of NERVE is essential, especially as a call for research put forth by the Association of American Medical Colleges’ (AAMC) Institute for Improving Medical Education encourages educators to examine how simulation technologies should most effectively be integrated into the medical curriculum (AAMC 2007). From a pragmatic standpoint, the use of simulation has created a number of challenges for medical educators in terms of scholastic resources (e.g., faculty, space) and student scheduling. Computer-based VP simulations are affordable, widely distributable, and can be made accessible on demand, making it possible for students to complete simulation exercises beyond the classroom.
An alternative means of enhancing the efficacy of simulation in medical education may be to conduct simulation activities in small teams, as the collaborative learning process is suggested to promote critical thinking and to enhance engagement in a training activity (Gokhale 1995; Kraiger 2008). The appropriateness of team-based learning for interacting with VPs, however, remains largely unexamined.
Accordingly, the purpose of this study is to examine how the social context in which one interacts with VPs in NERVE (i.e., as an individual or as a member of a small team) influences learning and subsequent student perspectives. Results from this study aim to provide educators with initial guidance as to how VP systems may most effectively be implemented in a medical curriculum.
Methods
Participants
Fifty-seven Year 2 medical students from the University of Central Florida (UCF) College of Medicine (COM) elected to participate in this study, following the study’s approval by the UCF Institutional Review Board. The study occurred in conjunction with a required simulation activity in COM’s P-2: Practice of Medicine module; however, consent to use data for analysis and reporting was voluntary. All students in attendance for the activity consented to participate.
A pre-training questionnaire requesting basic demographic information and self-reported simulator use, gaming experience, and interest and confidence in the area of neurology was completed by all students. Participants included 31 females (54%) and 26 males (46%), ranging in age from 21 to 37 years (mean = 24.7, standard deviation = 2.7). All participants were in their second week of Year 2 courses and had completed instruction in basic neuroanatomy as part of a larger, integrated module in the second half of Year 1. Few students (7%) indicated any prior use of a neurologic simulator. Participants in the team treatment reported slightly greater confidence in their knowledge of neurology; the treatment groups were otherwise deemed equivalent in composition.
Procedure
Students were randomly assigned to complete the NERVE simulation exercises either as individuals (n = 27) or as members of a team (n = 30); students assigned to the team treatment were further randomly assigned to one of 10 teams, each composed of three members (Figure 2). The simulation activity was conducted in the Clinical Skills and Simulation Center at COM using desktop computers installed with the NERVE scenarios.
Students first completed a brief pre-training questionnaire related to participant details. Pre-training knowledge of CN abnormalities was next measured using a 12-item test comprising three item types: (1) seven multiple-choice items, (2) two open-ended items requiring recognition of abnormal physical findings from a series of static photographs, and (3) three open-ended items based on short clinical examination videos (Larsen & Stensaas 2012). The video-based test items asked students to indicate the CN and laterality that could lead to the physical presentation being viewed. Test items were constructed by the author (JC) to match objectives and content specifically addressed by the NERVE activity; multiple-choice and image-based items were pilot tested with a cohort of medical students five months prior to this investigation. The items were randomly placed on the pre-test, and all students completed this randomized version.
The NERVE learning activity consisted of distinct components, during which students worked in their respective treatments. Students were first exposed to a 10-minute interactive, self-paced tutorial designed to introduce the rules of system engagement, methods to enhance the communication exchange between the user and the VP, and guided instruction and practice for performing physical examination maneuvers within the virtual environment. Students then proceeded to the unguided learning exercises which revolved around a series of three VP scenarios: (1) non-CN-related visual abnormality, (2) CN III abnormality, and (3) CN VI abnormality. For each VP, students gathered history information, performed a physical examination, and completed an online patient case note that requested documentation of (1) patient history, (2) pertinent positives, (3) pertinent negatives, and (4) diagnosis and justification. Following submission of each case note, detailed feedback was provided to the students using Flash-based modules created by the author (JC) that outlined key case information to reflect the diagnostic thought process (Figure 3). After reviewing each feedback module, students moved on to the next case; the case sequence was consistent for all students. Although participants were allowed to self-pace through this series, the entire learning session, including three VP scenarios, case note form submission, and feedback review, lasted an average of 100 min.
Immediately following the learning session, students completed a CN abnormality knowledge post-test containing the same 12 items as the pre-test, but presented in a different random order. Lastly, students were presented with a survey to offer their perspectives on the educational experience, including items rating overall satisfaction, degree of learning, and utility of various system elements. Ratings were based on a five-point Likert-type scale, where 1 = strongly disagree and 5 = strongly agree. Open-ended items were also included on the survey to allow students to comment on strengths, weaknesses, and suggestions for improvement of the activity in general, and perceived advantages and disadvantages to participating in the activity in their relative treatments. All study instruments were created and deployed online via Survey Monkey™.
Differences in treatment protocol
The basic study procedure and materials were the same for both treatments; however, two primary features distinguished the team treatment. First, team members were explicitly instructed to work together as a team to complete the tutorial, simulation scenarios, and feedback review, and were free to share ideas and assist one another in determining content for inclusion on their case note forms (although every student, regardless of treatment, was still instructed to submit an individual case note form). Second, one NERVE computer station was shared by all team members, rather than each individual having direct access to control the simulation; teams were afforded the latitude to determine their own plan for system control during the activity. All students had access to their college-issued laptops for individual submission of the pre-training questionnaire, knowledge pre-test, case notes, knowledge post-test, and post-simulation survey (Figure 2).
Statistical analysis
Treatment differences
Analysis of covariance using pre-test score as a covariate was originally planned for investigating differences in post-test scores between treatments; however, heterogeneity of slopes suggested the presence of an aptitude-treatment interaction (ATI) effect (i.e., treatments appeared to have a differential effect on students’ post-test performance depending on student aptitude, or pre-test score; Cronbach & Snow 1981; Pedhazur & Schmelkin 1991). Regression analysis confirmed the presence of a significant ATI effect. Simultaneous regions of significance were subsequently calculated to identify the pre-test score ranges for which treatments differed significantly on the post-test. Calculations were based upon formulae constructed by Potthoff (1964) as a modification to the Johnson–Neyman procedure (Aiken & West 1991; Pedhazur & Schmelkin 1991; D’Alonzo 2004). Independent samples t-tests were used as follow-up tests to further describe the inferences of such calculations for a sub-set of the sample.
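For readers wishing to reproduce this style of analysis, a minimal sketch follows, written in Python with simulated stand-in data (the study data are not reproduced here); it fits the interaction model and solves for Potthoff-adjusted simultaneous regions of significance:

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

# Simulated stand-in data: pre-/post-test scores on a 12-item test and a
# treatment indicator (0 = individual, 1 = team). Values are illustrative only.
rng = np.random.default_rng(0)
n = 55
pre = rng.integers(3, 13, size=n).astype(float)
group = (np.arange(n) % 2).astype(float)
post = 4 + 0.6 * pre + 5.0 * group - 0.5 * pre * group + rng.normal(0, 1.2, size=n)

# Model with interaction: post = b0 + b1*pre + b2*group + b3*(pre*group).
# A significant b3 signals an aptitude-treatment interaction (ATI).
X = sm.add_constant(np.column_stack([pre, group, pre * group]))
fit = sm.OLS(post, X).fit()
b, V = fit.params, fit.cov_params()
print(f"interaction term p-value: {fit.pvalues[3]:.3f}")

# Regions of significance: the between-treatment difference at pre-test score x
# is d(x) = b2 + b3*x, and the boundaries solve d(x)^2 = c * Var(d(x)),
# a quadratic in x. Potthoff's (1964) simultaneous regions replace the squared
# t critical value with c = 2 * F(alpha; 2, N - 4).
c = 2 * stats.f.ppf(0.95, 2, fit.df_resid)
A = b[3] ** 2 - c * V[3, 3]
B = 2 * (b[2] * b[3] - c * V[2, 3])
C = b[2] ** 2 - c * V[2, 2]
disc = B ** 2 - 4 * A * C
if A > 0 and disc > 0:
    lo, hi = sorted([(-B - disc ** 0.5) / (2 * A), (-B + disc ** 0.5) / (2 * A)])
    print(f"treatments differ significantly at pre-test scores outside [{lo:.1f}, {hi:.1f}]")
```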
Student perspectives
Likert-type ratings on the survey were treated as interval-level data; accordingly, independent samples t-tests were used to evaluate between-group differences. Responses to open-ended survey items were explored through thematic content analysis, and nominal survey data were examined with chi-squared analysis.
Alpha was set at 0.05 for all tests of statistical significance, and all analyses were conducted using the Statistical Package for the Social Sciences 20.0 (SPSS, Chicago, IL).
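For illustration, the survey comparisons reduce to standard two-sample procedures; a brief sketch with hypothetical ratings (scipy standing in for SPSS) follows:

```python
import numpy as np
from scipy import stats

# Hypothetical five-point Likert ratings for one survey item; these are not
# the study data (the reported means were 3.4 individual vs. 4.0 team).
individual = np.array([3, 4, 3, 4, 3, 4, 3, 3, 4, 3])
team = np.array([4, 5, 4, 4, 3, 5, 4, 4, 4, 4])

# Ratings treated as interval-level data, so an independent samples t-test applies.
t_stat, p_val = stats.ttest_ind(individual, team)
print(f"t = {t_stat:.2f}, p = {p_val:.3f}")
```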
Results
All data were first screened for missing cases and extreme values; knowledge test scores were also explored to check the assumptions of the appropriate statistical tests. One student in the individual treatment was excluded from the analysis of treatment differences due to missing post-test data. An additional student from the individual treatment, identified as a bivariate outlier (i.e., on pre-/post-test scores), was excluded from this same analysis due to a combination of comparatively high discrepancy, moderate leverage, and high influence on the regression solution. Accordingly, the sample for this analysis included 55 students (individual, n = 25; team, n = 30). The full sample of 57 students was used for survey analysis.
Treatment differences
A visual examination of pre-/post-test score scatterplots suggested heterogeneity of regression slopes. As preliminary regression analysis confirmed a significant pre-test score by treatment interaction effect (p = 0.02), a modification of the Johnson–Neyman procedure was applied to determine the ranges of pre-test scores for which treatments differed significantly on post-test scores. The regression lines crossed at a pre-test score of 9.1 (of 12 items); the lower bound of the region was 6.9 and the upper bound was 11.6 (Figure 4). Accordingly, at pre-test scores below 6.9, students in the team treatment scored significantly higher on the post-test than did students in the individual treatment; conversely, at pre-test scores above 11.6, students in the individual treatment would be expected to out-perform students in the team treatment on the post-test. However, the only attainable score above 11.6 is a perfect pre-test score of 12, which renders this region of significance impractical for our consideration. Post-test scores did not differ significantly between treatments when pre-test scores fell in the range of 6.9–11.6.
Sixteen students (i.e., individual, n = 9; team, n = 7; 29% of analysis sample) scored ≤6/12 items (50%) at the time of pre-test. For this sub-set of the sample, mean pre-test scores did not differ significantly between individual and team treatments (p = 0.68; 42% and 40%, respectively; Figure 5); however, as the simultaneous regions of significance initially identified, students in the team treatment scored significantly higher (83%) than did students in the individual treatment (62%) at the time of post-test (p = 0.02; mean difference = 21% [95% confidence interval = 3–39%]).
Student perspectives
Overall, students reported being satisfied with the learning experience, and agreed that the simulation was useful for learning and engagement (Table 1). Mean ratings of additional system usability survey items are also displayed in Table 1 by treatment; no significant differences between treatments were observed for any item in this usability series.
Table 1.

| Survey itemᵃ | Individual (n = 27), mean | Team (n = 30), mean |
|---|---|---|
| I am satisfied with the NERVE clinical skills experience | 4.0 | 4.1 |
| The simulation scenarios held my attention | 4.1 | 4.1 |
| I feel as though I learned from the simulation experience | 4.0 | 4.3 |
| The tutorial adequately prepared me to use the simulation | 4.1 | 4.2 |
| The mouse-based interface was intuitive | 3.7 | 4.1 |
| The VP responded appropriately to my questions | 3.2 | 2.9 |
| The VP responded appropriately to physical exam directions | 3.8 | 4.0 |
| I am confident in my ability to correctly diagnose a patient with a CN III or CN VI injuryᵇ | 3.4 | 4.0 |

Notes:
ᵃFive-point Likert-type rating scale, where 1 = strongly disagree and 5 = strongly agree.
ᵇMean ratings of confidence differ significantly between treatments, p = 0.02.
Post-training, students were also asked to rate the extent to which they agreed to feeling confident in their ability to correctly diagnose a patient with a CN III or CN VI injury, on a five-point Likert-type scale where 1 = strongly disagree and 5 = strongly agree. Students in the team treatment reported feeling significantly more confident (mean = 4.0) than did students in the individual treatment (mean = 3.4; p = 0.02; Table 1).
Thematic content analysis of the qualitative data revealed common themes among responses to the open-ended survey items (Table 2). A response is reported in Table 2 when more than two students from the total sample offered a similar feedback statement.
Table 2.

| Response by itemᵃ | Total (n = 57), n (%) | Individual (n = 27), n (%) | Team (n = 30), n (%) |
|---|---|---|---|
| What did you like most about NERVE? | | | |
| Facilitation of learning | 11 (19%) | 4 (15%) | 7 (23%) |
| Communication with VP | 11 (19%) | 8 (30%) | 3 (10%) |
| Observation of abnormal findings | 9 (16%) | 1 (4%) | 8 (27%) |
| Ease of system use | 7 (12%) | 4 (15%) | 3 (10%) |
| Presentation of immediate feedback | 6 (10%) | 4 (15%) | 2 (7%) |
| Reinforcement of Year 1 content | 6 (10%) | 2 (7%) | 4 (13%) |
| Engagement – fun, interesting, novel | 5 (9%) | 2 (7%) | 3 (10%) |
| Interactive nature of system | 5 (9%) | 2 (7%) | 3 (10%) |
| Collaboration afforded by team setting | 4 (7%) | – | 4 (13%) |
| Opportunity to practice physical exam | 3 (5%) | 2 (7%) | 1 (3%) |
| Ability to work at own pace | 3 (5%) | 1 (4%) | 2 (7%) |
| What did you like least about NERVE? | | | |
| Communication with VP – limited response set | 42 (74%) | 16 (59%) | 26 (87%) |
| Communication with VP – typing feature | 3 (5%) | 2 (7%) | 1 (3%) |
| Lack of review of content prior to exercise | 3 (5%) | 2 (7%) | 1 (3%) |
| Tutorial – long, not interactive, limited examples | 3 (5%) | 2 (7%) | 1 (3%) |
| What changes would you like to see for the NERVE simulation exercises to make them more beneficial? | | | |
| Communication with VP – expand response set | 26 (46%) | 12 (44%) | 14 (47%) |
| Exam tools and functions – include more | 6 (10%) | 3 (11%) | 3 (10%) |
| Feedback – expand, embed, individualize | 6 (10%) | 4 (15%) | 2 (7%) |
| Application to CN injuries – include other nerves | 5 (9%) | 2 (7%) | 3 (10%) |
| What were the benefits of participating in the NERVE simulation exercises independently? | | | |
| Ability to work at own pace | – | 14 (52%) | – |
| Facilitation of independent/critical thinking | – | 6 (22%) | – |
| What were the disadvantages of participating in the NERVE simulation exercises independently?ᵇ | | | |
| Absence of others – to ask for help | – | 18 (67%) | – |
| Absence of others – to hear opinions | – | 3 (11%) | – |
| Absence of others – to be more efficient | – | 3 (11%) | – |
| What were the benefits of participating in the NERVE simulation exercises in a team? | | | |
| Presence of others – to ask for help | – | – | 16 (53%) |
| Presence of others – to share and discuss ideas | – | – | 15 (50%) |
| What were the disadvantages of participating in the NERVE simulation exercises in a team?ᵇ | | | |
| Longer duration of activity | – | – | 8 (27%) |
| Lack of independent thinking/self-direction | – | – | 6 (20%) |
| Sharing one computer | – | – | 5 (17%) |

Notes:
ᵃResponse reported when >2 students from total sample offered similar comment.
ᵇ10/27 (37%) students in the individual treatment indicated that they would like to experience NERVE as a member of a small team; 7/30 (23%) students in the team treatment indicated that they would like to experience NERVE as an individual; no significant difference in the distribution of preferences by treatment group, χ² = 1.28, p = 0.26.
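As a check, the χ² statistic in note ᵇ can be recomputed from the reported counts alone; the sketch below, assuming the statistic was calculated without continuity correction, reproduces the reported values to rounding:

```python
import numpy as np
from scipy.stats import chi2_contingency

# 2x2 contingency table built from note b: rows are treatment groups,
# columns are "prefers the other format" vs. "does not".
table = np.array([[10, 17],   # individual treatment: 10 of 27 prefer a team
                  [7, 23]])   # team treatment: 7 of 30 prefer working alone
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, p = {p:.2f}")  # chi2 ~ 1.27, p ~ 0.26, matching the report
```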
Discussion
Our investigation into the utility of VPs, as presented by the NERVE system, to facilitate student learning in the areas of neurological examination and CN injury diagnosis yielded important outcomes for us, both as medical educators and as simulation system developers. We acknowledge, however, that the conclusions drawn from this investigation may be limited in their generalizability due to study-specific factors (e.g., analysis of a small cohort of students at a specific point in Year 2 of their medical education in an integrated curriculum at a single institution; exposure to VPs specifically as they are embedded and configured in the NERVE system; presentation of tailored case-related feedback to supplement the learning experience afforded by NERVE; use of a knowledge test constructed to match the objectives of the P-2 module activity as the measure of learning achievement in this study).
Based on student performance on the knowledge post-test and feedback on the student perspectives survey, we deem NERVE to be a valuable educational tool, in general, and especially as it relates to affording opportunities for practicing physical examination procedures in a risk-free environment and for observing abnormal pathology not otherwise readily available to us through live patient encounters.
More specifically, the presence of a significant ATI effect allows us to begin defining best practices for the integration of VP simulators into the medical curriculum. Given the significant differences in knowledge post-test scores between treatments at specific aptitude ranges, the significant differences in post-simulation confidence ratings between treatments, and student feedback related to the benefits and disadvantages of each treatment, we are persuaded to implement future NERVE exercises with all medical students in small team settings. Optimal implementation, however, will require further consideration and study on our part. Of the 10 teams formed by random assignment, seven contained at least one student scoring above the pre-test threshold of 6.9 (i.e., comparatively high prior knowledge or “aptitude”) and at least one student scoring below the threshold (i.e., comparatively low prior knowledge or “aptitude”); the remaining three teams contained at least one student scoring above the threshold and at least one scoring near it, with a pre-test score of 7 or 8. Given the relatively heterogeneous composition of all teams, we are currently unable to determine the impact that such team heterogeneity may have on facilitating individual student success.
Vygotsky’s (1978) zone of proximal development concept interests us here from a theoretical standpoint – did lower aptitude students realize such gains in knowledge post-test scores because they worked in teams with at least one higher aptitude student? We hypothesize that this is the case; however, we also recognize the importance of determining if students in homogeneous teams (i.e., specifically all members with lower aptitudes) derive similar learning benefits. Accordingly, additional exploration in this area is warranted.
Depending upon the outcomes of such investigations, we are prepared to use the knowledge pre-test as a screening tool for aptitude prior to small team-based learning activities with NERVE in this context – heterogeneous teams would be purposely formed based on these aptitude scores. The next step for us as educators then, is to define and ensure educational benefits for higher aptitude students as well.
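Purely as an illustration of how such screening might feed team formation (we describe the intent above, not an algorithm), one simple strategy is to rank students by pre-test score and deal them across teams:

```python
# Hypothetical team-formation helper: sort students by pre-test score, then
# deal them out so each three-person team spans high, middle, and low scorers.
# This is one simple strategy, not the procedure used in the study.
def form_heterogeneous_teams(scores: dict[str, int], team_size: int = 3) -> list[list[str]]:
    ranked = sorted(scores, key=scores.get, reverse=True)
    n_teams = max(1, len(ranked) // team_size)
    return [ranked[i::n_teams][:team_size] for i in range(n_teams)]

pre_test = {"S1": 11, "S2": 5, "S3": 8, "S4": 12, "S5": 4, "S6": 7}
print(form_heterogeneous_teams(pre_test))  # [['S4', 'S3', 'S2'], ['S1', 'S6', 'S5']]
```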
As simulation system developers, we find the ratings and feedback reported on the student perspectives survey to be critical in guiding future improvements to NERVE and in identifying new domains for system design. Of particular importance are the comments related to the limited recognition and response set of the VP’s vocabulary, especially as the primary educational purposes of the NERVE system include the opportunity to practice history-taking and communication skills. Accordingly, developments are currently underway to improve upon the interviewing component. The updated version of NERVE will include an answer recognition system that presents the user with a set of “close matches” when the VP does not readily recognize the question or directive entered in the chat window. For example, a user may ask, “Is your vision blurry or double at night?” and the VP will respond with, “Did you mean (1) Do you have blurry vision?, (2) Do you see double?, or (3) How is your vision at night?” Users may then choose from among these “close matches” or may re-phrase their original entry.
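The matching method for the updated system is not detailed here; as a conceptual illustration only, even simple string similarity can generate candidate “close matches” from a question bank:

```python
import difflib

# Hypothetical question bank; the real NERVE system recognizes over 1000
# question phrasings, and its actual matching algorithm is not specified here.
KNOWN_QUESTIONS = [
    "Do you have blurry vision?",
    "Do you see double?",
    "How is your vision at night?",
    "When did your double vision start?",
]

def close_matches(entry: str, n: int = 3) -> list[str]:
    """Return up to n known phrasings similar to the user's typed entry."""
    return difflib.get_close_matches(entry, KNOWN_QUESTIONS, n=n, cutoff=0.3)

print(close_matches("Is your vision blurry or double at night?"))
```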
We are encouraged to continue to refine and expand the NERVE tool, and to explore additional educational aspects and contextual factors that may assist us in defining its optimal use in the medical curriculum. Moreover, we are prompted to carefully consider the social context of educational activities and the potential for ATI effects with future use of other simulation systems and novel pedagogical tools.
Practice points.
Simulation in medical education provides students with opportunities to practice interviews, examinations, and diagnosis formulation of rare and complex medical conditions without risk of patient harm.
VP simulators, in particular, hold great promise for use in the medical curriculum as they offer venues for training in the diagnosis of neurological abnormalities, such as CN disorders, that standardized patients are not able to feign.
At relatively lower aptitudes, medical students achieved significantly greater learning gains following team-based learning experiences with VPs, as compared to students who interacted with VPs as individuals.
Following simulation use, students who interacted with VPs in a team setting reported significantly greater confidence in their abilities to diagnose CN palsies than did students who interacted with VPs as individuals.
Careful consideration should be given to determining the optimal social context in which educational activities are arranged, and to recognizing the potential for ATI effects with use of simulation systems and other novel pedagogical tools.
Acknowledgments
The authors thank the following individuals for their important contributions to this study: Aaron Kotranza from the Department of Computer and Information Science and Engineering in the College of Engineering at the University of Florida for his system design work and continued development of the NERVE system; Michael Eakins from the Institute for Simulation and Training at the University of Central Florida, and Daniel Sagendorf from the Clinical Skills and Simulation Center in the College of Medicine at the University of Central Florida for their ongoing development and testing of the NERVE system, and assistance in execution of the study.
The authors are supported by a National Library of Medicine grant, R01 LM01813.
Footnotes
Declaration of interest: The authors report no conflicts of interest. The authors alone are responsible for the content and writing of this article.
Notes on contributors
TERESA R. JOHNSON, PhD, is an Assistant Professor of Medical Education in the College of Medicine, University of Central Florida, Orlando, FL, USA.
REBECCA LYONS, BS, is a Graduate Research Associate and Doctoral Candidate in the Industrial and Organizational Psychology Program, Institute for Simulation and Training, University of Central Florida, Orlando, FL, USA.
JOON HAO CHUAH, BS, is a Research Assistant with the Virtual Experiences Research Group in the Department of Computer and Information Science and Engineering, University of Florida, Gainesville, FL, USA.
REGIS KOPPER, PhD, is a Post-Doctoral Associate with the Virtual Experiences Research Group in the Department of Computer and Information Science and Engineering, University of Florida, Gainesville, FL, USA.
BENJAMIN C. LOK, PhD, is an Associate Professor and leads the Virtual Experiences Research Group in the Department of Computer and Information Science and Engineering, University of Florida, Gainesville, FL, USA.
Juan C. Cendan, MD, is the Assistant Dean of Simulation, Medical Director of the Clinical Skills and Simulation Center, and Associate Professor of Surgery in the College of Medicine, University of Central Florida, Orlando, FL, USA.
References
- Aiken LS, West SG. Multiple regression: Testing and interpreting interactions. Thousand Oaks, CA: Sage; 1991.
- Association of American Medical Colleges (AAMC). Effective use of educational technology in medical education: Colloquium on educational technology: Recommendations and guidelines for medical educators. 2007 [Accessed 4 January 2012]. Available from: https://www.aamc.org/members/gea/regions/sgea/
- Cook DA, Triola MM. Virtual patients: A critical literature review and proposed next steps. Med Educ. 2009;43(4):303–311. doi: 10.1111/j.1365-2923.2008.03286.x.
- Cronbach LJ, Snow RE. Aptitudes and instructional methods: A handbook for research on interactions. New York, NY: Irvington Publishers; 1981.
- D’Alonzo KT. The Johnson–Neyman procedure as an alternative to ANCOVA. West J Nurs Res. 2004;26(7):804–812. doi: 10.1177/0193945904266733.
- Gelb DJ, Gunderson CH, Henry KA, Kirshner HS, Józefowicz RF. The neurology clerkship core curriculum. Neurology. 2002;58(6):849–852. doi: 10.1212/wnl.58.6.849.
- Gokhale AA. Collaborative learning enhances critical thinking. J Technol Educ. 1995;7(1):22–30.
- Kotranza A, Johnsen K, Cendan J, Miller B, Lind DS, Lok B. Virtual multi-tools for hand and tool-based interaction with life-size virtual human agents. IEEE Symposium on 3D User Interfaces; 14–15 March; Lafayette, LA. 2009. pp. 23–30.
- Kraiger K. Transforming our models of learning and development: Web-based instruction as an enabler of third-generation instruction. Ind Organ Psychol. 2008;1(4):454–467.
- Kutschke PJ. Taking a history of the patient with diplopia. Insight. 1996;21(3):92–95. doi: 10.1016/s1060-135x(96)90047-0.
- Larsen PD, Stensaas SS. Movies drawn from the NeuroLogic Exam website. 2012. (Used with permission from Paul D. Larsen, M.D., University of Nebraska Medical Center, and Suzanne S. Stensaas, Ph.D., University of Utah School of Medicine; the movies are licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 2.5 License.) Available from: http://library.med.utah.edu/neurologicexam/html/introduction.html
- Liaison Committee on Medical Education (LCME). Functions and structure of a medical school: Standards for accreditation of medical education programs leading to the M.D. degree. 2011 [Accessed 4 January 2012]. Available from: http://www.lcme.org/functions2011may.pdf
- Moore F, Chalk C. The essential neurologic examination: What should medical students be taught? Neurology. 2009;72(23):2020–2023. doi: 10.1212/WNL.0b013e3181a92be6.
- Pedhazur EJ, Schmelkin LP. Measurement, design, and analysis: An integrated approach. Hillsdale, NJ: Lawrence Erlbaum Associates; 1991.
- Potthoff RF. On the Johnson–Neyman technique and some extensions thereof. Psychometrika. 1964;29(3):241–256.
- Schon F, Hart P, Fernandez C. Is clinical neurology really so difficult? J Neurol Neurosurg Psychiatry. 2002;72(5):557–559. doi: 10.1136/jnnp.72.5.557.
- Vygotsky LS. Interaction between learning and development. In: Cole M, John-Steiner V, Scribner S, Souberman E, editors. Mind in society: The development of higher psychological processes. Cambridge, MA: Harvard University Press; 1978. pp. 79–91.