Abstract
Background
Virtual patients are interactive digital simulations of clinical scenarios for the purpose of health professions education. There is currently no collated evidence on the effectiveness of this form of education.
Objective
The goal of this study was to evaluate the effectiveness of virtual patients in health professions education compared with traditional education, of virtual patients blended with traditional education, of virtual patients compared with other types of digital education, and of different design variants of virtual patients. The outcomes of interest were knowledge, skills, attitudes, and satisfaction.
Methods
We performed a systematic review on the effectiveness of virtual patient simulations in pre- and postregistration health professions education following Cochrane methodology. We searched 7 databases from the year 1990 up to September 2018. No language restrictions were applied. We included randomized controlled trials and cluster randomized trials. Working in pairs, we independently selected studies, extracted data, and assessed risk of bias, and then compared the extracted information. We contacted study authors for additional information if necessary. All pooled analyses were based on random-effects models.
Results
A total of 51 trials involving 4696 participants met our inclusion criteria. Of these, 25 studies compared virtual patients with traditional education, 11 investigated virtual patients as blended learning, 5 compared virtual patients with different forms of digital education, and 10 compared different design variants. The pooled analysis of studies comparing the effect of virtual patients to traditional education showed similar results for knowledge (standardized mean difference [SMD]=0.11, 95% CI −0.17 to 0.39, I²=74%, n=927) and favored virtual patients for skills (SMD=0.90, 95% CI 0.49 to 1.32, I²=88%, n=897). Studies measuring attitudes and satisfaction predominantly used surveys with item-by-item comparison. Trials comparing virtual patients with different forms of digital education and design variants were too few to support clear recommendations. Several methodological limitations in the included studies and heterogeneity contributed to a generally low quality of evidence.
Conclusions
Low to modest and mixed evidence suggests that when compared with traditional education, virtual patients can more effectively improve skills, and at least as effectively improve knowledge. The skills that improved were clinical reasoning, procedural skills, and a mix of procedural and team skills. We found evidence of effectiveness in both high-income and low- and middle-income countries, demonstrating the global applicability of virtual patients. Further research should explore the utility of different design variants of virtual patients.
Keywords: computer simulation, professional education, computer-assisted instruction, systematic review, meta-analysis
Introduction
Background
Health care education is confronted with many global challenges. Shorter hospital stays, specialization of care, higher patient safety measures, and shortage of clinical teachers all diminish the traditional opportunities for the training of health professions through direct patient contact [1,2]. Early health professions education is often dominated by theoretical presentations with insufficient connection to clinical practice [3]. The need to increase numbers and quality of the health workforce is especially visible in low-and-middle-income countries, where the need to scale up high-quality health education and introduce educational innovations is most pressing [4]. Therefore, the global medical education community is perpetually searching for methods that can be applied to improve the relevance, increase the spread, and accelerate the educational process for health professions [5].
Digital education (often referred to as e-learning) is “the act of teaching and learning by means of digital technologies” [6]. It encompasses a multitude of educational concepts, approaches, methods, and technologies. Digital health education comprises, for example, offline learning, mobile learning, serious games, or virtual reality environments. We have conducted this systematic review as part of a review series on digital health education [6-19] and focused it on the simulation modality called virtual patients.
Virtual patients are defined as interactive computer simulations of real-life clinical scenarios for the purpose of health professions training, education, or assessment [20]. This broad definition encompasses a variety of systems that use different technologies and address various learning needs [21]. The learner is cast into the role of a health care provider who makes decisions about the type and order of clinical information acquired, differential diagnosis, and management and follow-up of the patient. Virtual patients are hypothesized to primarily address learning needs in clinical reasoning [22,23]. However, an influence of the use of virtual patients on other educational outcomes has been reported in previous literature [21,24].
The educational use of virtual patients may be understood through experiential learning theory [25,26]. Following this theoretical model of action and reflection, virtual patients expose learners to simulated clinical experiences, providing mechanisms for information gathering and clinical decision making in a safe environment [27]. Exposing the learner to many simulated clinical scenarios supports learning diagnostic processes [28] while acquainting learners with a standardized set of clinical conditions common in the population, but rare or nonaccessible in highly specialized teaching hospitals [29].
Some concerns have been raised about the educational use of virtual patients. Virtual patients should not replace but complement contact with real patients [27]. There are concerns that the use of virtual patients may result in less empathic learners [30]. The use of unfamiliar technology in virtual patient education can represent a barrier to learning, even for younger generations [31,32]. Virtual patients may also prove ineffective when teaching is driven by technological objectives rather than by learning needs [33].
This virtual patient simulation review has been preceded by several narrative reviews [22,34-37] and 2 systematic reviews with meta-analyses [38,39]. Our preliminary literature analysis showed that the number of MEDLINE-indexed studies including the term virtual patient or virtual patients has more than doubled since the searches of the previous systematic reviews (February 2009 [38] and July 2010 [39]). Thus, our review updates the evidence base with studies not included in previous analyses.
Objectives
The objective of this review was to evaluate the effectiveness of virtual patient simulation for delivering pre- and postregistration health care professions education using the following comparisons:
Virtual patient versus traditional education
Virtual patient blended learning versus traditional education
Virtual patient versus other types of digital education
Virtual patient design comparison
By traditional education, we mean all nondigital educational methods. This includes lectures, reading exercises, group discussion in the classroom, and nondigital simulation such as standardized patients or mannequin-based training. Virtual patient blended learning is the addition of virtual patients as a supplement to traditional education when the control intervention uses nondigital education methods only. Other types of digital education may include interventions such as video recordings, Web-based tutorials, or virtual classrooms.
We assessed the impact of virtual patient interventions on learners’ knowledge, skills, attitude, and satisfaction. Our secondary objective was to assess the cost-effectiveness, patient outcomes, and adverse effects of these interventions.
Methods
Protocol and Registration
While conducting the review, we adhered to the Cochrane methodology [40], followed a published protocol [41], and presented results following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines [42].
Eligibility Criteria
We included randomized controlled trials (RCTs) and cluster RCTs (cRCTs). We excluded crossover trials because of the high likelihood of carryover effect.
Participants in the included studies had to be enrolled in a pre- or postregistration health-related education or training program (see glossary in Multimedia Appendix 1). This included students from disciplines such as medicine, dentistry, nursing and midwifery, medical diagnostic and treatment technology, physiotherapy and rehabilitation, and pharmacy.
This review focused on screen-based virtual patient simulations that form a computerized, dynamically unfolding representation of patient cases. A virtual patient simulation is introduced by a case description and might contain answers given by the patient, clinical data (eg, laboratory results, medical images), and descriptions of patients’ signs and symptoms. Only representations of the patient as a whole were of interest, rather than studies that focused on single parts of the body. As a matter of policy in the Digital Health Education Collaboration [6], and to avoid duplication of reviews, we deliberately excluded virtual patients in 3-dimensional (3D) virtual learning environments from this study. We judged that the higher level of immersion of learners in 3D virtual environments, together with potential technical challenges (eg, difficulties in navigating such environments, lags because of increased computational time or limited internet bandwidth), was likely to influence the educational outcomes and therefore merited a separate analysis, already covered by the virtual reality review [13] of this Digital Health Education Collaboration series. We also excluded virtual patient interventions that require nonstandard equipment (eg, haptic devices, mannequins) and virtual patients that are human controlled (eg, simulated email correspondence or chat room conversations). We excluded studies in which virtual patients were just a small part of the intervention and those in which the influence of virtual patients was not evaluated separately.
Furthermore, 2-arm RCTs comparing virtual patients with a control group not involved in any type of subject-related learning activity were not considered eligible as previous meta-analyses have already shown a large positive effect when virtual patients were compared with no intervention [38].
We introduced the comparison of virtual patient blended learning with traditional education in response to the ongoing discussion in the community on whether traditional learning activities need to be eliminated to make space for virtual patients. For instance, Berman et al [29] noticed that students’ perceived learning effects and satisfaction with integration were lower at universities that increased the workload of students by adding virtual patients without releasing time resources in the curriculum. As most health professions education is conducted on campus, an integrated effect of virtual patients is possible. Blending virtual patients with traditional education is challenging and qualitatively different from a comparison against a nonintervention control group.
Eligible primary outcomes were students’ (1) knowledge, (2) skills, (3) attitudes, and (4) satisfaction—together representing clinical competencies measured post intervention with validated or nonvalidated instruments. Secondary outcomes were (1) economic cost and cost-effectiveness, (2) patient outcomes, and (3) observed adverse effects.
Search Methods for Identification of Studies
We searched the following 7 databases: MEDLINE (via Ovid), EMBASE (via Elsevier), The Cochrane Library (via Wiley), PsycINFO (via Ovid), Educational Resource Information Centre (ERIC; via Ovid), Cumulative Index to Nursing and Allied Health Literature (CINAHL; via EBSCO), and Web of Science Core Collection (via Thomson Reuters). We adapted the MEDLINE strategy and keywords presented in Multimedia Appendix 2 for use with each of the databases above. We searched databases from the year 1990 to September 20, 2018 to highlight recent developments and did not apply language restrictions. For all included studies, we searched reference lists and conducted author and citation searches. We searched lists of references from other identified relevant systematic reviews while running our electronic searches.
Data Collection and Analysis
Data Selection, Extraction, and Management
The search results were combined in a single EndNote library (version X7; Thomson Reuters) [43]. Overall, 2 authors independently screened titles and abstracts to identify potentially eligible studies. In the next phase, full-text versions of these papers were retrieved and 2 review authors independently assessed these papers against eligibility criteria. We piloted data extraction to maximize consistency in the information extracted. Disagreements were resolved through discussion. A third review author was consulted to arbitrate when differences in opinion arose. All relevant data were extracted using a structured form in Microsoft Excel. We contacted study authors for crucial missing information, particularly if required to judge inclusion criteria and study outcomes.
Data Items
Information was extracted from each included study on (1) the characteristics of study participants (field of study; stage of education: pre/postregistered; year of study; and country where the study was conducted and its World Bank income category: high-income/low-and-middle-income country), (2) the type of outcome measure (type of tool used to measure outcome and information on whether the tool was validated), (3) the type of virtual patient intervention (topic and language of presented virtual patient simulations; information on whether the language of the virtual patient was native to the majority of participants; source of virtual patient simulations: internal/external; whether the study used individual or group assignments and, in the case of group assignments, the number of students in a group; whether access to the virtual patient simulation was from home or in a computer laboratory; number of virtual patient cases presented; time when the virtual patients were available; and duration of use of virtual patients), and (4) the type of virtual patient system (name of the system; navigation scheme: linear, branched, and free access; control mechanism: menu-based, keyboard, or speech recognition; feedback delivery and timing; and whether video clips were included in virtual patient cases). A glossary of the terms used in the review may be found in Multimedia Appendix 1.
Measures of Treatment Effect
We reported the treatment effects for continuous outcomes as mean values and SDs post intervention in each intervention group, along with the number of participants and P values. As the studies presented data using different tools, mean differences were recalculated into standardized mean differences (SMDs). We interpreted effect sizes as small (SMD=0.2), moderate (SMD=0.5), or large (SMD=0.8) [40]. If studies had multiple arms and no clear main comparison, we compared the virtual patient intervention arm with the most common control arm, excluding the nonintervention and mixed-intervention controls. If that was impossible to decide, we selected the least active control arm. If multiple outcomes in the same category (knowledge, skills, attitudes, and satisfaction) were reported, we selected the primary measure, and if that was impossible, we calculated the mean value of all measures. For papers that reported median and range for the outcomes, we converted these to mean and SD using methods described by Wan et al [44]. If a study did not report SD but provided CIs, we estimated SD from those using a method described in previous literature [40]. The formulas involved are sketched below.
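For reference, the conversions named above follow textbook formulas; the sketch below restates them from the Cochrane Handbook [40] and Wan et al [44], with subscripts 1 and 2 denoting the intervention and control groups (the notation is ours, not that of the included studies):

\[
\mathrm{SMD} = \frac{\bar{x}_1 - \bar{x}_2}{s_p}, \qquad
s_p = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}}
\]

For a study reporting median \(m\), range \([a, b]\), and sample size \(n\), the Wan et al estimators are

\[
\bar{x} \approx \frac{a + 2m + b}{4}, \qquad
s \approx \frac{b - a}{2\,\Phi^{-1}\!\left(\frac{n - 0.375}{n + 0.25}\right)},
\]

and for a mean reported with a 95% CI, the large-sample Cochrane approximation is

\[
s \approx \sqrt{n}\;\frac{\mathrm{CI}_{\mathrm{upper}} - \mathrm{CI}_{\mathrm{lower}}}{3.92}.
\]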
Data Synthesis and Analysis
Owing to the significant differences between studies, we employed a random-effects model in the meta-analysis using Review Manager (version 5.3; The Nordic Cochrane Centre) [45]; a minimal sketch of this pooling step is given after this paragraph. We displayed the results of the meta-analysis in forest plots and evaluated heterogeneity numerically using the I² statistic. For comparisons with more than 10 outcomes in the meta-analysis, we attempted a subgroup analysis. As the 15 subgroup analyses planned in the protocol [41] did not explain the heterogeneity, we visualized the outcomes using albatross plots [46]. These plots were implemented using a script created for the purpose of the study by one of the review authors (AK) in the statistical package R (version 3.4.3; R Foundation for Statistical Computing) [47]. This explorative approach resulted in a new subgroup analysis in which we divided the control interventions into active (group discussion, mannequin-based simulation) and passive (lectures, reading assignments). Findings unsuitable for inclusion in a meta-analysis (eg, comparison of individual items in surveys) were presented using a narrative synthesis.
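The review itself pooled effects in Review Manager; as a minimal sketch of the equivalent computation, the R code below fits a DerSimonian-Laird random-effects model with the metafor package. The study names and numbers are hypothetical placeholders, not data from the review.

```r
# Minimal sketch of the random-effects pooling step (hypothetical data)
library(metafor)

dat <- data.frame(
  study = c("Study A", "Study B", "Study C"),
  m1i = c(75, 68, 80), sd1i = c(10, 12, 9),  n1i = c(40, 35, 50),  # virtual patients
  m2i = c(70, 69, 72), sd2i = c(11, 13, 10), n2i = c(38, 36, 48)   # traditional education
)

# Bias-corrected SMD (Hedges g) and its sampling variance per study
dat <- escalc(measure = "SMD",
              m1i = m1i, sd1i = sd1i, n1i = n1i,
              m2i = m2i, sd2i = sd2i, n2i = n2i, data = dat)

# DerSimonian-Laird random-effects model, the estimator used by RevMan 5
res <- rma(yi, vi, data = dat, method = "DL", slab = dat$study)
summary(res)  # pooled SMD with 95% CI, tau^2, and I^2
forest(res)   # forest plot, analogous to Figures 2-5
```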
Assessment of Risk of Bias
Two authors independently assessed the risk of bias using the Cochrane tool [40]. We considered the following domains: random sequence generation, allocation sequence concealment, blinding of participants or personnel, blinding of outcome assessment, completeness of outcome data, selective outcome reporting, and other sources of bias (eg, differences in baseline evaluation, volunteer bias, commercial grants). For cRCTs, we also assessed the risk of the following additional biases: recruitment bias, baseline imbalance, loss of clusters, incorrect analysis, and comparability with individually randomized trials. Publication bias was difficult to investigate formally because the high levels of heterogeneity limit the interpretability of funnel plots.
Summary of Findings Tables
We prepared summary of findings tables to present results of the meta-analysis [40]. We presented the results for major comparisons of the review and for each of the major primary outcomes. We considered the Grading of Recommendations Assessment, Development and Evaluation (GRADE) criteria to assess the quality of the evidence and downgraded the quality where appropriate [40].
Results
Included Studies
Our searches yielded a total of 44,054 citations, and 51 studies with 4696 participants were included (Figure 1). In addition, 2 reports described results of studies already included in the review [48,49].
Types of Studies
All included studies were published in peer-reviewed journals. All included studies had an RCT design, with the exception of 3 cRCTs [50-52].
Types of Comparisons
A total of 25 studies compared virtual patients with traditional education [53-77], 11 compared a blend of virtual patients and traditional education with traditional education [50,52,78-86], 5 studies compared virtual patients with different forms of digital health education [51,87-90], and 10 studies compared different types of virtual patient interventions [91-100].
The traditional education control involved a reading assignment in 6 studies [59,60,62,67-69]; a lecture [63,66,72,77], a group assignment [53,65,71,73], or mannequin-based training [58,64,70,76] in 4 studies each; and standardized patients [74] or ward-based education [75] in 1 study each. In 5 studies, the control was a mix of different forms of traditional education (eg, lecture, small group assignment, and mannequin-based training) [54-57,61].
The digital education control was a Web-based tutorial or course in 2 studies [88,90], a video recording in 2 studies [87,89], and, in 1 study, a mix of traditional lectures and Web materials including video clips [51].
Studies comparing different types of virtual patients contrasted narrative with problem-solving structures of virtual patients [91]; virtual patients with and without usability enhancements [94]; different forms of feedback in virtual patients [95,96]; worked versus unworked versions of virtual patients [97]; self-determined versus mandatory access to virtual patients [98]; virtual patient collections in which all cases were released at once versus automatically released and spaced in time [99]; virtual patient solving versus virtual patient construction exercises [100]; linear versus branched design of virtual patients [92]; and, finally, the addition of representational scaffolding (see glossary in Multimedia Appendix 1) to virtual patients [93].
Furthermore, 41 studies had 2 study arms (see the first table in Multimedia Appendix 3), 7 studies had 3 arms [62,88,91,95-98], and 3 studies had 4 arms [63,65,67].
Types of Participants
In total, 41 studies involved preregistered professionals (see the first table in Multimedia Appendix 3), with 8 studies focused on postregistered participants [68,69,74,76,78,90,94,97]; 2 studies involved both pre- and postregistered participants [59,87].
In 37 out of 51 studies, participants were from the field of medicine. The studies from fields other than medicine were as follows: 6 studies in nursing [58,64,70,73,78,80]; 2 in pharmacy [53,92]; and 1 each in physical therapy [61], osteopathic medicine [84], and dentistry [83]. In addition, 3 studies involved interprofessional education [74,76,90].
A total of 44 out of 51 studies were conducted in high-income countries; 19 were from the United States (see the first table in Multimedia Appendix 3); 5 from Germany [65,82,93,98,99]; 3 each from Australia [52,70,91] and Sweden [83,85,87]; 2 each from Canada [59,60], the Netherlands [86,88], and the United Kingdom [66,77]; and 1 study each was conducted in Belgium and Switzerland [92], Denmark [100], France [54], Hong Kong [51], Japan [67], Poland [50], Singapore [64], and Slovenia [71]. Of the 7 studies conducted in low-and-middle-income countries, 3 were from China [63,73,80], 2 from Colombia [55,56], and 1 each from the Republic of South Africa [94] and Iran [75].
In Multimedia Appendix 4 we present the technical characteristics of virtual patient systems, topics of educational content presented, applied instructional design methods, setting of use, information on the validity of outcome measurement, and applied educational theories in the included studies. Multimedia Appendix 5 summarizes the reasons for excluding studies following a review of their full-text versions.
Effects of Interventions
Knowledge
In total, 33 studies assessed outcomes of knowledge. In all studies, knowledge was measured using paper-based tests (see the second table in Multimedia Appendix 3). In 19 studies, the test consisted of multiple-choice questions (MCQs). Other knowledge test designs contained multiple-response questions [100], true/false questions [50], and key feature format questions [82]. In 4 studies, the participants had to formulate free-text answers [75,85,94,99]. In 4 studies [63,66,93,97], the knowledge tests comprised a mix of different formats. Li et al [63] used a combination of multiple-choice and short answer questions; Miedzybrodzka et al [66] used MCQs and modified essays; Harris et al [97] applied MCQs with confidence levels combined with script concordance testing questions; and Braun et al [93] used a test consisting of multiple-choice items, key feature problems, and problem-solving tasks. Secomb et al [70] measured cognitive growth using a survey requiring selection of the most significant items regarding learning environment preferences. In 3 studies [62,73,92], the nature of the knowledge test was unclear. In the case of MCQs in which the nature of items was unclear or mixed, we classified the outcome as knowledge instead of, for example, clinical reasoning skills, but the borderline between those was sometimes blurred. Meta-knowledge (eg, knowledge about the clinical reasoning process itself) was classified as knowledge outcomes following the framework by Kraiger [101].
The effects of interventions on knowledge outcomes are summarized in the second table in Multimedia Appendix 3.
Virtual Patient Versus Traditional Education
In 4 [60,63,67,72] of 18 studies comparing virtual patients with traditional education, the intervention resulted in more positive knowledge outcomes. In 2, the control group attended a lecture [63,72], whereas in the remaining 2 studies, students participated in a reading exercise [60,67]. In 1 study [53], the control arm (problem-based learning [PBL] small group discussion) had significantly better results than the virtual patient intervention (SMD=−0.65, 95% CI −1.02 to −0.28, P=.001). In the remaining 13 studies, the difference did not reach statistical significance (see the second table in Multimedia Appendix 3).
We excluded 2 studies [62,65] from our meta-analysis because of missing crucial outcome data. Jeimy et al [59] presented the outcomes of a knowledge test compared item-by-item, and the study was therefore excluded from the meta-analysis. We also excluded 1 study [72] owing to its outlier value of SMD=12.5, most likely the result of a reporting error, and excluded another study [70] as we regarded meta-knowledge as very different from the other types of core knowledge outcomes.
The pooled effect for knowledge outcomes (SMD=0.11, 95% CI −0.17 to 0.39, I²=74%, n=927; Figure 2) suggests that virtual patient interventions are as effective as traditional education.
Virtual Patient Blended Learning Versus Traditional Education
In 4 [50,52,80,82] of the 5 studies comparing virtual patients as a supplement to traditional education in the domain of knowledge, the group having the additional resource scored better than the control group. Only in 1 study [85] did the addition of virtual patients not lead to a statistically significant difference in knowledge outcomes (P=.11).
The pooled effect for knowledge outcomes (SMD=0.73, 95% CI 0.24 to 1.22, I²=81%, n=439; Figure 3) suggests a moderate effect favoring the blend of virtual patients with traditional education over traditional education alone.
Virtual Patient Versus Other Types of Digital Education
A total of 2 studies compared the difference in knowledge outcomes between virtual patients and digital health education interventions. Courteille et al [87] compared virtual patients with a video-recorded lecture, whereas Trudeau et al [90] compared them with a static Web course. Neither of these comparisons showed significant differences in knowledge outcomes.
Virtual Patient Design Comparison
In total, 8 studies focused on detecting differences between variants of virtual patient design in the domain of knowledge. Only in 1 study, by Friedman et al [96], were the differences statistically significant. In this study, the pedagogic design of virtual patients was better than the problem-solving and high-fidelity designs (P<.01). Comparing linear and branched virtual patients [92], scaffolded versus nonscaffolded virtual patients [93], worked and unworked examples [97], virtual patients with and without usability extensions [94], self-determined versus mandatory integration [98], spaced versus nonspaced release of cases [99], and virtual patient solving versus virtual patient design exercises [100] resulted in no significant differences in knowledge outcomes.
Skills
A total of 28 studies assessed skills outcomes (see the third table in Multimedia Appendix 3). Skills were assessed by performance on a mannequin in 9 studies [50,54,64,68,69,73,80,82,88], by performance with a live standardized patient in 8 studies [57,67,78,81,89,91,95,100], and by performance on virtual patients [93] and real patients [83] in 1 study each. In 6 studies, outcomes were measured by a written assignment: describing photographed clinical cases [63] or radiographs [65], carrying out and structuring a mental state examination based on videotaped material [77], solving paper cases [74,86], or completing a modular paper-based test [75]. In 2 studies [55,56], outcomes were measured by a mix of paper cases and virtual patients. Kumta et al [51] combined computer-based assessment, objective structured clinical examination (OSCE), and clinical examination of patients in the ward into 1 score.
The effects of interventions on skills outcomes are summarized in the third table in Multimedia Appendix 3.
Virtual Patient Versus Traditional Education
In 9 of 14 studies comparing virtual patients with traditional education, the intervention resulted in better skills outcomes (see the third table in Multimedia Appendix 3). The virtual patient intervention showed larger effects than lectures [63,77], reading exercises [67-69], group discussions [73], and activities comprising traditional methods, including lectures or hands-on training with mannequins [54-56].
The skills that improved were clinical reasoning [55,56,63,77], procedural skills [54,67-69], and a mix of procedural and team skills [73].
We did not include in the meta-analysis 2 studies with incompletely reported data [65,74]. We also excluded skills outcomes from the study by Haerling [58], as these were available for a randomly selected subgroup only, and from Wang et al [76], as they were measured for teams of students only and not individually.
The pooled effect on skills outcomes was SMD=0.90 (95% CI 0.49 to 1.32, I²=88%, n=897; Figure 4). Overall, this suggests that virtual patients have moderate to large positive effects on the investigated types of skills in comparison with traditional education.
Virtual Patient Blended Learning Versus Traditional Education
In 3 [82,83,86] out of 7 studies, the groups using virtual patient blended learning scored better than the control group in the skills domain. Lehmann et al [82] demonstrated significantly improved procedural skills (P<.001), whereas Weverling et al [86] reported improved clinical reasoning skills (P<.001) and Schittek Janda et al [83] improved communication skills (P<.01). Furthermore, 2 studies [78,80] involving nursing students showed no significant difference. The study by Bryant et al [78] evaluated communication skills (P=.38), whereas Gu et al [80] measured procedural skills (P>.05). We excluded 3 studies [50,81,83] from the meta-analysis because of insufficient data provided in the report or item-by-item comparison of a skills checklist.
The pooled effect for skills outcomes (SMD=0.60, 95% CI −0.07 to 1.27, I²=83%, n=247; Figure 5) suggests that virtual patients blended with traditional education have moderate positive effects in comparison with traditional education alone.
Virtual Patient Versus Other Types of Digital Education
Of the 3 studies comparing skills outcomes between virtual patients and other types of digital education, only Kumta et al [51] showed a significant difference (P<.001). In this study, virtual patients were better than a range of different traditional teaching methods supplemented by Web content that included video clips, PowerPoint presentations, digital notes, and handouts. The target outcomes were clinical skills assessed by OSCE stations and examination of patients in the wards. In the study by Dankbaar et al [88], virtual patients were not significantly better than an electronic module alone in teaching procedural skills (P>.05). Finally, in the study by Foster et al [89], virtual patients showed no significant difference when compared with video recordings in teaching communication skills (P>.05).
Virtual Patient Design Comparison
Of the 4 studies that compared the influence of different virtual patient designs on skills outcomes, Foster et al [95] showed that virtual patients with empathic feedback were significantly better in training communication skills than virtual patients without feedback (P<.03). In the study by Bearman et al [91], narrative virtual patients were significantly better than problem-solving virtual patients in conveying communication skills (P=.03). In a study by Braun et al [93], the addition of representational scaffolding to a virtual patient intervention significantly improved diagnostic efficiency (P=.045). Finally, in the study by Tolsgaard et al [100], there was no significant difference in integrated clinical performance when students constructed or solved virtual patients (P=.54).
Attitudes
A total of 11 studies reported attitudinal outcomes (see the fourth table in Multimedia Appendix 3). The attitudes related to confidence, preparedness, comfort, self-efficacy, and perceived ability in topics such as history taking and clinical breast examination [79], diagnostic and management abilities [59], contrast reaction management and teamwork [76], ethical, legal, and communication issues [57,81], opioid therapy [90], cultural competence [84], procedural knowledge in pediatric basic life support [82], performing pharmacy triage [92], caring for patients with distress disorders [74], and anxiety [77].
The effects of interventions on attitudinal outcomes are presented in the fourth table in Multimedia Appendix 3. Furthermore, 3 studies presented pooled scores on students’ self-assessment. In the study by Lehmann et al [82], students felt more confident in their knowledge and skills in performing pediatric basic life support with additional access to virtual patients that supplemented their traditional course (P<.001). There were no significant differences in the remaining 2 studies, which focused on communication-related self-efficacy [81] and attitudes related to opioid therapy [90].
In the study by Williams et al [77], more items related to self-assessment of competences (in dealing with ethical aspects and managing anxiety) were scored lower in the virtual patient group than in the traditional education groups. There were no differences in analyzed items related to attitudes in 4 studies [57,59,74,79]. In the study by Smith et al [84], the results regarding attitudes toward clinical cultural competence were presented separately for bilingual and English-speaking students, which makes them difficult to aggregate without knowing the number of bilingual students in each study group. However, the authors’ descriptive conclusion was that general cultural competence measures were the same for the virtual patient and control groups. In 2 studies [76,92], the results were compared item-by-item and only within the groups (pre/posttest), not between the study groups.
Satisfaction
In total, 17 studies measured satisfaction with an intervention (see the fifth table in Multimedia Appendix 3). All outcomes in this category were measured by satisfaction questionnaires. Different facets of satisfaction were measured, which we classified into the following 5 dimensions: general impression (global score or willingness to recommend), comfort in use (learning style preference, engagement or motivation, positive climate or safety, and enjoyment or pleasure), integration in curriculum (time constraints, relevance, and level of difficulty), academic factors (feedback quality, structure, and clarity), and satisfaction with technical features (usability and information technology readiness).
In 4 out of 17 studies evaluating the satisfaction of students receiving a virtual patient intervention, the result was presented as 1 aggregated score of several items. Furthermore, 3 of those studies compared different design variants of virtual patients. In the study by Friedman et al [96], the pedagogic format (menus, guided) resulted in higher satisfaction scores than the high-fidelity (free text, unguided) format (P<.01). There was no statistically significant difference between virtual patients with and without usability enhancements [94] (P=.13) or between solving and constructing virtual patients [100] (P=.46). One study [58] presented a comparison of virtual patients with mannequin-based training using a single score for student satisfaction and self-confidence in learning, showing no difference between the simulation modalities (P=.11).
In the remaining 13 out of 17 studies, the survey responses were compared item-by-item. In 4 studies, the majority of items indicated a preference for the virtual patient intervention in comparison with a lecture [63], reading assignment [67], video-based learning [89], and Web tutorial [88]. In 7 studies, most items did not differ between the groups [53,59,62,65,66,74,92]. In 1 study [76], most items (5 out of 6) in a satisfaction survey were rated better in the mannequin-based training group than in the virtual patient group.
Secondary Outcomes
One study had cost-effectiveness as an outcome [58]. In 9 studies, statements were made regarding the cost of the intervention, either monetary or in development time [53,60,62,64-66,79,95]. Only 1 study provided numerical data on both types of intervention [95]. The comparison was qualitative in 3 studies [64,65,78]. In 5 studies, estimations of costs were made for the virtual patient group without contrasting them with the cost of the control intervention [53,60,62,66,79]. None of the included studies had patient outcomes or adverse effects as the main outcome measure. Even though none of the studies reported direct patient outcomes, in 2 studies, the participants were observed by raters while performing tasks on real patients as an outcome assessment [51,83]. In the study by Kumta et al [51], the score was included in a more complex assessment (including MCQ tests and OSCE examination), and the patient-related outcome was not explicitly reported. In the study by Schittek Janda et al [83], first year students of dentistry were asked to perform history taking with real patients and were rated by the instructor. The patients’ perspective was, however, not considered. Even though none of the studies had adverse effects as a major outcome, 6 studies [53,55,67,70,84,88] reported findings related to unexpected effects of the intervention.
Cost
Haerling [58] showed a better cost-utility ratio of US $1.08 for virtual patients versus US $3.62 for mannequin-based training. Foster et al [95] compared the cost of human-provided (Mechanical Turk) feedback with backstory video feedback; the cost of human answers was US $0.05 per question assisted, whereas the videos required 4 hours of development time and the license cost of a video game (Sims 3 by Electronic Arts). This does not provide a direct answer to the question of which method was more cost-efficient, as that depends on the number of participants and time of use. It is also important to note that the human-generated feedback in virtual patients showed positive effects on the communication skills outcomes, whereas the backstory video did not. Bryant et al [78] estimated, without providing numerical evidence, that the cost of a virtual patient was similar to that of a course text that was eliminated by the new intervention. Liaw et al [64], without providing concrete numbers, noticed that despite “initial startup costs for developing the virtual patient simulation, its implementation was less resource intensive than the mannequin-based simulation.” The cost savings were attributed to reduced instructor time, use of expensive equipment, and simulation facilities. Maleck et al [65] saw cost savings in the virtual patient group because radiograph printouts were spared. The cost of the virtual patient intervention was expressed in hours of work; in 2 cases, the cost was 12 to 15 hours per virtual patient [53,60]; in 1 case, it was 15 to 30 hours [62], and in another, 100 hours [66]. The cost expressed in amounts of money was estimated at US $500 for content development and technical implementation [62] and US $4800 for a total clerkship restructuring, including adding virtual patients [60]. It is worth noting that in both cases the virtual patients were developed by students. Deladisma et al [79] used in their study a virtual patient system that involved a speech recognition engine, tracked the user’s body movements, and projected a life-sized avatar on the wall. The cost of the technology used in the pilot study (including 2 networked personal computers, 1 data projector, and 2 Web cameras) was estimated in 2006 to be less than US $7000 [102].
Patient Outcomes
In the study by Schittek Janda et al [83], an experienced clinician rated the professional behavior (language precision, order of questions, and empathy) of first year students of dentistry toward real patients as significantly better (P<.01) in the group having access to a supplementary virtual patient case than in the group that underwent standard instruction.
Adverse Effects
Dankbaar et al [88] hypothesize, based on their study results, that high-fidelity virtual patients may increase motivation but at the same time be more distracting for novice students and thereby impede learning. The authors of 2 studies [70,84] observe that the language of virtual patients might be a significant factor, with greater effects in nonnative English-speaking and bilingual learners than in native English speakers. In the study by Qayumi et al [67], it is observed that lower-achieving students benefit more from virtual patients than high performers. In the study by Botezatu et al [55], students who knew about the possibility of being assessed by virtual patients opposed being tested with paper cases. In the study by Al-Dahir et al [53], it is observed that analysis of individual learner traces in the virtual patient system negates benefits of social learning.
Subgroup Analysis
None of the initially planned subgroup analyses explained the heterogeneity of the results.
Among the many analyzed aspects, we looked into differences in the effectiveness of learning with virtual patients between the health professions disciplines. Most of the included studies involved students of medicine as participants. For instance, when comparing virtual patients with traditional education in the domain of skills, of the 12 outcomes included for subgroup analyses, only 2 were from health professions disciplines other than medicine (ie, studies from nursing [64,73]). When analyzing knowledge outcomes, 4 of the 12 included studies were nonmedical but represented 3 very different disciplines: nursing [58,73], pharmacy [53], and physiotherapy [61]. The conducted subgroup analyses showed no significant differences between the subgroups and high heterogeneity.
While analyzing aspects of instructional design implemented in the virtual patient scenarios, we located a balanced number of studies implementing the narrative and problem-solving designs [91] in the domain of knowledge outcomes (6 studies in each branch). Yet, the pooled results showed no difference (narrative: SMD=0.12, 95% CI −0.41 to 0.64, I²=85%, n=525 versus problem solving: SMD=0.11, 95% CI −0.17 to 0.38, I²=51%, n=520; subgroup differences P=.97). Interestingly, in the domain of skills outcomes, all studies had either a problem-solving or unclear design (in 2 cases). This might indicate that narrative (linear, branched) virtual patients are seen as better suited for knowledge outcomes than for skills.
Finally, we were unable to discern any pattern in effectiveness when analyzing the timing of feedback (during vs post activity). However, in almost half of the studies, we were unable to decide, based on the description of the intervention, which model of feedback was implemented or whether the study used a mixed (during/post activity) mode of providing feedback.
To further explore the reasons for heterogeneity, we visualized the outcomes in the form of albatross plots of the knowledge and skills outcomes for the comparisons of virtual patients with traditional education. Figure 6 presents an albatross plot for knowledge and Figure 7 for skills outcomes. Comparisons of virtual patients with passive forms of learning (reading exercises and lectures) tended to display large positive effect sizes, whereas comparisons with active learning (group discussion or mannequin-based learning) showed small or even negative effects (left-hand side in Figures 6 and 7 and Multimedia Appendix 6). A minimal sketch of how such a plot is constructed follows.
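The actual plotting script was written by one of the authors and is not reproduced here; the R sketch below is our own minimal approximation of the albatross plot idea, assuming the large-sample relation z ≈ SMD·√n/2 for two equal-sized groups. The study points are hypothetical placeholders, not data from the review.

```r
# Minimal albatross-plot sketch: two-sided P value against total sample
# size, with contours for small, moderate, and large SMDs.
# Assumes z ~= SMD * sqrt(n) / 2 (two equal groups, large samples).

contour_n <- function(p, smd) (2 * qnorm(1 - p / 2) / smd)^2

p_grid <- 10^seq(-4, -0.01, length.out = 200)

plot(NULL, xlim = c(1e-4, 1), ylim = c(10, 2000), log = "xy",
     xlab = "Two-sided P value", ylab = "Total sample size n",
     main = "Albatross plot sketch (one half; positive effects)")
for (d in c(0.2, 0.5, 0.8)) {
  lines(p_grid, contour_n(p_grid, d), lty = 2)
  text(1e-4, contour_n(1e-4, d), labels = paste("SMD =", d), pos = 4)
}

# Hypothetical study results (P value, total n); in the published plots,
# studies with negative effects are mirrored onto a left-hand half.
points(c(0.03, 0.20, 0.62), c(120, 60, 250), pch = 19)
```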
Risk of Bias
Following the Cochrane methodology [40], we have assessed the risk of bias in all included studies. The results of the analysis are summarized in Figure 8.
Overall, we do not consider allocation bias to be a significant issue in the review, as most of the studies either described an adequate randomization method (17 of 51 studies) or, where the description was unclear (31 of 51), were judged unlikely to have seriously flawed randomization. Performance bias in comparisons with traditional education is an issue but at the same time is impossible to avoid in this type of research. The blinding of participants in virtual patient design comparisons is possible, but those studies are still relatively uncommon (n=10). The risk of assessor bias was avoided in many studies by using automated or formalized assessment instruments. Consequently, we assessed this risk as low in 42 of 51 studies. However, it is often unclear whether the instruments (eg, MCQ tests, assessment rubrics) were properly validated. We felt that in the majority of studies, attrition bias was within acceptable levels (low risk in 36 of 51 studies). This does not exclude volunteer bias, which is likely to be common, but its influence is difficult to estimate. As there is little tradition of publishing protocols in medical education research, it was problematic to assess selective reporting bias, but we judged the risk as low in 35 of 51 studies. We were unable to reliably assess publication bias considering the high heterogeneity of studies. None of the cRCT studies considered in the statistical analysis corrected for clustering, but we compensated for this by decreasing the number of participants in those studies using a method from the Cochrane Handbook (sketched below). We present more details of the risk of bias analysis in Multimedia Appendix 7.
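The sample size reduction referred to above is the standard Cochrane Handbook adjustment by the design effect; a minimal sketch, assuming an average cluster size \(\bar{m}\) and an intracluster correlation coefficient \(\rho\) (which, when not reported, must be assumed from external estimates):

\[
\mathrm{DE} = 1 + (\bar{m} - 1)\,\rho, \qquad n_{\mathrm{effective}} = \frac{n}{\mathrm{DE}}
\]

For example, 200 participants randomized in clusters of 20 with \(\rho = 0.05\) give DE = 1 + 19 × 0.05 = 1.95, so the trial contributes an effective sample of about 103 participants to the pooled analysis.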
We rated down the quality of evidence for knowledge and skills outcomes in the comparison of virtual patients with traditional education because of the high heterogeneity of included studies and limitations in study design (lack of participant blinding, nonvalidated instruments, and potential volunteer bias). For attitudinal and satisfaction outcomes and for the other types of comparisons, we additionally rated down the quality because the outcomes were presented as independent questionnaire items not amenable to statistical analysis, or because the analyses contained just a handful of studies and the CIs were wide. Summary of findings (GRADE) tables are presented in Multimedia Appendix 8.
Discussion
Principal Findings
The aim of this review was to evaluate the effectiveness of virtual patients in comparison with other existing educational methods.
There is low quality evidence that virtual patients are at least as effective as traditional education for knowledge outcomes and more effective for skills outcomes. On the basis of the visual analysis of albatross plots, we may hypothesize that replacing passive forms of traditional education with virtual patients brings more benefit than replacing active learning methods. We collected positive evidence of effectiveness from both high-income and low-and-middle-income countries, demonstrating the global applicability of virtual patients. Students were generally satisfied with the use of virtual patients, but we also located studies in which the use of virtual patients was associated with diminished confidence.
The strength of our systematic review is the broad perspective which shows the landscape of RCTs in the domain of virtual patients. Our systematic review updates the evidence on virtual patient effectiveness, which was last summarized in a meta-analysis almost a decade ago.
Limitations
A limitation of our work is that the wide scope of the review does not allow nuances in the studies to be explored in detail. We were unable to make a firm assessment of publication bias. The high heterogeneity of the results leads to the conclusion that, without further consideration of needs and implementation details, we cannot expect that the introduction of virtual patients will always lead to detectable positive outcomes. Evidence to determine the effective design factors is sparse, represented by only 10 studies in our review with very diverse research questions.
Our review is limited by the decision to exclude crossover design studies. However, this has been discussed in detail in the potential biases in the review process section in Multimedia Appendix 7. We excluded studies published before 1991 as we consider the technology available before the World Wide Web to be materially different from that currently available. Finally, we are limited by the sparse description of the interventions in some of the papers, which occasionally might have led to misclassification of the studies.
Comparison With Prior Work
Extending the results of the meta-analysis by Cook et al [38], and in agreement with the one by Consorti et al [39], our review shows that virtual patients have an overall positive pooled effect when compared with some other types of traditional educational methods. Our observations regarding the influence of the type of outcome (knowledge/skills) and comparison (active/passive traditional learning) supplement the evidence in the previous reviews [38,39], which included studies until 2010. This time point divides the collected evidence into 2 parts: (1) the period already covered by previous reviews (1991-2010) and (2) the period not included in the previous reviews (2011-2018). It is interesting to note that, though the former timeframe spans 20 years compared with 8 years in the latter, more studies were included from the latter period: 22 studies (until 2010) versus 29 studies (after 2010). This demonstrates increased interest in virtual patients and in medical education research in general. The research community around digital health education has long been criticized for publishing media-comparative studies [103,104]. Media-comparative research aims to make comparisons between different media formats such as paper, face-to-face, and digital education [104]. Both Friedman and Cook argue [103,104] that the limitation of this type of comparison boils down to the inability to produce an adequate control group, as the interventions are influenced by too many confounding factors to be generalizable. Even though there are still many media-comparative studies, the number of studies comparing different forms of digital education seems to be increasing: 3/22 (14%) until 2010 versus 11/29 (38%) after 2010. The number of studies in which students worked from home as the intervention has also increased: before 2011, it was just 1/16 (6%; unclear in 6 studies), whereas after 2011, it was 11/22 (50%; unclear in 7 studies). However, this potentially raises concerns about how controlled the interventions and measures were, and thus about the validity of the conclusions.
Our observation that virtual patient simulations predominantly affect skills rather than knowledge outcomes can be interpreted as an indication that, for the lower levels of Bloom's taxonomy [105] (remember, understand), there is little added value in introducing virtual patients compared with traditional methods of education. Virtual patients can have greater impact when applied where knowledge is combined with skills and applied in problem solving, and when direct patient contact is not yet possible. We found little evidence to support the use of virtual patients at higher levels of the taxonomy. We also warn against using our results to justify diminished hours of bedside teaching, as this was investigated in just 1 study [75] and did not show positive outcomes. Consequently, virtual patients can be said to be a modality for learning in which learners actively use and train their clinical reasoning and critical thinking abilities before bedside learning, as was previously suggested in the critical literature review by Cook and Triola [22].
The perceptions of students toward studying with virtual patients are generally positive. However, some exceptions can be noted. In 1 study [77], students were less confident in their skills when compared with facilitated group discussion and lecture. This contrasts with the lack of observable differences, or even better performance, in the virtual patient group on the objective outcomes in those studies. This could be explained by disbelief in the effectiveness of new computer-based methods of learning or by anxiety about losing direct patient contact.
The results of our subgroup analysis, though inconsistent, encourage the introduction of more active forms of education. Yet, we note that the range from active to passive learning forms a continuum, and the decision on how to classify each intervention is hampered by sparse descriptions in the reports. Nevertheless, questioning the utility of passive learning is not a new finding and is observed elsewhere, for instance, in the literature on the flipped-classroom learning approach [106]. As the effects of comparing virtual patients with other forms of active learning were small and we could not detect any other variables explaining the heterogeneity, it seems reasonable to individually consider other factors such as cost of use, time flexibility, personnel shortage, and availability in different settings (eg, students’ homes or locations remote from academic centers) when determining which methods to use.
The need for more guidance within virtual patient simulations is apparent in studies differing by instructional method, where a narrative virtual patient design was better than more autonomous problem-oriented designs [91]. Feedback given by humans at a distance in a virtual patient system was better at increasing empathy than an animated backstory [95], whereas the more active task of constructing virtual patients, with more time on task but no feedback, produced no better outcomes than learning from a virtual patient scenario [100]. This reminds us that presenting realistic patient scenarios with a great degree of freedom cannot be an excuse for neglecting guidance in relation to learning objectives [107,108].
Outlook
We join the plea of Friedman [103] and Cook [104] to abandon media-comparative research, as it is difficult to interpret, and instead encourage greater focus on exploring the utility of different design variants of virtual patient simulations. The current knowledge on the influence of these factors is sparse. Carefully planned studies grounded in sound educational theory should provide many valuable research opportunities. However, sufficiently powered samples are needed, as the effects are likely to be small. A second consideration pertains to the need to use previously validated measurement tools that are well aligned with the learning objectives. Comparing outcomes item-by-item is methodologically questionable and makes the aggregation of results in systematic reviews difficult. We also call for more studies in health professions disciplines other than medicine, as our subgroup analysis showed that evidence of virtual patient effectiveness in programs such as nursing, physiotherapy, or pharmacy is underrepresented. Patient outcomes and the cost-effectiveness of virtual patients have not yet been explored directly and form a key avenue for future efforts.
Conclusions
Low to modest and mixed evidence suggests that, compared with traditional education, virtual patients can improve skills more effectively and knowledge at least as effectively. Education with virtual patients provides an active form of learning that is beneficial for clinical reasoning skills. Implementations vary and are likely to be broad across pre- and postregistration education, although current studies do not provide clear guidance on when to use virtual patients. We recommend that further research be focused on exploring the utility of different design variants of virtual patients.
Acknowledgments
The authors wish to acknowledge the help of Jayne Alfredsson, Carina Georg, Anneliese Lilienthal, Italo Masiello, and Monika Semwal at the stage of protocol writing. The authors would like to thank Jim Campbell for his contribution to coordination and conceptual formation of the Digital Health Education Collaboration. The authors would also like to thank Carl Gornitzki, GunBrit Knutssön, and Klas Moberg from the University Library, Karolinska Institute, for developing the search strategy and Ram Chandra Bajpai from Nanyang Technological University for statistical consulting and Pawel Posadzki for his supportive feedback on the manuscript.
At the time of the study, SE was affiliated with the Karolinska Institutet and Linköping University, and N Saxena was affiliated with Health Services and Outcomes Research, National Healthcare Group, Singapore, Singapore.
Abbreviations
- 3D: 3-dimensional
- cRCT: cluster randomized controlled trial
- GRADE: Grading of Recommendations Assessment, Development and Evaluation
- MCQ: multiple-choice question
- OSCE: objective structured clinical examination
- RCT: randomized controlled trial
- SMD: standardized mean difference
Footnotes
Authors' Contributions: JC conceived the idea for the review. AK drafted the protocol with substantial contributions from LW, SE, NS, DD, NSA, LTC, and NZ. The Digital Health Education Collaboration developed the search strategy, obtained copies of studies, and screened the studies. JC and JCD contributed to the coordination and conceptual formation of the Digital Health Education Collaboration. AK, LW, NS, and DD extracted data from studies and conducted the risk of bias assessment. LW contacted study authors in cases of missing data. AK carried out the analysis of collected data. LTC provided methodological guidance. AK, LW, SE, NS, DD, LTC, and NZ interpreted the analysis and contributed to the discussion. AK drafted the final review with substantial contributions from all authors. All authors revised and approved the final version of the review.
Conflicts of Interest: None declared.
References
- 1. Ramani S, Leinster S. AMEE Guide no. 34: teaching in the clinical environment. Med Teach. 2008 Jan;30(4):347–64. doi: 10.1080/01421590802061613.
- 2. Moalem J, Salzman P, Ruan DT, Cherr GS, Freiburg CB, Farkas RL, Brewster L, James TA. Should all duty hours be the same? Results of a national survey of surgical trainees. J Am Coll Surg. 2009 Jul;209(1):47–54, 54.e1-2. doi: 10.1016/j.jamcollsurg.2009.02.053.
- 3. Dev P, Schleyer T. Computers in health care education. In: Shortliffe E, Cimino J, editors. Biomedical Informatics. London: Springer; 2014. pp. 675–93.
- 4. Frenk J, Chen L, Bhutta ZA, Cohen J, Crisp N, Evans T, Fineberg H, Garcia P, Ke Y, Kelley P, Kistnasamy B, Meleis A, Naylor D, Pablos-Mendez A, Reddy S, Scrimshaw S, Sepulveda J, Serwadda D, Zurayk H. Health professionals for a new century: transforming education to strengthen health systems in an interdependent world. Lancet. 2010 Dec 4;376(9756):1923–58. doi: 10.1016/S0140-6736(10)61854-5.
- 5. Crisp N, Gawanas B, Sharp I, Task Force for Scaling Up Education and Training for Health Workers. Training the health workforce: scaling up, saving lives. Lancet. 2008 Feb 23;371(9613):689–91. doi: 10.1016/S0140-6736(08)60309-8.
- 6. Car J, Carlstedt-Duke J, Tudor Car L, Posadzki P, Whiting P, Zary N, Atun R, Majeed A, Campbell J, Digital Health Education Collaboration. Digital education in health professions: the need for overarching evidence synthesis. J Med Internet Res. 2019 Feb 14;21(2):e12913. doi: 10.2196/12913. http://www.jmir.org/2019/2/e12913/
- 7. Bajpai S, Semwal M, Bajpai R, Car J, Ho AH. Health professions' digital education: review of learning theories in randomized controlled trials by the Digital Health Education Collaboration. J Med Internet Res. 2019 Mar 12;21(3):e12912. doi: 10.2196/12912. http://www.jmir.org/2019/3/e12912/
- 8. Dunleavy G, Nikolaou CK, Nifakos S, Atun R, Law GC, Tudor Car L. Mobile digital education for health professions: systematic review and meta-analysis by the Digital Health Education Collaboration. J Med Internet Res. 2019 Feb 12;21(2):e12937. doi: 10.2196/12937. http://www.jmir.org/2019/2/e12937/
- 9. Gentry SV, Gauthier A, L'Estrade Ehrstrom B, Wortley D, Lilienthal A, Tudor Car L, Dauwels-Okutsu S, Nikolaou CK, Zary N, Campbell J, Car J. Serious gaming and gamification education in health professions: systematic review. J Med Internet Res. 2019 Mar 28;21(3):e12994. doi: 10.2196/12994. http://www.jmir.org/2019/3/e12994/
- 10. George PP, Zhabenko O, Kyaw BM, Antoniou P, Posadzki P, Saxena N, Semwal M, Tudor Car L, Zary N, Lockwood C, Car J. Online digital education for postregistration training of medical doctors: systematic review by the Digital Health Education Collaboration. J Med Internet Res. 2019 Feb 25;21(2):e13269. doi: 10.2196/13269. http://www.jmir.org/2019/2/e13269/
- 11. Huang Z, Semwal M, Lee SY, Tee M, Ong W, Tan WS, Bajpai R, Tudor Car L. Digital health professions education on diabetes management: systematic review by the Digital Health Education Collaboration. J Med Internet Res. 2019 Feb 21;21(2):e12997. doi: 10.2196/12997. http://www.jmir.org/2019/2/e12997/
- 12. Kyaw BM, Posadzki P, Dunleavy G, Semwal M, Divakar U, Hervatis V, Tudor Car L. Offline digital education for medical students: systematic review and meta-analysis by the Digital Health Education Collaboration. J Med Internet Res. 2019 Mar 25;21(3):e13165. doi: 10.2196/13165. http://www.jmir.org/2019/3/e13165/
- 13. Kyaw BM, Saxena N, Posadzki P, Vseteckova J, Nikolaou CK, George PP, Divakar U, Masiello I, Kononowicz AA, Zary N, Tudor Car L. Virtual reality for health professions education: systematic review and meta-analysis by the Digital Health Education Collaboration. J Med Internet Res. 2019 Jan 22;21(1):e12959. doi: 10.2196/12959. http://www.jmir.org/2019/1/e12959/
- 14. Lall P, Rees R, Law GC, Dunleavy G, Cotic Ž, Car J. Influences on the implementation of mobile learning for medical and nursing education: qualitative systematic review by the Digital Health Education Collaboration. J Med Internet Res. 2019 Feb 28;21(2):e12895. doi: 10.2196/12895. http://www.jmir.org/2019/2/e12895/
- 15. Martinengo L, Yeo NJY, Tang ZQ, Markandran KD, Kyaw BM, Tudor Car L. Digital education for the management of chronic wounds in health care professionals: protocol for a systematic review by the Digital Health Education Collaboration. JMIR Res Protoc. 2019 Mar 25;8(3):e12488. doi: 10.2196/12488. http://www.researchprotocols.org/2019/3/e12488/
- 16. Posadzki P, Bala MM, Kyaw BM, Semwal M, Divakar U, Koperny M, Sliwka A, Car J. Offline digital education for postregistration health professions: systematic review and meta-analysis by the Digital Health Education Collaboration. J Med Internet Res. 2019 Apr 24;21(4):e12968. doi: 10.2196/12968. http://www.jmir.org/2019/4/e12968/
- 17. Semwal M, Whiting P, Bajpai R, Bajpai S, Kyaw BM, Tudor Car L. Digital education for health professions on smoking cessation management: systematic review by the Digital Health Education Collaboration. J Med Internet Res. 2019 Mar 4;21(3):e13000. doi: 10.2196/13000. http://www.jmir.org/2019/3/e13000/
- 18. Tudor Car L, Kyaw BM, Dunleavy G, Smart NA, Semwal M, Rotgans JI, Low-Beer N, Campbell J. Digital problem-based learning in health professions: systematic review and meta-analysis by the Digital Health Education Collaboration. J Med Internet Res. 2019 Feb 28;21(2):e12945. doi: 10.2196/12945. http://www.jmir.org/2019/2/e12945/
- 19. Wahabi HA, Esmaeil SA, Bahkali KH, Titi MA, Amer YS, Fayed AA, Jamal A, Zakaria N, Siddiqui AR, Semwal M, Car LT, Posadzki P, Car J. Medical doctors' offline computer-assisted digital education: systematic review by the Digital Health Education Collaboration. J Med Internet Res. 2019 Mar 1;21(3):e12998. doi: 10.2196/12998. http://www.jmir.org/2019/3/e12998/
- 20. Ellaway R, Candler C, Greene P, Smothers V. An Architectural Model for MedBiquitous Virtual Patients. MedBiquitous. [2019-05-10]. http://tinyurl.com/jpewpbt
- 21. Kononowicz AA, Zary N, Edelbring S, Corral J, Hege I. Virtual patients - what are we talking about? A framework to classify the meanings of the term in healthcare education. BMC Med Educ. 2015;15:11. doi: 10.1186/s12909-015-0296-3. http://www.biomedcentral.com/1472-6920/15/11
- 22. Cook DA, Triola MM. Virtual patients: a critical literature review and proposed next steps. Med Educ. 2009 Apr;43(4):303–11. doi: 10.1111/j.1365-2923.2008.03286.x.
- 23. Posel N, McGee JB, Fleiszer DM. Twelve tips to support the development of clinical reasoning skills using virtual patient cases. Med Teach. 2014 Dec 19:1–6. doi: 10.3109/0142159X.2014.993951.
- 24. Berman NB, Durning SJ, Fischer MR, Huwendiek S, Triola MM. The role for virtual patients in the future of medical education. Acad Med. 2016 Sep;91(9):1217–22. doi: 10.1097/ACM.0000000000001146.
- 25. Kolb DA. Experiential Learning: Experience As The Source Of Learning And Development. Upper Saddle River, New Jersey: Prentice Hall; 1984.
- 26. Yardley S, Teunissen PW, Dornan T. Experiential learning: AMEE Guide No. 63. Med Teach. 2012;34(2):e102–15. doi: 10.3109/0142159X.2012.650741.
- 27. Edelbring S, Dastmalchi M, Hult H, Lundberg IE, Dahlgren LO. Experiencing virtual patients in clinical learning: a phenomenological study. Adv Health Sci Educ Theory Pract. 2011 Aug;16(3):331–45. doi: 10.1007/s10459-010-9265-0.
- 28. Norman G. Research in clinical reasoning: past history and current trends. Med Educ. 2005 Apr;39(4):418–27. doi: 10.1111/j.1365-2929.2005.02127.x.
- 29. Berman N, Fall LH, Smith S, Levine DA, Maloney CG, Potts M, Siegel B, Foster-Johnson L. Integration strategies for using virtual patients in clinical clerkships. Acad Med. 2009 Jul;84(7):942–9. doi: 10.1097/ACM.0b013e3181a8c668.
- 30. Kenny NP, Beagan BL. The patient as text: a challenge for problem-based learning. Med Educ. 2004 Oct;38(10):1071–9. doi: 10.1111/j.1365-2929.2004.01956.x.
- 31. Button D, Harrington A, Belan I. E-learning & information communication technology (ICT) in nursing education: a review of the literature. Nurse Educ Today. 2014 Oct;34(10):1311–23. doi: 10.1016/j.nedt.2013.05.002.
- 32. Watson JA, Pecchioni LL. Digital natives and digital media in the college classroom: assignment design and impacts on student learning. Educ Media Int. 2011 Dec;48(4):307–20. doi: 10.1080/09523987.2011.632278.
- 33. Schifferdecker KE, Berman NB, Fall LH, Fischer MR. Adoption of computer-assisted learning in medical education: the educators' perspective. Med Educ. 2012 Nov;46(11):1063–73. doi: 10.1111/j.1365-2923.2012.04350.x.
- 34. Baumann-Birkbeck L, Florentina F, Karatas O, Sun J, Tang T, Thaung V, McFarland A, Bernaitis N, Khan SA, Grant G, Anoopkumar-Dukie S. Appraising the role of the virtual patient for therapeutics health education. Curr Pharm Teach Learn. 2017;9(5):934–44. doi: 10.1016/j.cptl.2017.05.012.
- 35. Cendan J, Lok B. The use of virtual patients in medical school curricula. Adv Physiol Educ. 2012 Mar;36(1):48–53. doi: 10.1152/advan.00054.2011. http://advan.physiology.org/cgi/pmidlookup?view=long&pmid=22383412
- 36. Poulton T, Balasubramaniam C. Virtual patients: a year of change. Med Teach. 2011 Jan;33(11):933–7. doi: 10.3109/0142159X.2011.613501.
- 37. Saleh N. The value of virtual patients in medical education. Ann Behav Sci Med Educ. 2010 Oct 16;16(2):29–31. doi: 10.1007/BF03355129.
- 38. Cook DA, Erwin PJ, Triola MM. Computerized virtual patients in health professions education: a systematic review and meta-analysis. Acad Med. 2010 Oct;85(10):1589–602. doi: 10.1097/ACM.0b013e3181edfe13.
- 39. Consorti F, Mancuso R, Nocioni M, Piccolo A. Efficacy of virtual patients in medical education: a meta-analysis of randomized studies. Comput Educ. 2012 Nov;59(3):1001–8. doi: 10.1016/j.compedu.2012.04.017.
- 40. Higgins J, Green S. Cochrane Handbook For Systematic Reviews Of Interventions. Chichester, England: John Wiley & Sons; 2008.
- 41. Kononowicz AA, Woodham L, Georg C, Edelbring S, Stathakarou N, Davies D, Masiello I, Saxena N, Tudor Car L, Car J, Zary N. Virtual patient simulations for health professional education. Cochrane Database Syst Rev. 2016 May 19;(5):CD012194. doi: 10.1002/14651858.CD012194.
- 42. Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gøtzsche PC, Ioannidis JP, Clarke M, Devereaux PJ, Kleijnen J, Moher D. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. PLoS Med. 2009 Jul 21;6(7):e1000100. doi: 10.1371/journal.pmed.1000100. http://dx.plos.org/10.1371/journal.pmed.1000100
- 43. EndNote X7. Philadelphia, PA: Thomson Reuters; 2016. [2019-05-10]. https://endnote.com/
- 44. Wan X, Wang W, Liu J, Tong T. Estimating the sample mean and standard deviation from the sample size, median, range and/or interquartile range. BMC Med Res Methodol. 2014 Dec 19;14:135. doi: 10.1186/1471-2288-14-135. https://bmcmedresmethodol.biomedcentral.com/articles/10.1186/1471-2288-14-135
- 45. Review Manager (RevMan) Version 5.3. Copenhagen, Denmark: The Nordic Cochrane Centre; 2014. [2019-05-10]. https://community.cochrane.org/help/tools-and-software/revman-5
- 46. Harrison S, Jones HE, Martin RM, Lewis SJ, Higgins JP. The albatross plot: a novel graphical tool for presenting results of diversely reported studies in a systematic review. Res Synth Methods. 2017 Sep;8(3):281–9. doi: 10.1002/jrsm.1239. http://europepmc.org/abstract/MED/28453179
- 47. R: A Language and Environment for Statistical Computing. Vienna, Austria: R Core Team; 2017. [2019-05-10]. https://www.r-project.org/
- 48. Kurihara Y, Kuramoto S, Matsuura K, Miki Y, Oda K, Seo H, Watabe T, Qayumi AK. Academic performance and comparative effectiveness of computer- and textbook-based self-instruction. Stud Health Technol Inform. 2004;107(Pt 2):894–7.
- 49. Succar T, Grigg J. A new vision for teaching ophthalmology in the medical curriculum: The Virtual Ophthalmology Clinic. ASCILITE Conference; December 5-8, 2010; Sydney, Australia. 2010. pp. 944–7.
- 50. Kononowicz AA, Krawczyk P, Cebula G, Dembkowska M, Drab E, Fraczek B, Stachoń AJ, Andres J. Effects of introducing a voluntary virtual patient module to a basic life support with an automated external defibrillator course: a randomised trial. BMC Med Educ. 2012 Jun 18;12:41. doi: 10.1186/1472-6920-12-41. https://bmcmededuc.biomedcentral.com/articles/10.1186/1472-6920-12-41
- 51. Kumta SM, Tsang PL, Hung LK, Cheng JCY. Fostering critical thinking skills through a web-based tutorial programme for final year medical students: a randomized controlled study. J Educ Multimed Hypermedia. 2003;12(3):267–73.
- 52. Succar T, Zebington G, Billson F, Byth K, Barrie S, McCluskey P, Grigg J. The impact of the Virtual Ophthalmology Clinic on medical students' learning: a randomised controlled trial. Eye (Lond). 2013 Oct;27(10):1151–7. doi: 10.1038/eye.2013.143.
- 53. Al-Dahir S, Bryant K, Kennedy KB, Robinson DS. Online virtual-patient cases versus traditional problem-based learning in advanced pharmacy practice experiences. Am J Pharm Educ. 2014 May 15;78(4):76. doi: 10.5688/ajpe78476. http://europepmc.org/abstract/MED/24850938
- 54. Bonnetain E, Boucheix JM, Hamet M, Freysz M. Benefits of computer screen-based simulation in learning cardiac arrest procedures. Med Educ. 2010 Jul;44(7):716–22. doi: 10.1111/j.1365-2923.2010.03708.x.
- 55. Botezatu M, Hult H, Tessma MK, Fors UGH. Virtual patient simulation for learning and assessment: superior results in comparison with regular course exams. Med Teach. 2010;32(10):845–50. doi: 10.3109/01421591003695287.
- 56. Botezatu M, Hult H, Tessma MK, Fors U. Virtual patient simulation: knowledge gain or knowledge loss? Med Teach. 2010;32(7):562–8. doi: 10.3109/01421590903514630.
- 57. Fleetwood J, Vaught W, Feldman D, Gracely E, Kassutto Z, Novack D. MedEthEx Online: a computer-based learning program in medical ethics and communication skills. Teach Learn Med. 2000;12(2):96–104. doi: 10.1207/S15328015TLM1202_7.
- 58. Haerling KA. Cost-utility analysis of virtual and mannequin-based simulation. Simul Healthc. 2018 Feb;13(1):33–40. doi: 10.1097/SIH.0000000000000280.
- 59. Jeimy S, Wang JY, Richardson L. Evaluation of virtual patient cases for teaching diagnostic and management skills in internal medicine: a mixed methods study. BMC Res Notes. 2018 Jun 5;11(1):357. doi: 10.1186/s13104-018-3463-x. https://bmcresnotes.biomedcentral.com/articles/10.1186/s13104-018-3463-x
- 60. Kandasamy T, Fung K. Interactive internet-based cases for undergraduate otolaryngology education. Otolaryngol Head Neck Surg. 2009 Mar;140(3):398–402. doi: 10.1016/j.otohns.2008.11.033.
- 61. Kinney P, Keskula DR, Perry JF. The effect of a computer assisted instructional program on physical therapy students. J Allied Health. 1997;26(2):57–61.
- 62. Leong SL, Baldwin CD, Adelman AM. Integrating web-based computer cases into a required clerkship: development and evaluation. Acad Med. 2003 Mar;78(3):295–301. doi: 10.1097/00001888-200303000-00012.
- 63. Li J, Li QL, Li J, Chen ML, Xie HF, Li YP, Chen X. Comparison of three problem-based learning conditions (real patients, digital and paper) with lecture-based learning in a dermatology course: a prospective randomized study from China. Med Teach. 2013;35(2):e963–70. doi: 10.3109/0142159X.2012.719651.
- 64. Liaw SY, Chan SW, Chen F, Hooi SC, Siau C. Comparison of virtual patient simulation with mannequin-based simulation for improving clinical performances in assessing and managing clinical deterioration: randomized controlled trial. J Med Internet Res. 2014;16(9):e214. doi: 10.2196/jmir.3322. http://www.jmir.org/2014/9/e214/
- 65. Maleck M, Fischer MR, Kammer B, Zeiler C, Mangel E, Schenk F, Pfeifer KJ. Do computers teach better? A media comparison study for case-based teaching in radiology. Radiographics. 2001;21(4):1025–32. doi: 10.1148/radiographics.21.4.g01jl091025.
- 66. Miedzybrodzka Z, Hamilton NM, Gregory H, Milner B, Frade I, Sinclair T, Mollison J, Haites N. Teaching undergraduates about familial breast cancer: comparison of a computer assisted learning (CAL) package with a traditional tutorial approach. Eur J Hum Genet. 2001 Dec;9(12):953–6. doi: 10.1038/sj.ejhg.5200751.
- 67. Qayumi AK, Kurihara Y, Imai M, Pachev G, Seo H, Hoshino Y, Cheifetz R, Matsuura K, Momoi M, Saleem M, Lara-Guerra H, Miki Y, Kariya Y. Comparison of computer-assisted instruction (CAI) versus traditional textbook methods for training in abdominal examination (Japanese experience). Med Educ. 2004 Oct;38(10):1080–8. doi: 10.1111/j.1365-2929.2004.01957.x.
- 68. Schwid HA, Rooke GA, Ross BK, Sivarajan M. Use of a computerized advanced cardiac life support simulator improves retention of advanced cardiac life support guidelines better than a textbook review. Crit Care Med. 1999 Apr;27(4):821–4. doi: 10.1097/00003246-199904000-00045.
- 69. Schwid HA, Rooke GA, Michalowski P, Ross BK. Screen-based anesthesia simulation with debriefing improves performance in a mannequin-based anesthesia simulator. Teach Learn Med. 2001;13(2):92–6. doi: 10.1207/S15328015TLM1302_4.
- 70. Secomb J, McKenna L, Smith C. The effectiveness of simulation activities on the cognitive abilities of undergraduate third-year nursing students: a randomised control trial. J Clin Nurs. 2012 Dec;21(23-24):3475–84. doi: 10.1111/j.1365-2702.2012.04257.x.
- 71. Sobocan M, Turk N, Dinevski D, Hojs R, Balon BP. Problem-based learning in internal medicine: virtual patients or paper-based problems? Intern Med J. 2017 Jan;47(1):99–103. doi: 10.1111/imj.13304.
- 72. Subramanian A, Timberlake M, Mittakanti H, Lara M, Brandt ML. Novel educational approach for medical students: improved retention rates using interactive medical software compared with traditional lecture-based format. J Surg Educ. 2012;69(2):253–6. doi: 10.1016/j.jsurg.2011.12.007.
- 73. Tao H. Computer-based simulative training system—a new approach to teaching pre-hospital trauma care. J Med Coll PLA. 2011 Dec;26(6):335–44. doi: 10.1016/S1000-1948(12)60029-X.
- 74. Triola M, Feldman H, Kalet AL, Zabar S, Kachur EK, Gillespie C, Anderson M, Griesser C, Lipkin M. A randomized trial of teaching clinical skills using virtual and live standardized patients. J Gen Intern Med. 2006 May;21(5):424–9. doi: 10.1111/j.1525-1497.2006.00421.x.
- 75. Vash JH, Yunesian M, Shariati M, Keshvari A, Harirchi I. Virtual patients in undergraduate surgery education: a randomized controlled study. ANZ J Surg. 2007;77(1-2):54–9. doi: 10.1111/j.1445-2197.2006.03978.x.
- 76. Wang CL, Chinnugounder S, Hippe DS, Zaidi S, O'Malley RB, Bhargava P, Bush WH. Comparative effectiveness of hands-on versus computer simulation-based training for contrast media reactions and teamwork skills. J Am Coll Radiol. 2017 Jan;14(1):103–10.e3. doi: 10.1016/j.jacr.2016.07.013.
- 77. Williams C, Aubin S, Harkin P, Cottrell D. A randomized, controlled, single-blind trial of teaching provided by a computer-based multimedia package versus lecture. Med Educ. 2001 Sep;35(9):847–54. doi: 10.1046/j.1365-2923.2001.00960.x.
- 78. Bryant R, Miller CL, Henderson D. Virtual clinical simulations in an online advanced health appraisal course. Clin Simul Nurs. 2015 Oct;11(10):437–44. doi: 10.1016/j.ecns.2015.08.002.
- 79. Deladisma AM, Gupta M, Kotranza A, Bittner JG, Imam T, Swinson D, Gucwa A, Nesbit R, Lok B, Pugh C, Lind DS. A pilot study to integrate an immersive virtual patient with a breast complaint and breast examination simulator into a surgery clerkship. Am J Surg. 2009 Jan;197(1):102–6. doi: 10.1016/j.amjsurg.2008.08.012.
- 80. Gu Y, Zou Z, Chen X. The effects of vSIM for Nursing™ as a teaching strategy on fundamentals of nursing education in undergraduates. Clin Simul Nurs. 2017 Apr;13(4):194–7. doi: 10.1016/j.ecns.2017.01.005.
- 81. Kaltman S, Talisman N, Pennestri S, Syverson E, Arthur P, Vovides Y. Using technology to enhance teaching of patient-centered interviewing for early medical students. Simul Healthc. 2018 Jun;13(3):188–94. doi: 10.1097/SIH.0000000000000304.
- 82. Lehmann R, Thiessen C, Frick B, Bosse HM, Nikendei C, Hoffmann GF, Tönshoff B, Huwendiek S. Improving pediatric basic life support performance through blended learning with web-based virtual patients: randomized controlled trial. J Med Internet Res. 2015 Jul 2;17(7):e162. doi: 10.2196/jmir.4141. http://www.jmir.org/2015/7/e162/
- 83. Schittek Janda M, Mattheos N, Nattestad A, Wagner A, Nebel D, Färbom C, Lê D, Attström R. Simulation of patient encounters using a virtual patient in periodontology instruction of dental students: design, usability, and learning effect in history-taking skills. Eur J Dent Educ. 2004 Aug;8(3):111–9. doi: 10.1111/j.1600-0579.2004.00339.x.
- 84. Smith BD, Silk K. Cultural competence clinic: an online, interactive, simulation for working effectively with Arab American Muslim patients. Acad Psychiatry. 2011;35(5):312–6. doi: 10.1176/appi.ap.35.5.312.
- 85. Wahlgren CF, Edelbring S, Fors U, Hindbeck H, Ståhle M. Evaluation of an interactive case simulation system in dermatology and venereology for medical students. BMC Med Educ. 2006;6:40. doi: 10.1186/1472-6920-6-40. http://www.biomedcentral.com/1472-6920/6/40
- 86. Weverling GJ, Stam J, ten Cate TJ, van Crevel H. [Computer-assisted education in problem-solving in neurology; a randomized educational study]. Ned Tijdschr Geneeskd. 1996 Feb 24;140(8):440–3.
- 87. Courteille O, Fahlstedt M, Ho J, Hedman L, Fors U, von Holst H, Felländer-Tsai L, Möller H. Learning through a virtual patient vs recorded lecture: a comparison of knowledge retention in a trauma case. Int J Med Educ. 2018 Mar 28;9:86–92. doi: 10.5116/ijme.5aa3.ccf2. https://www.ijme.net/pmid/29599421
- 88. Dankbaar ME, Alsma J, Jansen EE, van Merrienboer JJ, van Saase JL, Schuit SC. An experimental study on the effects of a simulation game on students' clinical cognitive skills and motivation. Adv Health Sci Educ Theory Pract. 2016 Aug;21(3):505–21. doi: 10.1007/s10459-015-9641-x.
- 89. Foster A, Chaudhary N, Murphy J, Lok B, Waller J, Buckley PF. The use of simulation to teach suicide risk assessment to health profession trainees—rationale, methodology, and a proof of concept demonstration with a virtual patient. Acad Psychiatry. 2015 Dec;39(6):620–9. doi: 10.1007/s40596-014-0185-9.
- 90. Trudeau KJ, Hildebrand C, Garg P, Chiauzzi E, Zacharoff KL. A randomized controlled trial of the effects of online pain management education on primary care providers. Pain Med. 2017 Apr 1;18(4):680–92. doi: 10.1093/pm/pnw271.
- 91. Bearman M, Cesnik B, Liddell M. Random comparison of 'virtual patient' models in the context of teaching clinical communication skills. Med Educ. 2001 Sep;35(9):824–32. doi: 10.1046/j.1365-2923.2001.00999.x.
- 92. Berger J, Bawab N, de Mooij J, Widmer DS, Szilas N, de Vriese C, Bugnon O. An open randomized controlled study comparing an online text-based scenario and a serious game by Belgian and Swiss pharmacy students. Curr Pharm Teach Learn. 2018 Mar;10(3):267–76. doi: 10.1016/j.cptl.2017.11.002.
- 93. Braun LT, Zottmann JM, Adolf C, Lottspeich C, Then C, Wirth S, Fischer MR, Schmidmaier R. Representation scaffolds improve diagnostic efficiency in medical students. Med Educ. 2017 Nov;51(11):1118–26. doi: 10.1111/medu.13355.
- 94. Davids MR, Chikte UM, Halperin ML. Effect of improving the usability of an e-learning resource: a randomized trial. Adv Physiol Educ. 2014 Jun;38(2):155–60. doi: 10.1152/advan.00119.2013.
- 95. Foster A, Chaudhary N, Kim T, Waller JL, Wong J, Borish M, Cordar A, Lok B, Buckley PF. Using virtual patients to teach empathy: a randomized controlled study to enhance medical students' empathic communication. Simul Healthc. 2016 Jun;11(3):181–9. doi: 10.1097/SIH.0000000000000142.
- 96. Friedman CP, France CL, Drossman DD. A randomized comparison of alternative formats for clinical simulations. Med Decis Making. 1991;11(4):265–72. doi: 10.1177/0272989X9101100404.
- 97. Harris JM, Sun H. A randomized trial of two e-learning strategies for teaching substance abuse management skills to physicians. Acad Med. 2013 Sep;88(9):1357–62. doi: 10.1097/ACM.0b013e31829e7ec6. http://europepmc.org/abstract/MED/23887001
- 98. Mahnken AH, Baumann M, Meister M, Schmitt V, Fischer MR. Blended learning in radiology: is self-determined learning really more effective? Eur J Radiol. 2011 Jun;78(3):384–7. doi: 10.1016/j.ejrad.2010.12.059.
- 99. Maier EM, Hege I, Muntau AC, Huber J, Fischer MR. What are effects of a spaced activation of virtual patients in a pediatric course? BMC Med Educ. 2013 Mar 28;13:45. doi: 10.1186/1472-6920-13-45. https://bmcmededuc.biomedcentral.com/articles/10.1186/1472-6920-13-45
- 100. Tolsgaard MG, Jepsen RMHG, Rasmussen MB, Kayser L, Fors U, Laursen LC, Svendsen JH, Ringsted C. The effect of constructing versus solving virtual patient cases on transfer of learning: a randomized trial. Perspect Med Educ. 2016 Feb;5(1):33–8. doi: 10.1007/s40037-015-0242-4. http://europepmc.org/abstract/MED/26754313
- 101. Kraiger K, Ford JK, Salas E. Application of cognitive, skill-based, and affective theories of learning outcomes to new methods of training evaluation. J Appl Psychol. 1993;78(2):311–28. doi: 10.1037/0021-9010.78.2.311.
- 102. Stevens A, Hernandez J, Johnsen K, Dickerson R, Raij A, Harrison C, DiPietro M, Allen B, Ferdig R, Foti S, Jackson J, Shin M, Cendan J, Watson R, Duerson M, Lok B, Cohen M, Wagner P, Lind DS. The use of virtual patients to teach medical students history taking and communication skills. Am J Surg. 2006 Jun;191(6):806–11. doi: 10.1016/j.amjsurg.2006.03.002.
- 103. Friedman CP. The research we should be doing. Acad Med. 1994 Jun;69(6):455–7. doi: 10.1097/00001888-199406000-00005.
- 104. Cook DA. The research we still are not doing: an agenda for the study of computer-based learning. Acad Med. 2005 Jun;80(6):541–8. doi: 10.1097/00001888-200506000-00005.
- 105. Krathwohl DR. A revision of Bloom's taxonomy: an overview. Theory Pract. 2002 Nov;41(4):212–8. doi: 10.1207/s15430421tip4104_2.
- 106. Chen F, Lui AM, Martinelli SM. A systematic review of the effectiveness of flipped classrooms in medical education. Med Educ. 2017 Jun;51(6):585–97. doi: 10.1111/medu.13272.
- 107. Kirschner PA, Sweller J, Clark RE. Why minimal guidance during instruction does not work: an analysis of the failure of constructivist, discovery, problem-based, experiential, and inquiry-based teaching. Educ Psychol. 2006 Jun;41(2):75–86. doi: 10.1207/s15326985ep4102_1.
- 108. Edelbring S, Wahlström R. Dynamics of study strategies and teacher regulation in virtual patient learning activities: a cross sectional survey. BMC Med Educ. 2016 Apr 23;16:122. doi: 10.1186/s12909-016-0644-y. https://bmcmededuc.biomedcentral.com/articles/10.1186/s12909-016-0644-y
Supplementary Materials
Glossary.
MEDLINE (Ovid) search strategy.
Summary of included studies.
Summary of technical and educational features of included studies.
Summary of excluded studies.
Subgroup analysis details.
Quality of the evidence.
Summary of findings tables.