Journal of Professional Nursing. 2021 May 8;37(4):683–689. doi: 10.1016/j.profnurs.2021.05.005

Exploring student perceptions of virtual simulation versus traditional clinical and manikin-based simulation

Donna Badowski, Kelly L Rossler, Nanci Reiland
PMCID: PMC9767435  PMID: 34187664

Abstract

Background

The COVID-19 pandemic immediately changed the way pre-licensure nursing programs provide clinical experiences. Healthcare organizations closed access to clinical experiences for nursing students, and universities immediately shifted to remote learning and online virtual simulation.

Purpose

This research examined students' perceptions of virtual simulation in meeting their learning needs when compared to traditional clinical experiences and manikin-based simulation environments.

Methods

In a retrospective, multi-site, exploratory descriptive design, 97 participants completed the Clinical Learning Environment Comparison Survey 2.0 after experiencing virtual simulation. A Kruskal-Wallis test was used to examine differences among participants when grouped by degree program and by level/term within the nursing program.

Results

Traditional clinical experiences met students' perceived learning needs for all degree programs of study for subscale items of communication, nursing process, holism, critical thinking, and self-efficacy. When grouped by level/term, traditional clinical experiences met all students' perceived learning needs for every subscale item. Manikin-based simulation met students' perceived learning needs for subscale items of critical thinking and teaching-learning dyad while virtual simulation met perceived learning needs for subscale items of nursing process, critical thinking, self-efficacy, and teaching-learning dyad.

Conclusion

While traditional clinical learning experiences remain the “gold standard”, manikin-based and virtual simulation do meet specific, important learning needs.

Keywords: Virtual simulation, Simulation-based education, Clinical, Alternate clinical learning experiences

Introduction

The COVID-19 pandemic caused a major shift in clinical learning experiences for pre-licensure nursing programs. In the early months, the numbers of COVID-19-positive cases, hospitalizations, and deaths grew exponentially. By August of 2020, the U.S. COVID-related hospitalization rate was near 150 per 100,000 [Centers for Disease Control (CDC), 2020]. Globally, governments began implementing lockdown measures to flatten the curve and prevent overwhelming healthcare systems. In response, colleges and universities had to shift immediately to remote learning (Dewart, Corcoran, Thirsk, & Petrovic, 2020; Leigh et al., 2020). Healthcare organizations closed access to clinical practice experiences for nursing students. The disruption in the healthcare system led nursing programs to shift to simulation-based learning experiences as a substitute for traditional clinical. However, many colleges and universities closed due to state lockdowns intended to prevent the virus's spread, which also closed access to simulation labs. Both of these losses resulted in nursing programs quickly shifting to online virtual simulation (VS).

Since the National Council of State Boards of Nursing (NCSBN) study, many State Boards of Nursing have established policies on the percentage of clinical time that can be substituted with simulation (Smiley, 2019). Policy decisions were informed by the study results demonstrating no differences in clinical competency, comprehensive nursing knowledge, or NCLEX pass rates when up to 50% of traditional clinical was substituted with high-quality simulation (Hayden, Smiley, Alexander, Kardong-Edgren, & Jeffries, 2014). The simulation-based experiences (SBEs) used as a substitute for traditional clinical included medium- or high-fidelity manikins, standardized patients, role-playing, skills stations, and computer-based critical thinking simulations. These SBEs align with the International Nursing Association for Clinical Simulation and Learning (INACSL) Standards Committee's (2016a) definition of simulation: “structured activities that represent actual or potential situations in practice to improve knowledge, skills and/or attitudes by providing learners with opportunities to analyze and respond to realistic conditions” (p. 44).

A key data collection instrument used in the NCSBN study was the Clinical Learning Environment Comparison Survey (CLECS). It compares how well student learning needs are met in the traditional clinical and simulation learning environments (Hayden et al., 2014; Leighton, 2015). Data from the CLECS showed that study participants who received 10% to 25% of their clinical education through simulation rated the traditional clinical environment as meeting their learning needs. Participants who had up to 50% of their clinical education substituted with simulation rated this environment as meeting their learning needs. The CLECS authors modified the instrument to incorporate screen-based learning environments following the onset of COVID-19 (Leighton, 2021). This new version of the instrument, the CLECS 2.0, provides the opportunity to expand existing knowledge of how virtual and/or screen-based simulation platforms meet pre-licensure nursing students' clinical learning needs (Leighton, 2021). For this paper, the three learning environment variables were identified as screen-based simulation platforms, manikin-based simulation, and traditional clinical experiences. VS was the term used for screen-based simulation platforms. This research sought to understand students' perceptions of VS in meeting their learning needs when compared to manikin-based simulation and traditional clinical experiences. The research questions were:

  1. Are there differences among pre-licensure nursing students when grouped by degree program of study (traditional BSN, fast-track BSN, and Master Entry) in perceptions of clinical learning needs being met in VS, manikin-based simulation, or traditional clinical learning environments?

  2. Are there differences among pre-licensure nursing students when grouped by the level/term where they were placed within their nursing program (Junior I, Junior II, Senior I, and Senior II) in perceptions of clinical learning needs being met in VS, manikin-based simulation, or traditional clinical learning environments?

Theoretical framework

The NLN Jeffries Simulation Theory has seven constructs (Jeffries, Rodgers, & Adamson, 2016): context, background, design, simulation experience, facilitator, educational strategies, and outcomes. There is a dynamic interaction among the constructs during the simulation experience that contributes to the simulation outcomes. According to Jeffries et al. (2016), outcomes can be focused on the participant, the patient, and the system. Participant outcomes include reaction, learning, and behavior changes. Reaction incorporates satisfaction and self-confidence in learning. This study compared participant reactions to the diverse educational strategies (traditional clinical, manikin-based simulation, and VS).

Literature review

Virtual simulation is similar to manikin-based simulation in that students are given opportunities to respond to realistic situations to develop knowledge, skills, and attitudes. However, VS is delivered on a computer screen with graphical images and text, and the student interacts with the SBE using a keyboard and mouse (Lioce et al., 2020). Learners can complete specific tasks in various potential environments, use information to provide assessment and care, make clinical decisions, and observe the results in action (INACSL Standards Committee, 2016a). Benefits of VS include cost savings to nursing programs (Foronda et al., 2020) and deliberate practice of cognitive, psychomotor, and affective skills (Bayram & Caliskan, 2019; Duff et al., 2016; Mabry et al., 2020).

There have been several studies examining a variety of learner outcomes following VS. The Kirkpatrick Model identifies four levels of evaluation for training programs: reaction, learning, behavior, and results (Kirkpatrick Partners, 2020). For the purposes of this paper, focus is placed on the first two levels of evaluation. The first level measures learner reaction, or the degree to which learners find the training engaging, favorable, or relevant. Learning focuses on the knowledge, skills, and attitudes gained as a result of training. The behavior and results levels focus on how learners apply the learning to impact patient care outcomes. Specific to the reaction level of evaluation, several studies reported that students are satisfied with the VS learning experience (Cobbett & Snelgrove-Clarke, 2016; Duff et al., 2016; Foronda, Fernandez-Burgos, Nadeau, Kelley, & Henry, 2020; Padilha, Machado, Ribeiro, & Ramos, 2018; Powers, 2020).

The second level of evaluation is learning. This level includes learner gains in knowledge, skills, attitude, or confidence (Kirkpatrick Partners, 2020). Bayram and Caliskan (2019) found students significantly improved their tracheal suctioning and peristomal skin care skills after playing a tracheostomy game-based phone app. Additionally, Foronda et al. (2020) found improved skill performance as well as gains in the cognitive and affective domains and in critical thinking. A scoping review of VS encounters by Duff et al. (2016) showed that VS could support diagnostic reasoning either as a stand-alone strategy or combined with manikin-based simulation. Finally, many studies found students had higher levels of self-confidence with problem-solving skills, using the nursing process, managing a deteriorating patient, and practicing in the traditional clinical setting following VS (Bonito, 2019; Cobbett & Snelgrove-Clarke, 2016; Foronda et al., 2020; Mabry et al., 2020; Powers, 2020; Sapiano, Sammut, & Trapani, 2018). Kirkpatrick's Model (Kirkpatrick Partners, 2020) and the NLN Jeffries Simulation Theory (Jeffries et al., 2016) support the purpose of this research study.

A few studies compared student perceptions of learning needs between manikin-based simulation and VS. Powers (2020) found students had higher levels of satisfaction and self-confidence following an in-class unfolding VS when compared to earlier high-fidelity simulations they had completed. Cobbett and Snelgrove-Clarke (2016) compared on-campus VS to manikin-based simulation for third-year undergraduate nursing students. They found no significant differences in knowledge or self-confidence between groups. However, the VS group reported significantly higher levels of anxiety.

Virtual simulation has positively impacted student learning, similar to the findings seen in manikin-based simulation. The COVID-19 lockdowns put an immediate halt to both traditional clinical and manikin-based simulation experiences. Nursing students were thrust into the VS world for 100% of their clinical learning experiences. Three faculty set out to compare student perceptions of meeting their learning needs when VS was used as a substitute for traditional clinical and manikin-based simulation experiences.

Methods

Study design and sample

A retrospective, multi-site, exploratory descriptive study was used. A non-probability convenience sample was used to recruit prelicensure nursing students enrolled in courses at three different academic institutions. Study sites, identified as A (DePaul University), B (Baylor University), and C (Lewis University), represent both public and private institutions located within the Midwestern and Southwestern United States. Study site A is a master's degree entry to nursing practice (MENP) program. Study sites B and C included both traditional and fast-track baccalaureate programs (TBSN and FTBSN). Inclusion criteria were a) enrollment as a prelicensure nursing student, b) practice experiences in all three learning environments, and c) junior or senior standing. Study site A is on a quarter system, and study sites B and C use a semester system. The curricular content for each quarter/semester was noted as similar across the institutions and was delivered over two years. The curricular content for the programs included categories of a) health assessment and fundamentals of nursing, b) medical-surgical and psychiatric nursing, c) obstetric and pediatric nursing, d) community health, and e) transition into practice/immersion courses. Learning environments were defined as follows: a) VS occurred on a computer screen; b) manikin-based simulation occurred in a designated area of a simulation or skills laboratory and utilized a patient simulator, task trainer, or standardized patient; and c) the traditional clinical environment included a faculty-supervised experience that occurred in a healthcare setting such as a hospital, outpatient clinic, long-term care facility, community organization, or patient home.

Data collection instruments and procedures

The demographic survey was investigator-developed. It included questions related to the type of program of study, the student level within the program, participant age, and overall comfort level with technology.

The CLECS 2.0 evaluates the learner's perception of how well their learning needs were met in the traditional clinical, manikin-based simulation, and VS environments (Leighton, 2021). With COVID-19 causing a shift to VS as an alternative to traditional clinical, the original CLEC Survey (Leighton, 2015; Leighton, 2018) was updated to include a category section for the VS environment, referred to as screen-based simulation. The CLECS 2.0 remains a self-reported 27-item survey (Leighton, 2018; Leighton, 2021). Student responses are based on a Likert scale of 1 (not met) to 4 (well met), with a “not applicable” choice if the statement does not apply to any personal experience. For this study, the “not applicable” choice was labeled as “not available.” The tool has six subscales: self-efficacy, teaching-learning dyad, holism, communication, nursing process, and critical thinking. For each subscale, higher scores indicate that clinical needs were met in the identified learning environment. Following factor analysis, Cronbach's alpha values for the subscale items of the original CLEC Survey ranged from 0.826 to 0.913 for manikin-based simulation and from 0.741 to 0.898 for the traditional clinical environment (Leighton, 2015; Leighton, 2018). At present, reliability data for the CLECS 2.0 have not been established (Leighton, 2021). The average administration time is 10 min. The CLECS 2.0 was administered via QualtricsXM. Permission to use the CLECS 2.0 was obtained (Evaluating Healthcare Simulation, 2020).
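To make the scoring logic described above concrete, the sketch below rolls Likert-item responses up into subscale totals, treating the “not available” option as missing data. This is an illustrative example only, not the authors' code; the item-to-subscale mapping and column names are hypothetical, since the actual CLECS 2.0 item assignments are defined by the instrument author (Leighton, 2021).

```python
# Minimal sketch: rolling Likert responses (1 = not met, 4 = well met) into subscale
# scores; "not available" responses are treated as missing. The item-to-subscale
# mapping below is hypothetical and only illustrates the general approach.
import numpy as np
import pandas as pd

SUBSCALE_ITEMS = {                      # hypothetical mapping of items to subscales
    "communication": ["item01", "item02", "item03"],
    "self_efficacy": ["item04", "item05", "item06", "item07"],
}

def score_subscales(responses: pd.DataFrame) -> pd.DataFrame:
    """Sum item ratings per subscale; a subscale score is missing if any item is missing."""
    numeric = responses.replace("not available", np.nan).apply(pd.to_numeric, errors="coerce")
    return pd.DataFrame({
        name: numeric[items].sum(axis=1, min_count=len(items))
        for name, items in SUBSCALE_ITEMS.items()
    })

# Two hypothetical respondents; the second skipped one communication item.
df = pd.DataFrame({
    "item01": [4, 3], "item02": [3, "not available"], "item03": [4, 2],
    "item04": [2, 3], "item05": [3, 3], "item06": [4, 4], "item07": [3, 2],
})
print(score_subscales(df))
```

Higher subscale totals correspond to learning needs being better met in a given environment, which is how the CLECS 2.0 subscales are interpreted above.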

This study was approved as exempt by the Institutional Review Board (IRB) of study site A, with IRB Authorization Agreements completed for study sites B and C. A recruitment email providing information about the study was sent to potential participants, along with a link to the anonymous online QualtricsXM survey hosted on a secured server. The informed consent noted that a) the survey was voluntary, b) no negative consequences would occur if a participant changed their mind after starting the study, c) participants could skip an answer if they wanted to do so, d) withdrawal from the study could occur at any point in time without any impact on student grades or status in the nursing program, and e) completion of the survey indicated consent. The informed consent process also reinforced that the study was anonymous and that the researchers would not know which survey response corresponded to which participant. Internet Protocol addresses were not collected to maintain participant anonymity. As a result, once responses were submitted they could not be removed, even if a participant exited the survey early and requested that their responses be removed. Individual survey links locked once a participant submitted the survey to prevent more than one response per participant. The survey was open from May through July 2020, and reminder emails were sent each week.

Data analysis and results

A total of 762 recruitment emails were sent across all three study sites, with a 13% response rate. A total of 97 participants completed the CLECS 2.0 across study sites A (n = 47), B (n = 28), and C (n = 22). Table 1 summarizes the demographic characteristics of participants. Study site A uses a quarter-based term, whereas study sites B and C use a semester-based term. The researchers mapped out the programmatic content for each term among all three study sites and noted similarities between the quarter- and semester-based systems. For data analysis purposes, the researchers collapsed the quarter-based system participants into the junior or senior program-level categories of the semester-based system (e.g., Quarter I and Quarter II students were allocated to the Junior I level).

Table 1.

Sample demographic characteristics (n = 97).

Characteristic                         n    %
Degree program of study a
  Traditional BSN                      41   42.3
  Accelerated or fast track BSN        9    9.3
  Master entry to nursing practice     45   46.4
Level in program of study a
  Junior I                             10   10.3
  Junior II                            37   38.1
  Senior I                             32   33.0
  Senior II                            16   16.5
Age a
  18–27                                72   74.4
  28–37                                18   18.6
  38–47                                4    4.1
  48 or above                          1    1.0
Comfort with technology a
  Somewhat uncomfortable               1    1.0
  Neutral                              8    8.2
  Somewhat comfortable                 30   30.9
  Very comfortable                     56   57.7

a Missing values from 2 participants.

Data were examined for missing values, and normality testing was conducted. Nominal and ordinal data were examined with frequencies and percentages. Interval/ratio data were analyzed with measures of central tendency. The subscale scores for the CLECS 2.0 demonstrated non-normal distributions; thus, non-parametric statistical analyses were conducted. The internal consistency reliability of the CLECS 2.0 was examined. Cronbach's alpha was acceptable for all CLECS 2.0 subscale items for the traditional and manikin-based simulation environments. The Cronbach's alpha values for the VS subscale items were acceptable for all subscales except communication (0.668) (Table 2). To determine statistical significance, an alpha level of <0.05 was set for data analysis.
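As a brief illustration of the screening steps described above, the sketch below computes Cronbach's alpha for a set of item columns and runs a Shapiro-Wilk normality check on a subscale total. This is not the study's analysis code; the data are simulated and the column names are hypothetical.

```python
# Illustrative sketch: internal consistency (Cronbach's alpha) and a normality check
# on a subscale total, motivating the choice of non-parametric tests.
import numpy as np
import pandas as pd
from scipy import stats

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for item columns (rows = respondents), with listwise deletion."""
    items = items.dropna()
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Simulated Likert data for one subscale (97 respondents, 4 items rated 1-4)
rng = np.random.default_rng(0)
subscale_items = pd.DataFrame(rng.integers(1, 5, size=(97, 4)),
                              columns=["item1", "item2", "item3", "item4"])

alpha = cronbach_alpha(subscale_items)
w_stat, p_value = stats.shapiro(subscale_items.sum(axis=1))   # normality of the subscale total
print(f"Cronbach's alpha = {alpha:.3f}; Shapiro-Wilk p = {p_value:.3f}")
# A Shapiro-Wilk p < .05 would indicate a non-normal distribution, supporting
# the non-parametric approach used in this study.
```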

Table 2.

Internal Reliability with Cronbach's Alpha values for the CLECS 2.0 subscale items by the learning environment.

Subscale item            Traditional clinical environment   Manikin-based simulation environment   Screen-based simulation environment
Communication            0.972                              0.899                                  0.668
Nursing process          0.980                              0.941                                  0.867
Holism                   0.941                              0.842                                  0.771
Critical thinking        0.968                              0.919                                  0.774
Self-efficacy            0.942                              0.876                                  0.719
Teaching-learning dyad   0.959                              0.861                                  0.787
Total for subscales      0.970                              0.946                                  0.901

Research Question 1

A Kruskal-Wallis test was used to test for differences among the three degree programs of study on student perceptions of clinical needs being met in the VS, manikin-based simulation, and traditional clinical environments. Table 3 summarizes these differences. Findings show statistical significance for the traditional clinical environment meeting students' learning needs, when grouped by degree program of study, on five of the six CLECS 2.0 subscale items: communication, nursing process, holism, critical thinking, and self-efficacy. There was no statistical significance in any subscale item for manikin-based simulation or VS when grouped by degree program of study. Mann-Whitney U tests were conducted to determine which degree program group or groups demonstrated significant differences; for this analysis, a Bonferroni adjustment was calculated, with a new alpha level of less than 0.017 recognized as the level of significance. The MENP group perceived that clinical learning needs were better met in the traditional clinical environment in all subscale items except the teaching-learning dyad (communication: Md = 8.0, p = .001, z = −3.68; nursing process: Md = 12.0, p = .012, z = −2.52; holism: Md = 16.0, p = .008, z = −2.66; self-efficacy: Md = 9.0, p = .012, z = −2.50) as compared to the TBSN group (communication: Md = 4.0; nursing process: Md = 8.0; holism: Md = 10.0; self-efficacy: Md = 6.0). No other group differences were noted as significant. Refer to Table 2 for the Cronbach's alpha values.
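The sketch below illustrates, in general terms, the omnibus-plus-post-hoc workflow described above: a Kruskal-Wallis test across the three degree programs followed by pairwise Mann-Whitney U tests with a Bonferroni-adjusted alpha (0.05/3 ≈ 0.017). It is not the authors' analysis code; the data are simulated, and only the group sizes mirror those reported in Table 3.

```python
# Hedged sketch of a Kruskal-Wallis omnibus test with Bonferroni-adjusted
# Mann-Whitney U follow-up comparisons (simulated data, not study data).
from itertools import combinations
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
groups = {                                  # simulated subscale scores per degree program
    "TBSN": rng.integers(4, 17, size=40),
    "FTBSN": rng.integers(4, 17, size=8),
    "MENP": rng.integers(4, 17, size=43),
}

h_stat, p_omnibus = stats.kruskal(*groups.values())
print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_omnibus:.3f}")

if p_omnibus < 0.05:                        # follow up only if the omnibus test is significant
    pairs = list(combinations(groups, 2))
    alpha_adj = 0.05 / len(pairs)           # Bonferroni: 0.05 / 3 comparisons ≈ 0.017
    for a, b in pairs:
        u_stat, p_pair = stats.mannwhitneyu(groups[a], groups[b], alternative="two-sided")
        verdict = "significant" if p_pair < alpha_adj else "not significant"
        print(f"{a} vs {b}: U = {u_stat:.1f}, p = {p_pair:.3f} ({verdict})")
```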

Table 3.

Comparisons of the nursing student groups by degree program of study and learning environment.

Values are mean ranks; X2 is the Kruskal-Wallis test statistic.

CLECS 2.0 subscale         TBSN a    FTBSN a   MENP a    X2      p-Value

Traditional clinical (TBSN n = 40; FTBSN n = 8; MENP n = 43)
Communication              34.94     45.50     56.38     14.3    0.001⁎⁎
Nursing process            40.14     33.00     54.90     9.11    0.010⁎⁎
Holism                     38.28     37.78     53.86     8.23    0.016⁎⁎
Critical thinking          40.64     38.17     54.59     7.18    0.028⁎⁎
Self-efficacy              39.24     39.06     53.93     7.19    0.027⁎⁎
Teaching-learning dyad     44.08     34.44     52.23     4.33    0.115

Manikin-based simulation (TBSN n = 37; FTBSN n = 8; MENP n = 40)
Communication              40.39     46.31     44.75     0.77    0.682
Nursing process            42.35     39.19     46.37     0.82    0.662
Holism                     39.01     45.56     47.19     2.14    0.640
Critical thinking          41.64     39.39     48.12     1.75    0.420
Self-efficacy              40.46     36.83     48.77     2.94    0.230
Teaching-learning dyad     43.92     38.11     46.38     0.83    0.661

Screen-based simulation (TBSN n = 37; FTBSN n = 8; MENP n = 40)
Communication              44.11     56.63     39.25     3.51    0.173
Nursing process            42.76     46.56     42.41     0.21    0.899
Holism                     40.73     47.78     45.10     0.89    0.640
Critical thinking          45.35     51.94     42.15     1.91    0.551
Self-efficacy              47.76     42.89     41.98     1.060   0.589
Teaching-learning dyad     40.49     39.33     48.01     2.08    0.354

a Traditional BSN Program (TBSN), Fast Track BSN (FTBSN), Master Entry Nursing Program (MENP).

⁎⁎ p value set at <0.05 for level of significance.

Research Question 2

A Kruskal-Wallis test was used to test for differences among the four student groups by level/term within the program (Junior I, Junior II, Senior I, or Senior II) on perceptions of clinical needs being met. Table 4 summarizes these comparisons. Findings show statistical significance for the traditional clinical environment meeting student learning needs in all subscale items when grouped by level/term within the program. Subscale items demonstrating significance for manikin-based simulation included critical thinking and the teaching-learning dyad. Subscale items demonstrating significance for VS included nursing process, critical thinking, self-efficacy, and the teaching-learning dyad.

Table 4.

Comparisons of the nursing student groups by level/term within the program and learning environment.

Values are mean ranks; X2 is the Kruskal-Wallis test statistic.

CLECS 2.0 subscale         Junior I   Junior II   Senior I   Senior II   X2      p-Value

Traditional clinical (Junior I n = 10; Junior II n = 36; Senior I n = 29; Senior II n = 16)
Communication              71.55      44.83       38.28      46.66       12.43   0.006
Nursing process            69.80      45.40       39.29      18.31       10.23   0.016
Holism                     72.40      43.63       39.45      44.03       12.70   0.005
Critical thinking          75.60      47.60       39.82      41.69       15.15   0.002
Self-efficacy              70.10      45.77       40.80      41.19       10.25   0.017
Teaching-learning dyad     74.45      45.81       42.52      41.22       12.77   0.005

Manikin-based simulation (Junior I n = 9; Junior II n = 35; Senior I n = 26; Senior II n = 15)
Communication              56.11      46.43       37.17      42.33       6.61    0.085
Nursing process            57.39      45.84       34.98      47.66       6.60    0.087
Holism                     57.00      44.78       38.35      41.88       3.96    0.26
Critical thinking          63.89      48.00       32.79      46.44       12.41   0.006
Self-efficacy              51.33      48.53       34.43      46.90       6.16    0.104
Teaching-learning dyad     50.89      48.96       32.27      52.56       9.88    0.020

Screen-based simulation (Junior I n = 9; Junior II n = 35; Senior I n = 26; Senior II n = 15)
Communication              51.67      47.11       37.52      37.70       4.14    0.25
Nursing process            52.11      48.04       32.36      46.18       8.13    0.043
Holism                     52.94      47.01       36.16      43.57       4.43    0.22
Critical thinking          63.78      49.74       33.13      42.09       12.63   0.006
Self-efficacy              58.33      51.44       32.59      42.38       11.56   0.009
Teaching-learning dyad     56.61      49.29       31.13      48.19       11.52   0.009

p-Value set at <0.05 for the level of significance.

Mann-Whitney U tests were conducted to determine which specific level/term groups demonstrated significant differences; for this analysis, a Bonferroni adjustment was calculated, with a new alpha level of less than 0.013 recognized as the level of significance. Table 5 provides specific comparison data for participants in the Junior I, Junior II, Senior I, and Senior II groups. Overall, findings show the Junior I group perceived that clinical learning needs were better met in the traditional clinical environment in four of the six subscale items when compared to the Junior II group. The Junior I group also perceived that clinical learning needs were better met in the manikin-based and VS environments for the subscale items of critical thinking and self-efficacy when compared to the Senior I group. Junior I students also perceived that clinical learning needs were better met in the traditional clinical environment in all subscale items when compared to the Senior I and Senior II groups. For the Junior II group, clinical learning needs were better met in VS for self-efficacy and the teaching-learning dyad, and in manikin-based simulation for the teaching-learning dyad, when compared to the Senior I group. The Senior II group perceived that clinical learning needs were better met in VS and manikin-based simulation for the teaching-learning dyad when compared to the Senior I group. No other group differences, including for the Senior I group, were noted as significant.

Table 5.

Comparisons of the nursing students' perceived learning needs when grouped by specific level/term within the degree program of study.

Traditional clinical: Junior I (n = 10) vs. Junior II (n = 36)
Subscale                       Junior I Md   Junior II Md   Z       p-Value
Communication                  20.0          5.50           −2.62   0.010
Holism                         30.0          12.0           −2.82   0.005
Critical thinking              10.0          3.0            −2.80   0.006
Teaching-learning dyad         25.0          6.0            −2.79   0.007

Manikin-based and screen-based simulation: Junior I (n = 9) vs. Senior I (n = 26)
Subscale                       Junior I      Senior I       Z       p-Value
Critical thinking              8.0           3.0            −3.01   0.002
Self-efficacy                  15.0          10.5           −2.50   0.012

Manikin-based (MBS) and screen-based (SBS) simulation: Junior II (n = 35) vs. Senior I (n = 26)
Subscale                       Junior II     Senior I       Z       p-Value
Self-efficacy                  14.0          10.5           −2.67   0.006
Teaching-learning dyad (MBS)   10            6.0            −2.57   0.010
Teaching-learning dyad (SBS)   15.0          9.50           −2.69   0.007

Manikin-based (MBS) and screen-based (SBS) simulation: Senior II (n = 15) vs. Senior I (n = 26)
Subscale                       Senior II     Senior I       Z       p-Value
Teaching-learning dyad (MBS)   9.50          6.00           −2.87   0.004
Teaching-learning dyad (SBS)   14.5          9.50           −2.53   0.011

Mann-Whitney U with Bonferroni adjustment; p-value set at <0.013 for the level of significance.

Discussion

The coronavirus pandemic necessitated immediate academic and clinical closures that impacted nursing schools across the globe. This multisite research study sought to expand knowledge of students' perceptions of VS experiences meeting their learning needs when compared to manikin-based simulation and traditional clinical environments during the pandemic. Focus was placed on the type of degree program and the level/term students were in during their prelicensure program.

Overall, demographic data showed that most participants (88.6%) were comfortable with technology; only 9.2% of students reported a neutral or uncomfortable level with technology. Padilha et al. (2018) showed a moderate correlation between students' perceived ease of use and usefulness of a technologically innovative virtual simulator and students' intention to use the innovation. In contrast, Cobbett and Snelgrove-Clarke (2016) found anxiety levels were significantly higher in the VS group than in the manikin-based group for third-year nursing students. Our participants' comfort level with technology suggests students were more prepared and thus more likely to be motivated to use VS. Alternatively, those participants who reported an uncomfortable level with technology might have experienced anxiety when transitioning to VS. This information provides context for nurse educators and administrators when considering the use of VS.

Our findings are consistent with the NLN Jeffries Simulation Theory (Jeffries et al., 2016), whereby the educational strategies of the simulation resulted in participant outcomes. In our study, these outcomes were the learners' reactions as they perceived their learning needs being met. When all students were grouped by degree program of study, the traditional clinical environment prevailed as the gold standard, as students perceived gains in knowledge; practice of communication, nursing process, holistic care, and critical thinking skills; and development of self-efficacy. The only subscale not perceived as being better met in the traditional clinical environment was the teaching-learning dyad. This subscale consists of content focused on an instructor's availability, feeling supported by an instructor or peers, feeling challenged, and receiving immediate feedback on performance (Leighton, 2018; Leighton, 2021). Duff et al. (2016) noted that participants find VS favorable due to the immediate and timely feedback received and the ability to practice diagnostic reasoning skills in a safe environment. This immediate and timely feedback might not consistently occur within the traditional clinical environment.

When the data were examined for differences among the degree programs of study, the MENP students felt the traditional clinical environment met their needs better than did the students enrolled in the TBSN and FTBSN programs. Virtual simulation was offered for the first time to the MENP students, and we speculate this could have impacted students' reactions to VS. Although the literature shows gains in knowledge, skills, and attitudes for VS, the participants in these studies, including the landmark NCSBN study, were undergraduate pre-licensure students (Cobbett & Snelgrove-Clarke, 2016; Foronda et al., 2018; Hayden et al., 2014; Mabry et al., 2020; Powers, 2020). This demonstrates that a knowledge gap exists concerning the use of VS in MENP programs, which has implications for nursing programs with graduate entry. MENP programs should consider focusing on student practice experiences within the traditional clinical environment until further research is completed for these programs. Findings suggest that the manikin-based and VS environments meet TBSN and FTBSN students' learning needs similarly to the traditional learning environment. As seen in the literature, learners in these programs might find VS engaging and favorable as a learning format that provides a means to acquire knowledge (Foronda et al., 2020; Powers, 2020).

When all student participants were grouped by level/term within the nursing program, significant findings were noted across the traditional and simulation environments for critical thinking and the teaching-learning dyad (Table 4). Our study findings are consistent with the current VS literature related to student satisfaction in learning with VS (Duff et al., 2016; Foronda et al., 2020; Padilha et al., 2018). Additionally, Kirkpatrick Partners (2020) notes how gains in knowledge and skills occur when students are satisfied with the learning. Spanning from Junior I to Senior II, student participants felt the teaching-learning dyad met their learning needs in both manikin-based simulation and VS. These findings support the continued use of manikin-based simulation and the consideration of VS to teach students at each level/term within a nursing program. The critical thinking subscale consists of two items that relate to students anticipating and recognizing a change in a patient's clinical status and acting prior to or when a change occurs (Leighton, 2015). Critical thinking is traditionally viewed as a cognitive process used to analyze information and is a crucial skill for clinical reasoning (Victor-Chmil, 2013). Findings from this study support the use of VS to meet learning needs related to critical thinking. The scoping review published by Duff et al. (2016) revealed that VS is a suitable learning methodology that can positively impact diagnostic reasoning skills either as a standalone simulation experience or combined with manikin-based simulation. Furthermore, students felt VS was more realistic than standardized patients/manikin-based simulation due to the VS software's ability to create accurate physical findings.

The VS environment also provided students at all learning levels/terms (Junior I to Senior II) with a mechanism to learn and/or practice the nursing process and develop self-efficacy (Table 4). The nursing process subscale consists of six items ranging from understanding a patient's pathophysiology and treatment plan to prioritizing and implementing a patient care plan. The self-efficacy subscale consists of four items ranging from reacting calmly to a patient's change in condition to feeling confident in the ability to make decisions (Leighton, 2021). According to Foronda et al. (2018), students found VS relevant to a nurse's role and effective and realistic in enhancing learning. Additionally, about half of the students felt VS was most useful as a clinical make-up, and 12% thought it would be helpful in place of manikin-based simulation. Research conducted by Sapiano et al. (2018) noted how VS supported gains in knowledge and student performance when providing care to a deteriorating client. Virtual simulation experiences appear to be an effective modality for educators to incorporate into the curriculum: students can learn skills, gain self-confidence, and develop the diagnostic reasoning used during the nursing process (Bayram & Caliskan, 2019; Duff et al., 2016). However, findings suggest that faculty need to consider the level/term students are in when deciding which simulation learning environment will best meet the learner outcomes. Study findings also provide additional support for educators to identify the correct educational modality for the curricular content, scenario objectives, and learner level (Duff et al., 2016; INACSL Standards Committee, 2016b).

Limitations

There are several limitations to this study. The study sites did not all use the same VS experiences, and two study sites used a combination of different VS platforms. Study sites A and C transitioned their clinical learning experiences to existing vendor educational platforms. Study site B transitioned to a variety of virtual learning platforms, including free virtual resources offered by simulation societies during COVID-19 and the National League for Nursing (2014) Advancing Care Excellence case study series. The low response rate among all three study sites reduces the generalizability of the findings. It is also noted that the sample size can impact the reliability data; however, these data were included with the intent of sharing findings for the CLECS 2.0 instrument. COVID-19 caused one of the study sites to implement VS for the first time, whereas the other study sites were already using VS, albeit in small doses. The quick implementation of VS could have impacted student perceptions, as it was a completely new learning modality for some. Some students chose “not available,” which reduced the number of responses for the manikin-based and VS learning experiences and contributed to missing data.

Implications for nursing education

Although this study's findings show traditional clinical as the “gold standard” in meeting students' learning needs, both manikin-based and VS experiences did allow students to practice holism, teaching-learning, communication, nursing process, and critical thinking skills, and to develop self-efficacy. The explosion of VS in nursing education following COVID-19 will have lasting implications for how nursing programs use simulation, and VS will most likely continue to have a significant presence in nursing education. Faculty need to be intentional when integrating manikin-based or VS experiences into the curriculum. For example, the Junior I students perceived that the traditional clinical learning environment met their overall learning needs, but the manikin-based and VS environments met their learning needs for critical thinking and self-efficacy when compared to the Senior I students. Additionally, some of the CLECS 2.0 subscale items fared better in VS. The type of program, the level/term of the student, and the objectives of the simulation are important factors to consider, and the results of this study can be used to assist nursing programs in the decision-making process.

Future research

More studies are needed using the CLECS 2.0 to add to the current evidence on instrument reliability for use with VS across all types of nursing programs. Research should also be expanded to explore the specific CLECS 2.0 learning subscales in relation to VS compared to traditional clinical and manikin-based simulation. Finally, more research is needed to shift from measuring learner perceptions to measuring learner knowledge and behavior changes following VS.

Conclusion

Nurse educators and nursing students were thrust into adopting and utilizing VS as a replacement for clinical experiences. With limited evidence on the use of VS in nursing education, this was an opportune time to examine students' perceptions of whether VS met their learning needs when compared to traditional clinical and manikin-based simulation. All three learning environments provide support and feedback necessary for student learning. Findings suggest that learners perceived they can gain knowledge, skills, and confidence from virtual simulation programs.

Funding

The authors did not receive any financial support from funding agencies in the public, commercial, or not-for-profit sectors.

Declaration of competing interest

The authors report no potential or actual conflict of interest related to this research.

References

  1. Bayram S.B., Caliskan N. Effect of a game-based virtual reality phone application on tracheostomy care education for nursing students: A randomized controlled trial. Nurse Education Today. 2019;79:25–31. doi: 10.1016/j.nedt.2019.05.010.
  2. Bonito S.R. The usefulness of case studies in a virtual clinical environment (VCE) multimedia courseware in nursing. The Journal of Medical Investigation. 2019;66(1.2):38–41. doi: 10.2152/jmi.66.38.
  3. Centers for Disease Control and Prevention (CDC). COVIDView summary ending August 15, 2020. 2020. https://www.cdc.gov/coronavirus/2019-ncov/covid-data/covidview/past-reports/08212020.html
  4. Cobbett S., Snelgrove-Clarke E. Virtual versus face-to-face clinical simulation in relation to student knowledge, anxiety, and self-confidence in maternal-newborn nursing: A randomized controlled trial. Nurse Education Today. 2016;45:179–184. doi: 10.1016/j.nedt.2016.08.004.
  5. Dewart G., Corcoran L., Thirsk L., Petrovic K. Nursing education in a pandemic: Academic challenges in response to COVID-19. Nurse Education Today. 2020;92:104471. doi: 10.1016/j.nedt.2020.104471.
  6. Duff E., Miller L., Bruce J. Online virtual simulation and diagnostic reasoning: A scoping review. Clinical Simulation in Nursing. 2016;12(9):377–384. doi: 10.1016/j.ecns.2016.04.001.
  7. Evaluating Healthcare Simulation. Instrument request form. 2020. https://form.jotform.com/73267412745156
  8. Foronda C.L., Fernandez-Burgos M., Nadeau C., Kelley C.N., Henry M.N. Virtual simulation in nursing education: A systematic review spanning 1996–2018. Simulation in Healthcare. 2020;15(1):46–54. doi: 10.1097/SIH.0000000000000411.
  9. Foronda C.L., Swoboda S.M., Henry M.N., Kamau E., Sullivan N., Hudson K.W. Student preferences and perceptions of learning from vSIM for Nursing. Nurse Education in Practice. 2018;33:27–32. doi: 10.1016/j.nepr.2018.08.003.
  10. Hayden J.K., Smiley R.A., Alexander M., Kardong-Edgren S., Jeffries P.R. The NCSBN national simulation study: A longitudinal, randomized, controlled study replacing clinical hours with simulation in prelicensure nursing education. Journal of Nursing Regulation. 2014;5(2):C1–S64. https://www.ncsbn.org/JNR_Simulation_Supplement.pdf
  11. INACSL Standards Committee. INACSL standards of best practice: SimulationSM glossary. Clinical Simulation in Nursing. 2016, December;12(S):S5–S12. doi: 10.1016/j.ecns.2016.09.0053.
  12. INACSL Standards Committee. INACSL standards of best practice: SimulationSM: Simulation design. Clinical Simulation in Nursing. 2016, December;12(S):S5–S12. doi: 10.1016/j.ecns.2016.09.005.
  13. Jeffries P.R., Rodgers B., Adamson K.A. NLN Jeffries simulation theory: Brief narrative description. In: Jeffries P.R., editor. The NLN Jeffries simulation theory. Wolters Kluwer; 2016. pp. 39–42.
  14. Kirkpatrick Partners. The New World Kirkpatrick Model. 2020. https://www.kirkpatrickpartners.com/Our-Philosophy/The-New-World-Kirkpatrick-Model
  15. Leigh J., Vasilica C., Dron R., Gawthorpe D., Burns E., Kennedy S., … Croughan C. Redefining undergraduate nurse teaching during the coronavirus pandemic: Use of digital technologies. British Journal of Nursing (Mark Allen Publishing). 2020;29(10):566–569. doi: 10.12968/bjon.2020.29.10.566.
  16. Leighton K. Development of the clinical learning environment comparison survey. Clinical Simulation in Nursing. 2015;11(1):44–51. doi: 10.1016/j.ecns.2014.11.002.
  17. Leighton K. Clinical learning environment comparison survey. 2018. https://sites.google.com/view/evaluatinghealthcaresimulation/clecs
  18. Leighton K. Clinical learning environment comparison survey 2.0. Evaluating Healthcare Simulation website. 2021. sim-eval.org
  19. Lioce L., et al. Healthcare simulation dictionary. 2nd ed. Rockville, MD: Agency for Healthcare Research and Quality; 2020. (AHRQ publication no. 20-0019).
  20. Mabry J., Lee E., Roberts T., Garrett R. Virtual simulation to increase self-efficacy through deliberate practice. Nurse Educator. 2020;45(4):202–205. doi: 10.1097/NNE.0000000000000758.
  21. National League for Nursing. Instructor’s toolkit for Judy Jones simulations. ACE.Z: Advancing Care Excellence for Alzheimer’s Patients and Caregivers series. National League for Nursing; 2014.
  22. Padilha J.M., Machado P.P., Ribeiro A.L., Ramos J.L. Clinical virtual simulation in nursing education. Clinical Simulation in Nursing. 2018;15:13–18. doi: 10.1016/j.ecns.2017.09.005.
  23. Powers K. Bringing simulation to the classroom using an unfolding video patient scenario: A quasi-experimental study to examine student satisfaction, self-confidence, and perceptions of simulation design. Nurse Education Today. 2020;86:104324. doi: 10.1016/j.nedt.2019.104324.
  24. Sapiano A.B., Sammut R., Trapani J. The effectiveness of virtual simulation in improving student nurses’ knowledge and performance during patient deterioration: A pre and post test design. Nurse Education Today. 2018;62:128–133. doi: 10.1016/j.nedt.2017.12.025.
  25. Smiley R.A. Survey of simulation use in prelicensure nursing programs: Changes and advancement, 2010–2017. Journal of Nursing Regulation. 2019;9(4):48–61.
  26. Victor-Chmil J. Critical thinking versus clinical reasoning versus clinical judgment: Differential diagnosis. Nurse Educator. 2013;38(1):34–36. doi: 10.1097/NNE.0b013e318276dfbe.

