Abstract
Objectives
The skill of the debriefer is known to be the strongest independent predictor of the quality of simulation encounters, yet educators feel underprepared for this role. The aim of this review was to identify frameworks used for debriefing team-based simulations and measures used to assess debriefing quality.
Methods
We systematically searched PubMed, CINAHL, MedLine and Embase databases for simulation studies that evaluated a debriefing framework. Two reviewers evaluated study quality and retrieved information regarding study methods, debriefing framework, outcome measures and debriefing quality.
Results
A total of 676 papers published between January 2003 and December 2017 were identified using the search protocol. Following screening of abstracts, 37 full-text articles were assessed for eligibility, 26 studies met inclusion criteria for quality appraisal and 18 achieved a sufficiently high-quality score for inclusion in the evidence synthesis. A debriefing framework was used in all studies, mostly tailored to the study. Impact of the debrief was measured using satisfaction surveys (n=11) and/or participant performance (n=18). Three themes emerged from the data synthesis: selection and training of facilitators, debrief model and debrief assessment. There was little commonality across studies in terms of participants, experience of faculty and measures used.
Conclusions
A range of debriefing frameworks were used in these studies. Some key aspects of debrief for team-based simulation, such as facilitator training, the inclusion of a reaction phase and the impact of learner characteristics on debrief outcomes, have no or limited evidence and provide opportunities for future research particularly with interprofessional groups.
Keywords: simulation, debriefing, validity, frameworks, interprofessional
Background
In simulation learning, debriefing (‘a discussion between two or more individuals in which aspects of a performance are explored and analysed with the aim of gaining insights that impact the quality of future clinical practice’1) is key, and the skill of the debriefer is the strongest independent predictor of the overall quality of simulation encounters.2 In a conceptual paper, Haji et al 3 argued for a distinction between simulation-based and simulation-augmented medical education, with the latter integrating simulation learning with other educational experiences. This approach also places simulation in the mainstream, rather than treating it as a special event for the privileged few. While simulation-based education is laudable, simulation is an expensive resource, especially when used for small group learning. We therefore need to ensure that learning opportunities are optimised when simulation is used.
Effective interprofessional working is important for standards of patient care and is thought to be highly influenced by the attitudes of healthcare professionals.4–6 However, a report from the Centre for the Advancement of Interprofessional Education highlights that many educators feel underprepared in interprofessional, as compared with uniprofessional, settings and recommends that all facilitators receive comprehensive orientation, preparation and ongoing support for interprofessional education (IPE).7 Interprofessional team-based simulation allows learning opportunities within the correct educational and professional context8 and has been shown to improve communication skills and understanding of professional roles.7 However, debriefing interprofessional groups brings its own unique challenges due to learner differences in background, experience and professional identity,9 requiring faculty to be trained appropriately to debrief interprofessional issues in an effective manner.8
Dreifuerst10 used concept analysis methods to identify the defining attributes of debriefing as it relates to simulation, to construct model, borderline and contrary cases, and to distinguish between unstructured, structured-for-critique and structured-for-reflection approaches to debrief. This is a useful addition to our understanding of debriefing but has yet to be subjected to empirical testing. Previous systematic reviews have focused on the advantages of debrief over no debrief and on whether the use of video improves the debrief1 11; however, there is a lack of research exploring the evidence base underpinning decisions about debriefing. The main aims of this study were to identify: (1) frameworks used for debriefing interprofessional and uniprofessional team-based simulations, (2) metrics that have been developed to assess the quality of debriefing and (3) evidence gaps for debrief decisions. The term ‘debriefing framework’ is used to refer to the structure used for the debriefing discussion.
Methods
Design
A systematic review was conducted following the procedures set out by the Centre for Reviews and Dissemination,12 whereby specific search terms are used in database searching and papers are selected based on explicit inclusion and exclusion criteria. We also undertook hand searching of references and sought to identify records through other sources (eg, Google Scholar) in an attempt to include as many relevant papers as possible in the review. We aimed to identify:
Debriefing frameworks used for team-based (uniprofessional or interprofessional) simulation.
Measures to assess the quality of debriefing.
Search strategy
Four electronic databases were searched in December 2017: PubMed, CINAHL, MedLine and Embase. All peer-reviewed articles published in English between January 2003 and December 2017 were eligible for inclusion. This 15-year window was chosen for pragmatic reasons and because no relevant papers providing empirical data on team-based debriefing were identified before 2003. Our preliminary searches identified many papers that were not relevant: initial searches returned excessive numbers of papers with either ‘framework’ or ‘method’ in the title or abstract, so we refined the search terms and ran a further search using the keywords: ‘Simulation’ AND (‘Debrief* OR Feedback’) AND ‘Evaluation’ AND (‘Quality OR Framework OR Method’).
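To make the Boolean logic of the final search explicit, the short sketch below assembles the keyword combination described above. It is illustrative only: field tags, truncation handling and interface syntax differ between PubMed, CINAHL, MedLine and Embase, and the exact strings submitted to each database are not reproduced here.

```python
# Illustrative sketch: compose the review's final keyword combination as a
# single Boolean string. Database-specific syntax (field tags, truncation)
# is assumed to be applied separately for each of the four databases.
concept_blocks = [
    ["Simulation"],
    ["Debrief*", "Feedback"],
    ["Evaluation"],
    ["Quality", "Framework", "Method"],
]

def or_block(terms):
    # Join synonyms for one concept with OR and wrap them in parentheses.
    return "(" + " OR ".join(terms) + ")"

query = " AND ".join(or_block(block) for block in concept_blocks)
print(query)
# (Simulation) AND (Debrief* OR Feedback) AND (Evaluation) AND (Quality OR Framework OR Method)
```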
Empirical studies and framework development studies were included in the review, provided some form of outcome measure was used. Outcome measures assessed the quality of the debriefing and/or the performance of participants. All included studies used team-based simulation and examined technical and non-technical skills. Studies that were not published in English, focused on individual debriefing or described only the quality of the simulation (and not the quality or outcome of the debrief) were excluded.
Quality appraisal
Papers were assessed using the Kmet et al 13 quality appraisal tool. The initial appraisal was conducted by two of the authors, with a third author meeting to discuss any differences in scoring (RE, TG, AO and SD); any discrepancies in scoring were discussed until consensus was reached.
Results
A total of 676 citations were screened; the Preferred Reporting Items for Systematic Reviews and Meta-Analyses flow chart summarises the review process (figure 1). Abstracts were reviewed for 253 papers; 41 (6.1% of the citations identified) were found to meet the study criteria after review of titles and abstracts by two authors (RE and AO or RE and SD). There were no disagreements on inclusion of papers. These 41 full-text articles were then assessed for eligibility; 11 were excluded (including concept analysis, application of a theoretical framework and commentary papers).
Figure 1.
PRISMA flow chart. PRISMA, Preferred Reporting Items for Systematic Reviews and Meta-Analyses.
A total of 26 papers met the full inclusion criteria and were appraised. Eight papers were excluded from the data synthesis due to a low quality appraisal score (<0.60); excluding low-scoring papers is common in narrative reviews, ensuring that the synthesis draws on papers of suitable and comparable quality and that recommendations for future practice are not based on low-quality evidence.13 Tables 1 and 2 show the quality appraisal scores for the 26 papers reviewed.
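For readers who wish to reproduce the summary scores in tables 1 and 2, the sketch below implements the scoring convention of the Kmet checklist13 as we understand it (an assumption based on the published tool rather than on detail reported in this review): each criterion is scored 2 (yes), 1 (partial) or 0 (no), ‘not applicable’ criteria are excluded, and the summary score is the total awarded divided by the maximum possible score.

```python
def kmet_summary(ratings):
    """Summary score for one paper: sum of applicable ratings / maximum possible.

    ratings: list of 0, 1, 2 or None (None = criterion rated not applicable).
    """
    applicable = [r for r in ratings if r is not None]
    if not applicable:
        raise ValueError("no applicable criteria to score")
    return sum(applicable) / (2 * len(applicable))

# Worked example using the Auerbach et al column of table 1
# (one N/A item, so the denominator is 2 x 13 = 26):
auerbach = [2, 2, 1, 1, 0, 0, None, 1, 1, 1, 1, 0, 1, 1]
print(round(kmet_summary(auerbach), 2))   # 0.46, matching the summary score in table 1
print(kmet_summary([2] * 14))             # 1.0, as for Boet et al
```

On this convention, the eight papers with summary scores below 0.60 in table 1 are those excluded from the synthesis.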
Table 1.
Quality appraisal scores for quantitative studies
Papers | Auerbach et al 41 | Boet et al 20 | Bond et al 14 | Brett-Fleegler et al 26 | Cheng et al 42 | Cooper et al 43 | Forneris et al 24 | Geis et al 22 | Grant et al 30 |
Question/objective sufficiently described? | 2 | 2 | 2 | 2 | 1 | 1 | 1 | 2 | 1 |
Study design evident and appropriate? | 2 | 2 | 2 | 2 | 1 | 2 | 2 | 2 | 2 |
Method of subject/comparison group selection or source of information/input variables described and appropriate? | 1 | 2 | 1 | N/A | 1 | 2 | 1 | 1 | 1 |
Subject (and comparison group) characteristics sufficiently described? | 1 | 2 | 0 | N/A | 0 | 1 | 1 | 1 | 1 |
If interventional and random allocation was possible, was it described? | 0 | 2 | 2 | N/A | N/A | 2 | 1 | N/A | 1 |
If interventional and blinding of investigators was possible, was it reported? | 0 | 2 | 2 | N/A | N/A | N/A | N/A | N/A | 2 |
If interventional and blinding of subjects was possible, was it reported? | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A |
Outcome and exposure measure(s) well defined and robust to measurement/misclassification bias? | 1 | 2 | 2 | 1 | 2 | 1 | 2 | 2 | 2 |
Sample size appropriate? | 1 | 2 | 1 | 1 | 0 | 1 | 2 | 1 | 1 |
Analytic methods described/justified and appropriate? | 1 | 2 | 1 | 2 | 0 | 1 | 2 | 2 | 2 |
Some estimate of variance is reported for the main results? | 1 | 2 | 2 | 2 | 0 | 0 | 2 | 2 | 2 |
Controlled for confounding? | 0 | 2 | 0 | 1 | 0 | 1 | 0 | 1 | 0 |
Results reported in sufficient detail? | 1 | 2 | 1 | 2 | 1 | 1 | 2 | 2 | 2 |
Conclusions supported by the results? | 1 | 2 | 1 | 2 | 1 | 1 | 1 | 2 | 2 |
Summary score | 0.46 | 1.00 | 0.65 | 0.83 | 0.32 | 0.58 | 0.71 | 0.82 | 0.73 |
Papers | Hull et al 17 | Kable et al 44 | Kim et al 19 | Kolbe et al 18 | Kuiper et al 45 | Lammers et al 15 | LeFlore and Anderson23 | Morrison and Catanzaro46 |
Question/objective sufficiently described? | 2 | 1 | 2 | 2 | 1 | 2 | 2 | 1 |
Study design evident and appropriate? | 2 | 1 | 2 | 2 | 1 | 2 | 2 | 1 |
Method of subject/comparison group selection or source of information/input variables described and appropriate? | 1 | 1 | 1 | 1 | 1 | 2 | 1 | 1 |
Subject (and comparison group, if applicable) characteristics sufficiently described? | 0 | 0 | 2 | 2 | 1 | 2 | 2 | 0 |
If interventional and random allocation was possible, was it described? | N/A | N/A | 2 | N/A | N/A | N/A | 1 | 1 |
If interventional and blinding of investigators was possible, was it reported? | N/A | N/A | 2 | N/A | N/A | N/A | 2 | N/A |
If interventional and blinding of subjects was possible, was it reported? | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A |
Outcome and (if applicable) exposure measure(s) well defined and robust to measurement/misclassification bias? | 2 | 1 | 2 | 2 | 1 | 2 | 2 | 1 |
Sample size appropriate? | 2 | 1 | 1 | 1 | 1 | 2 | 1 | 1 |
Analytic methods described/justified and appropriate? | 2 | 2 | 2 | 1 | 2 | 2 | 2 | 1 |
Some estimate of variance is reported for the main results? | 1 | 2 | 2 | 2 | 0 | 2 | 2 | N/A |
Controlled for confounding? | N/A | 0 | 2 | 1 | 1 | 1 | 1 | N/A |
Results reported in sufficient detail? | 2 | 2 | 2 | 2 | 1 | 1 | 2 | 1 |
Conclusions supported by the results? | 2 | 1 | 2 | 2 | 1 | 2 | 2 | 1 |
Summary score | 0.80 | 0.55 | 0.92 | 0.82 | 0.50 | 0.91 | 0.85 | 0.45 |
Papers | Oikawa et al 32 | Reed28 | Savoldelli et al 21 | Smith-Jentsch et al 25 | Van Heukelom et al 27 | West et al 47 | Wetzel et al 48 | Zinns et al 29
Question/objective sufficiently described? | 2 | 1 | 2 | 2 | 2 | 0 | 1 | 1 |
Study design evident and appropriate? | 2 | 2 | 2 | 2 | 2 | 1 | 1 | 2 |
Method of subject/comparison group selection or source of information/input variables described and appropriate? | 1 | 1 | 2 | 2 | 2 | 0 | 1 | 1 |
Subject (and comparison group, if applicable) characteristics sufficiently described? | 0 | 0 | 2 | 1 | 1 | 0 | 1 | 0 |
If interventional and random allocation was possible, was it described? | 1 | 2 | 2 | 1 | 2 | 0 | N/A | N/A |
If interventional and blinding of investigators was possible, was it reported? | N/A | N/A | 2 | 1 | 0 | N/A | N/A | 2 |
If interventional and blinding of subjects was possible, was it reported? | 2 | 2 | 0 | 0 | N/A | N/A | N/A | N/A |
Outcome and (if applicable) exposure measure(s) well defined and robust to measurement/misclassification bias? | 2 | 2 | 2 | 2 | 2 | 1 | 1 | 2 |
Sample size appropriate? | 1 | 1 | 2 | 1 | 2 | 1 | 1 | 1 |
Analytic methods described/justified and appropriate? | 2 | 2 | 2 | 2 | 2 | 1 | 1 | 2 |
Some estimate of variance is reported for the main results? | 2 | 2 | 2 | 2 | 2 | N/A | 0 | 2 |
Controlled for confounding? | 1 | 1 | 2 | 1 | 1 | N/A | N/A | N/A |
Results reported in sufficient detail? | 2 | 2 | 2 | 2 | 2 | 0 | 1 | 1 |
Conclusions supported by the results? | 2 | 2 | 2 | 2 | 1 | 1 | 1 | 1 |
Summary score | 0.77 | 0.77 | 0.93 | 0.75 | 0.81 | 0.25 | 0.45 | 0.68 |
Table 2.
Quality appraisal scores for qualitative studies
Papers | Bond et al 14 | Freeth et al 16 | Lammers et al 15 |
Question/objective sufficiently described? | 2 | 2 | 2 |
Study design evident and appropriate? | 2 | 2 | 2 |
Context for the study clear? | 2 | 2 | 2 |
Connection to a theoretical framework/wider body of knowledge? | 2 | 2 | 1 |
Sampling strategy described, relevant and justified? | 1 | 1 | 1 |
Data collection methods clearly described and systematic? | 2 | 1 | 2 |
Data analysis clearly described and systematic? | 2 | 2 | 1 |
Use of verification procedure(s) to establish credibility? | 2 | 2 | 2 |
Conclusions supported by the results? | 1 | 2 | 2 |
Reflexivity of the account? | 1 | 1 | 2 |
Summary score | 0.85 | 0.85 | 0.85 |
A total of 18 papers were included: 1 qualitative study, 15 quantitative studies and 2 studies containing both qualitative and quantitative components. The quantitative Kmet scores ranged from 65% to 100%; the two mixed methods papers14 15 and the qualitative paper16 scored 85%. A summary of the 18 included studies is provided in table 3.
Table 3.
Summary of studies included in the narrative synthesis
Reference, country | Aim | Study design | Participants and sample | Findings | |
1 | Boet et al,20 Canada | Compare effectiveness of an interprofessional within-team debriefing with instructor-led debriefing on team performance during simulated crisis. | Randomised, controlled, repeated measures design. Teams randomised to within-team or instructor-led debriefing groups. After debriefing, teams managed a different post-test crisis scenario. Sessions were videotaped, and blinded expert examiners used the TEAM scale to assess performance. | n=120 (40 teams made up of 1 anaesthesia trainee, 1 surgical trainee, 1 staff circulating operating room nurse). | Team performance significantly improved from pretest to post-test, regardless of type of debriefing (F1,38=7.93, p=0.008). No significant difference in improvement between within-team or instructor-led debriefing. |
2 | Bond et al,14 USA | To assess learner perception of high-fidelity mannequin-based simulation and debriefing to improve understanding of ‘cognitive dispositions to respond’ (CDRs). | Emergency medicine (EM) residents exposed to two simulations and block-randomised to technical/knowledge debriefing before completing written survey and interview with ethnographer. Four investigators reviewed interview transcripts and qualitatively analysed comments. | n=62 EM residents. | Technical debriefing was better received than cognitive debriefing. Authors theorise that an understanding of CDRs can be facilitated through simulation training. |
3 | Brett-Fleegler et al,26 USA | Examine reliability of Debriefing Assessment for Simulation in Healthcare (DASH) scores in evaluating quality of healthcare simulation debriefings and whether scores demonstrate evidence of validity. | Rater trainees familiarised with DASH before watching, rating and then discussing three separate course introductions and subsequent debriefings. Inter-rater reliability, intraclass correlations and internal consistency were calculated. | n=114 international healthcare educators participated in 4.5-hour web-based interactive DASH rater training sessions (nurses, physicians, other health professionals and masters and PhD educators). | Differences between the ratings of the three standardised debriefings were statistically significant (p<0.001). DASH scores showed evidence of good reliability and preliminary evidence of validity. |
4 | Forneris et al,24 USA | To investigate the impact of Debriefing for Meaningful Learning (DML) on clinical reasoning. | Quasiexperimental pretest and post-test repeated measures design. Teams randomly assigned to DML or usual debriefing. Clinical reasoning was evaluated using the Health Sciences Reasoning Test (HSRT). | n=153 undergraduate (UG) nursing students (teams of 4). | Significant improvement in HSRT mean scores for the intervention group (p=0.03); the change for the control group was non-significant (NS). The change in HSRT mean scores between the intervention and control groups was not significant (p=0.09). |
5 | Freeth et al,16 UK | Examination of participants’ perceptions of the multidisciplinary obstetric simulated emergency scenarios (MOSES) course, designed to enhance non-technical skills (NTS) among obstetric teams and improve patient safety. | Telephone (47) or email (8) interviews with MOSES course participants and facilitators and analysis of video-recorded debriefings. | n=93 (senior midwives n=57, obstetricians n=21, obstetric anaesthetists n=15). | Many participants improved their knowledge and understanding of interprofessional team working, especially communication and leadership in obstetric crisis situations. Participants with some insight into their non-technical skills showed the greatest benefit in learning. Interprofessional simulation is a valuable approach to enhancing non-technical skills. |
6 | Geis et al,22 USA | Define optimal healthcare team roles and responsibilities, identify latent safety threats within the new environment and screen for unintended consequences of proposed solutions. | Prospective pilot investigation using laboratory and in situ simulations totalling 24 critical patient scenarios conducted over four sessions (over 3 months). | n=81 healthcare providers (predominantly nurses, paramedics and physicians). | Mayo High Performing Team Scale (MHPTS) means were calculated for each phase of training. Simulation laboratory teamwork scores showed a mean of 18.1 for the first session and 18.9 for the second session (p=0.68). In situ teamwork scores showed a mean of 12.3 for the first session and 15 for the second session (p=0.25). Overall laboratory mean was 18.5 (SD 2.31) compared with overall in situ mean of 13.7 (SD 4.40), indicating worse teamwork during in situ simulation (p=0.008). |
7 | Grant et al,30 USA | To compare the effectiveness of video-assisted oral debriefing (VAOD) and oral debriefing alone (ODA) on participant behaviour. | Quasiexperimental pretest and post-test design. Teams were randomised to intervention (VAOD) or control (ODA). Behaviours were assessed using an adapted Clinical Simulation Tool. | n=48 UG nursing students: 24 intervention and 24 control (teams of 4 or 5 students). | The VAOD group had a higher mean score (6.62, SD 6.07) than the control group (4.23, SD 4.02), but this did not reach significance (p=0.11). |
8 | Hull et al,17 UK | To explore the value of 360° evaluation of debriefing by examining expert debriefing evaluators’, debriefers’ and learners’ perceptions of the quality of interdisciplinary debriefings. | Cross-sectional observational study. The quality of debriefing was assessed using the validated Objective Structured Assessment of Debriefing framework. | n=278 students, in 41 teams. | Expert debriefing evaluators’ and debriefers’ perceptions of debriefing quality differed significantly; debriefers perceived the quality of debriefing they provided more favourably than expert debriefing evaluators. Learner perceptions of the quality of debriefing differed from both expert evaluators’ and debriefers’ perceptions. |
9 | Kim et al,19 Korea | To compare the educational impact of two postsimulation debriefing methods (focused and corrective feedback (FCF) versus structured and supported debriefing (SSD)) on team dynamics in simulation-based cardiac arrest team training. | A pilot randomised controlled study. Primary outcome: improvement in team dynamics scores between baseline and test simulation. Secondary outcomes: improvements in team clinical performance scores, self-assessed comprehension of and confidence in cardiac arrest management and team dynamics. | n=95 4th-year UG medical students randomly assigned to FCF or SSD; teams of 6. | The SSD team dynamics score post-test was higher than at baseline (baseline: 74.5 (65.9–80.9), post-test: 85.0 (71.9–87.6), p=0.035). Scores for the FCF group did not improve from baseline to post-test. No differences in improvement in team dynamics or team clinical performance scores between the two groups (p=0.328, respectively). |
10 | Kolbe et al,18 Switzerland | To describe the development of an integrated debriefing approach and demonstrate how trainees perceive this approach. | Post-test-only (debriefing quality) and pretest and post-test (psychological safety and leader inclusiveness), no-control group design. Debriefing administered during a simulation-based combined clinical and behavioural skills training day for anaesthesia staff (doctors and nurses). Each trainee participated and observed in four scenarios and also completed a self-report debriefing quality scale. | n=61 (4 senior anaesthetists, 29 residents, 28 nurses) from a teaching hospital in Switzerland participated in 40 debriefings, resulting in 235 evaluations. All attended voluntarily and participated in exchange for credits. | Utility of debriefings was evaluated as highly positive, while pre–post comparisons revealed that psychological safety and leader inclusiveness increased significantly after debriefings. |
11 | Lammers et al,15 USA | To identify causes of errors during a simulated, prehospital paediatric emergency. | Quantitative (cross-sectional, observational) and qualitative research. Crews participated in simulation using own equipment and drugs. Scoring protocol used to identify errors. Debriefing conducted by trained facilitator immediately after the simulated event elicited root causes of active and latent errors. | n=90 (m=67%, f=33%); two-person crews (45 in total) made up of: Emergency Medical Technician (EMT)/paramedic, paramedic/paramedic, paramedic/specialist. | Simulation, followed immediately by facilitated debriefing, uncovered underlying causes of active cognitive, procedural, affective and teamwork errors, latent errors and error-producing conditions in EMS paediatric care. |
12 | LeFlore and Anderson,23 USA | To determine whether self-directed learning with facilitated debriefing during team-simulated clinical scenarios has better outcomes compared with instructor-modelled learning with modified debriefing. | Participants randomised to either the self-directed learning with facilitated debriefing group (group A: seven teams) or instructor-modelled learning with modified debriefing group (group B: six teams). Tools assessed students’ pre/post knowledge (discipline-specific), satisfaction (5-point Likert scale/open-ended questions), technical and team behaviours. | Convenience sample of students; nurse practitioner, registered nurse, social work, respiratory therapy. Thirteen interdisciplinary teams participated, with one student from each discipline per team. | Group B was significantly more satisfied than group A (p=0.01). Group B registered nurses and social worker students were significantly more satisfied than group A (30.0±0.50 vs 26.2±3.0, p = 0.03 and 28.0±2.0 vs 24.0±3.3, p=0.04, respectively). Group B had significantly better scores than group A on 8 of the 11 components of the Technical Evaluation Tool; group B intervened more quickly. Group B had significantly higher scores on 8 of 10 components of the Behavioral Assessment Tool and overall team scores. |
13 | Oikawa et al,32 USA | To determine if learner self-performance assessment (SPA) and team-performance assessment (TPA) were different when simulation-based education (SBE) was supported by self-debriefing (S-DB), compared with traditional facilitator-led debriefing (F-DB). | Prospective, controlled cohort intervention study. Primary outcome measures: SPA and TPA assessed using bespoke global rating scales with subdomains: patient assessment, patient treatment and teamwork. | n=57 postgraduate year 1 medical interns randomised to 9 F-DB and 10 S-DB teams. Teams completed four sequential scenarios. | Learner SPA and TPA scores improved overall from the first to the fourth scenarios (p<0.05). F-DB versus S-DB cohorts did not differ in overall SPA scores. |
14 | Reed,28 USA | To explore the impact on debriefing experience of three types of debrief: discussion only, discussion+blogging and discussion+journaling. | Experimental design with random assignment. Primary outcome measure: Debriefing Experience Scale (DES). | n=48 UG nursing students randomly assigned to ‘discussion’, ‘blogging’ or ‘journaling’. | DES score highest for discussion only, followed by journaling and then blogging. Differences reached statistical significance for only 3 of the 20 DES items. |
15 | Savoldelli et al 21 | To investigate the value of the debriefing process during simulation and to compare the educational efficacy of oral and videotape-assisted oral feedback against no debriefing (control). | Prospective, randomised, controlled, three-arm, repeated measures study design. After completing pretest scenario, participants randomly assigned to control, oral or videotape-assisted oral feedback condition. Debrief focused on non-technical skills performance followed by a post-test scenario. Trained evaluators scored participants using Anaesthesia Non-Technical Skills scoring system. Video tapes reviewed by two blinded independent assessors to rate non-technical skills. | n=42 anaesthesia residents in postgraduate years 1, 2 and 4. | Statistically significant improvement in non-technical skills for both oral and videotape-assisted oral feedback groups (p<0.005) but no difference between groups or improvement in control group. The addition of video review did not provide any advantage over oral feedback alone. |
16 | Smith-Jentsch et al,25 USA | To investigate the effects of guided team self-correction using an expert model of teamwork as the organising framework. | Study 1: cohort design with data collected over 2 years. Year 1: data on 15 teams collected using existing Navy method of prebriefing and debriefing. Instructors then trained using guided team self-correction method. Year 2: data collected on 10 teams, briefed and debriefed by instructors trained from year 1. Study 2: teams were randomly assigned to the experimental or control condition. | Study 1: n=385 male members of 25 US Navy submarine attack centre teams, teams ranged from 7 to 21 in size. Study 2: n=65 male lieutenants in the US Navy, randomly assigned to five-person teams. | Teams debriefed using expert model-driven guided team self-correction approach developed more accurate mental models of teamwork (study 1) and demonstrated greater teamwork processes and more effective outcomes (study 2). |
17 | Van Heukelom et al,27 USA | To compare two styles of managing a simulation session: postsimulation debriefing versus insimulation debriefing. | Observational study with a retrospective pre–post survey (using 7-point Likert scale) of student confidence levels, teaching effectiveness of facilitator, effectiveness of debriefing strategy and realism of simulation. Participants randomly assigned to either postsimulation or insimulation debriefing conditions. | n=160 students (third year medical students enrolled in the ‘Clinical Procedures Rotation’). | Statistically significant differences between groups. Students in the postsimulation debriefing ranked higher in measures for effective learning, better understanding actions and effectiveness of debrief. |
18 | Zinns et al,29 USA | To create and assess the feasibility of a postresuscitation debriefing framework (Review the event, Encourage team participation, Focused feedback, Listen to each other, Emphasize key points, Communicate clearly, Transform the future: REFLECT). | Feasibility pretest and post-test study. Outcome measure: presence of REFLECT components as measured by the paediatric emergency medicine (PEM) fellows, team members and blinded reviewers. | n=9 PEM fellows completed the REFLECT training (intervention) and led teams of 4. | Significant improvement in overall use of REFLECT reported by PEM fellows (63% to 83%, p<0.01) and team members (63% to 82%, p<0.001). Blinded reviewers found no statistically significant improvement (60% to 76%, p=0.09). |
Demographics
There were 2013 participants across the 18 studies (range 9–450). Twelve studies were conducted in the USA, 2 of which14 15 contained both qualitative and quantitative components; the other 10 comprised quantitative data only. The remaining quantitative studies were conducted in the UK,17 Switzerland,18 Korea19 and Canada (two studies).20 21 The only wholly qualitative paper included in the review was conducted in the UK.16
Seven studies were conducted with interprofessional teams and four of these examined differences between the professional groups.16 18 22 23 Geis et al 22 used simulation to model how a new paediatric emergency department would function and to identify latent safety threats; debriefing was structured and included video review. Changes in workload for different professional groups were analysed as the simulated workload of the department changed. LeFlore and Anderson23 compared two approaches to interprofessional team simulation and debriefing; changes in knowledge test scores and satisfaction with the simulation/debrief were reviewed by professional group. In the Freeth et al 16 qualitative study, some excerpts from interviews identified participants by professional group, but there was no comparison between groups. Kolbe et al 18 found that evaluation of their debriefing model—TeamGAINS—did not differ by job role (nurse or doctor).
Debriefing frameworks
All studies included a structured debriefing framework, mostly tailored to the individual study (see table 4). Five authors used a previously validated framework: the Ottawa Global Rating Scale,20 TeamGAINS,18 Debriefing for Meaningful Learning,24 Structured and Supported Debriefing19 and Guided Team Self-Correction (GTSC).25 In 11 studies, outcome measures were used to assess debrief quality (faculty behaviours)14 15 17 18 22–24 26–29 and in 12 studies change in performance following the debrief was measured (participant behaviours).16 18 20–25 30–32
Table 4.
Debriefing frameworks and measures used in the 18 studies
Reference | Debriefing framework | Outcome measure: quality of debrief | Outcome measure: participant performance |
Boet et al 20 | Ottawa Global Rating Scale | | Team Emergency Assessment Measure |
Bond et al 14 | Technical/knowledge (B). Cognitive (B). | Survey/interview (B). | |
Brett-Fleegler et al 26 | Debrief framework to show (i) superior, (ii) average and (iii) poor debriefing (B). | DASH | |
Freeth et al 16 | Structured (B). | | Kirkpatrick framework adapted for IPE. |
Forneris et al 24 | Debriefing for Meaningful Learning. | DASH | Health Sciences Reasoning Test. |
Geis et al 22 | Structured (B). | Survey (B). | Mayo High Performance Teamwork Scale. |
Grant et al 30 | Video-assisted oral debriefing (B). Oral debriefing alone (B). | | Behaviours (B). |
Hull et al 17 | Structured (B). | OSAD | |
Kim et al 19 | Focused and corrective feedback (B). Structured and supported debriefing. | | Team dynamics. Team clinical performance. |
Kolbe et al 18 | TeamGAINS. | Survey based on DASH and OSAD. | Psychological safety. Leader inclusiveness. |
Lammers et al 15 | Structured (B). | Interview (B). | |
LeFlore and Anderson23 | Facilitated debrief (B). Modified debrief (B). | Survey (B). | Knowledge assessment (B). Technical evaluation (B). Behavioural assessment. |
Oikawa et al 32 | Facilitator-led debriefing (B). Self-debriefing (B). | | Self-performance assessment (B). Team performance assessment (B). |
Reed28 | Discussion debrief (B). Discussion+journal (B). Discussion+blog (B). | DES | |
Savoldelli et al 21 | Structured (B). | | ANTS |
Smith-Jentsch et al 25 | Guided team self-correction. | | Mental models of teamwork (B). Teamwork processes (B). |
Van Heukelom et al 27 | Insimulation debriefing (B). Postsimulation debriefing (B). | Survey (B). | Self-reported confidence (B). |
Zinns et al 29 | REFLECT (B). | REFLECT criteria (B). | |
ANTS, Anaesthesia Non-Technical Skills; B, bespoke; DASH, Debriefing Assessment for Simulation in Healthcare; DES, Debriefing Experience Scale; OSAD, Objective Structured Assessment of Debriefing.
Performance measures
The majority of studies (12/18) used some measure of performance to judge the success of the debriefing framework, using a before-and-after design or comparing two debriefing frameworks (table 4). A total of 17 measures were used in the 12 studies (table 4).
Synthesis
All papers were read in full by two authors; a combination of inductive and deductive thematic analysis was used to develop codes and categories from relevant extracts and to organise these findings under main thematic headings. These are presented in figure 2. Deductive codes were derived from the review aims and the inductive component allowed codes to emerge from the data. A synthesis of these findings was used to identify key themes.
Figure 2.
Evidence and evidence gaps for decisions about debrief.
Several key themes were identified through this synthesis of the findings; two authors discussed these themes until a consensus was reached. The themes were: selection and training of debrief facilitators, debrief model and assessment of debrief. The themes are discussed below; a summary of the evidence, and evidence gaps, for each theme is presented in figure 2.
Selection and training of debrief facilitators
Most of the studies were conducted with a trained debrief facilitator,15–18 22 24 26 29 31 32 with one research team reporting use of ‘PowerPoint plus audio’ with no indication whether the ‘audio’ was prerecorded or provided by a facilitator.14 A randomised controlled trial compared two approaches to debrief: within-team debrief, with a leader from within the team providing the debrief, and instructor-led debrief.20 Team performance, assessed using the Team Emergency Assessment Measure (TEAM),33 improved following debrief in both groups (F1,38=7.93, p=0.008); there was no significant difference between within-team and instructor-led debrief (F1,38=0.43, p=0.52). Oikawa et al 32 found that self-debriefing was as effective as faculty debriefing in improving self and team performance assessment across four sequential scenarios.
Different study designs make it impossible to state that one type of facilitator is superior; performance in individual studies improved when the team leader,20 instructor,15 faculty32 or team member32 led the debrief. Similarly, no studies provided evidence that training actually makes any difference.
Debrief model
The format of debriefing reported in the studies varied in three areas: degree of structure, use of video clips and timing of the debrief.
All authors described a debrief framework, with variation in the detail provided. Three authors specified an initial reaction stage (‘how was that for you?’), followed by attention to technical and/or non-technical skills and how they were performed in the simulation scenarios; Lammers et al 15 and Van Heukelom et al 27 refer to this first stage as ‘decompression’, while Kolbe et al 18 describe it as ‘reactions’. No single structure was used across studies; most authors tailored an existing debrief framework.
Training faculty to use GTSC to structure the debrief had a significant impact on overall team performance compared with traditional debrief methods (t(11)=1.98, p<0.05, one-tailed).25 The group receiving GTSC also developed mental models more similar to those developed by an expert group. In a pretest and post-test study, paediatric emergency medicine fellows were trained to use a cardiac arrest debriefing model (REFLECT) with teams of four. The fellows and team members reported significant improvement in use of REFLECT components (63% vs 82%), but blinded expert reviewers reported a non-significant improvement (60% vs 76%).29
Use of Cognitive Disposition to Respond (CDR) to structure the debrief, with technical/knowledge-based debrief as the control, resulted in higher satisfaction scores for the technical/knowledge-based debrief, although the difference did not reach significance.14 LeFlore and Anderson23 compared a facilitated debrief (group A) with a modified debrief (group B) in which time for questions was allowed. However, the learning interaction was also different, with group A using self-directed learning and group B observing experts completing the scenario. Group B had higher satisfaction scores, but there is no indication whether this was due to the expert modelling or the modified debrief.
Video clips were included in the debrief in seven of the studies,15 16 20–23 26 but the extent of video use described by the authors was variable. In one study, the researchers compared no debrief (control) with oral debrief (intervention 1) and oral plus video debrief (intervention 2) using a pre–post design with anaesthesia residents.21 There was a significant improvement in total Anaesthesia Non-Technical Skills (ANTS) score (F2,39=6.10, p<0.005) and in scores for each of the four domains for both intervention groups, but no significant difference between the oral and oral+video groups on total or individual domain scores. Similarly, a pretest and post-test study comparing video-assisted debrief with oral debrief alone in nursing students reported a higher mean behaviour score for the video-assisted debrief group than for the control group (6.62 vs 4.23), but this did not reach significance.30
In most studies, debriefing was conducted at the end of the simulation exercise; the one exception was the study conducted by Van Heukelom et al,27 who compared insimulation debrief (identifying learning points and errors as they arise during the simulation) and postsimulation debrief. They report that self-reported confidence and knowledge improved for both groups (Spearman’s R=0.5 with p≤0.001 for all results) with no significant difference between groups. However, the postsimulation debrief group had significantly higher scores for three items on the debriefing satisfaction scale. In seven studies, participants completed a further simulation scenario following the debrief20–25 30; this is reviewed in detail below.
The studies reviewed provide evidence that debriefing frameworks can improve outcomes; however, there is no evidence that including a reaction phase or using video makes any difference to outcomes.
Assessment of the debrief
There were two approaches to assessment of debrief: assessment of debrief quality and change in performance following the debrief.
The quality of the debrief was assessed through satisfaction scores or through analysis of debrief videos. Satisfaction was rated by participants,14 23 24 27 28 faculty26 or both.17 18 29 Kolbe et al 18 also measured psychological safety and leader inclusiveness before and after the debrief and found that both measures significantly improved (t(59)=−2.26, p=0.028 and t(60)=−2.07, p=0.048). In four studies, analysis of debrief videos was conducted using an existing tool: Brett-Fleegler et al 26 used the Debriefing Assessment for Simulation in Healthcare (DASH) with 114 simulation instructors to test validity and reliability, and Lammers et al 15 used a Root Cause Analysis (RCA) framework to examine the quality of RCA processes in a simulated prehospital paediatric emergency. Hull et al 17 used the Objective Structured Assessment of Debriefing (OSAD) with expert debriefing evaluators and faculty debriefers, and Zinns et al 29 used the REFLECT postresuscitation debriefing framework.
Significant improvement in performance following debrief was reported in several studies. Change in performance was assessed using: (1) a (different) simulation scenario conducted after the debrief,20–23 (2) participant knowledge, assessed using a pre/post knowledge test,25 (3) participant self-reported confidence and knowledge27 and (4) mental model accuracy.25
The postdebrief simulation performance was assessed using a range of existing measures: the Mayo High Performing Team Scale,22 the TEAM,20 ANTS,21 the Behavioural Assessment Tool (based on crisis resource management principles and validated in previous studies by the authors),23 the Health Sciences Reasoning Test,24 Team Dynamics31 and Team Clinical Performance.31 In the Geis et al study,22 the phase 1 (predebriefing) simulation was conducted in the simulation laboratory and the phase 2 (postdebriefing) simulation was conducted in the hospital, hence the change in behaviour could not be attributed solely to the debrief.
Despite some studies using more than one performance measure, none of the studies reported correlations across performance measures. Where performance data were analysed in the context of demographic data items, these were mainly limited to professional group16 18 22 23 and work experience.
Discussion
There was little commonality across the papers in terms of participants, experience of faculty and measures used; however, all studies used a debriefing framework to provide structure for the debriefs, often underpinned by theoretically derived methods to facilitate interaction of participants. Eighteen different debriefing frameworks were described, showing divergence in preferred debriefing techniques and strategies among the studies, but the frameworks commonly started with a ‘reaction’ or ‘decompression’ phase to encourage self/team reflection. The reaction phase assumes that participants will ‘let off steam’ during the first few minutes of a simulation debrief, which provides facilitators with content to be discussed at some stage in the debrief, allows participants to express their emotions straight away and creates a more balanced environment for objective reflection later in the debrief.18 None of the studies compared a reaction phase with no reaction phase, so its impact is unknown. All debriefing frameworks covered technical or non-technical aspects, or both, and some studies compared participant reactions to technical versus non-technical debriefing. Non-technical skills were addressed through the use of expert models such as crisis resource management principles or through techniques such as CDR and Advocacy Inquiry (AI), aimed at identifying the mental models of participants that lead to certain behaviours.14 26 Bond et al 14 found that technical debriefing was better received by participants than cognitive debriefing, although Dreifuerst34 reported that learners prefer debrief with reflection.
The debriefing model described by Kolbe and colleagues18 reflects the recommendations of several earlier authors and comprises six steps: reactions; debrief clinical component; transfer from simulation to reality; reintroduce the expert model; summarise the debriefing; and practice/improve clinical skills as required. This model, as a whole, was shown to have some benefits but our review has shown varying degrees of evidence for each of these steps, as illustrated in figure 2.
Debriefing theory
Different techniques are used to focus the debrief on individuals and team members as well as observers. Debriefing models utilised a range of theoretical techniques to facilitate interaction of the whole group through guided team self-correction, peer assessment and self and team reflection.18 23 25 30–32 Guided team self-correction and circular questioning18 25 are techniques that switch the focus to the whole team and encourage active participation and reflexivity from all members of the group. Smith-Jentsch et al developed the technique of GTSC, in which members of the team are responsible for identifying their own team performance problems plus process-orientated goals for improvement.25 In GTSC, an expert model of teamwork is used as an organisational framework at the briefing and then debriefing stages, when participants are asked to discuss both positive and negative examples of each component. Debriefing theory developed by Salas and colleagues makes the assumption that the use of an expert model provides a common language for participants to use during team debriefs, which helps to form shared team mental models that match the expert framework.25 35 Reflecting on both positive and negative examples of behaviour has been found to develop stronger mental models, and focusing on a few critical performance issues to identify learner ‘process-orientated goals’ helps to ensure that learning is not scenario specific. High-level facilitation allows participants to contribute the majority of the discussion in the debrief, which maximises individual reflection and team-based reflexivity so that learners reach a deeper level of understanding about the interactions that have taken place, rather than listening to the expert opinion of the debriefer. With techniques such as GTSC, the debriefer facilitates from a non-judgemental perspective without expressing their own expert opinion until the latter stages of the debrief, if at all.
In contrast, AI is more instructor led: the debriefer highlights a performance gap encountered by an individual during the simulation and uses direct questioning to uncover the underlying mental frames that led to certain actions or behaviours.18 26 The conceptual framework and underlying theory assume that by exploring the mental frames or thought processes that have led to certain behaviours, the learner is able to rewire these thought processes for similar situations in the future, resulting in different actions or interactions.36
A central tenet across debriefing theories for teams is the development of a shared understanding across participants and facilitator. However, the seven studies we reviewed that were conducted with interprofessional teams did not appear to test mental model consistency across professions.
Learning environment
Creating the right environment has been eloquently described as a ‘task-relationship dilemma’:36 37 the need to provide honest feedback on the task without damaging the relationship between teacher and learner. The studies included in our review suggest that greater attention is being paid to this, as evidenced by the validation of measures for the assessment of perceived psychological safety in the debriefing18 and the evaluation of satisfaction.14 23 26 27 The use of video as part of the debrief is not supported by the studies included in our review; this is consistent with an earlier meta-analysis.1
Training of debriefers
The majority of studies used trained debrief facilitators to conduct the debrief, although two studies showed that self-debrief within teams was as effective as instructor-led debrief.20 32 Cheng and colleagues,1 in their systematic review of debriefing features, outcomes and effectiveness, found that there may be benefits in expert modelling, although meta-analysis of relevant studies revealed non-significant effects.
When instructors perform debriefs, insimulation debriefing does not work as well as postsimulation debriefing.27 A study examining student perceptions of debriefing38 also revealed that students prefer debriefing immediately following the simulation and that timing was more important than the debriefing model. However, comparison of studies by Cheng and colleagues1 suggests that factors such as task complexity and individual or team-based learning may be better indicators for the timing of debriefing. Training in specific techniques such as GTSC and CDR raises the quality of debriefings, so it is important to use experienced facilitators and an agreed or previously validated debriefing framework, and to supplement facilitator training with technique-specific instruction to optimise debriefing quality. Standards of best practice for simulation39 advocate that the debrief facilitator has specific training and has witnessed the simulation activity. Debriefing frameworks encourage facilitators to focus on a few critical issues, include a range of formats and address technical and cognitive aspects, non-technical skills and transfer of learning into practice.
Quality metrics
We identified four previously validated metrics used to measure the quality of debriefs: DASH, OSAD, REFLECT and DES, with DASH and OSAD each used in more than one study. These metrics use faculty, participant or objective raters to score aspects of faculty performance, except the DES, which assesses participant feelings as a result of the debriefing experience. While these instruments have good evidence of reliability and validity, further studies are needed to establish validity in different contexts and to compare the utility of different tools.
Integration with previous work
Previous systematic reviews have shed light on the advantages of debrief over no debrief and the lack of evidence that the use of video improves the debrief.1 11 Our review supports both of these findings. Methods of debriefing have been reviewed in previous narrative reviews2 38 and systematic reviews.1 11 Of note, Cheng and colleagues1 were only able to conduct meta-analysis on a small number of the 177 studies included in their systematic review, due to incomplete reporting by researchers. In a more theoretical approach, the defining attributes of debriefing identified by Dreifuerst10 (reflection, emotion, reception, and integration and assimilation) enabled the author to identify model, borderline and contrary cases, in line with the concept analysis method.40
The main contribution of this systematic review has been to identify debriefing frameworks, some of which have been validated in various contexts using theoretical approaches. However, the number of bespoke frameworks highlights the diversity of debriefing practice and of approaches to outcome measurement, and indicates that more work should be done to compare debriefing frameworks in order to develop an evidence base for best practice.
Implications for current practice and future research
Our review suggests that the use of a debrief framework improves debrief quality, subsequent behaviours and teamwork performance. The findings strongly support the use of a validated debrief framework by debriefers, but investment in faculty preparation is also important: facilitator training should be supplemented with technique-specific instruction to optimise debriefing quality. Further research is needed to validate measures of debrief quality, and outcome measures following debriefing, in different contexts. The number of bespoke instruments used across the studies illustrates the difficulty of conducting reviews such as this, particularly the limited scope for meta-analysis. It would be worth considering whether there are key outcomes (and associated outcome measures) that should be considered good practice for simulation research, similar to the core outcomes dataset approach being promulgated for clinical research (http://www.comet-initiative.org/).
Some key aspects of debrief for team-based simulation, such as facilitator training, the inclusion of a reaction phase and the impact of learner characteristics on debrief outcomes, have no or limited evidence and provide opportunities for future research, particularly with interprofessional groups.
Footnotes
Contributors: All authors fulfil the criteria for authorship; no one who fulfils the criteria for authorship has been excluded. Contributions were as follows: study planning (TG, RE and AO), study conduct (all authors) and development of the manuscript (all authors).
Funding: This work was supported by the UK Higher Education Authority Teaching Development Grant number GEN-620.
Competing interests: None declared.
Provenance and peer review: Not commissioned; externally peer reviewed.
References
- 1. Cheng A, Eppich W, Grant V, et al. Debriefing for technology-enhanced simulation: a systematic review and meta-analysis. Med Educ 2014;48:657–66. 10.1111/medu.12432
- 2. Fanning RM, Gaba DM. The role of debriefing in simulation-based learning. Simul Healthc 2007;2:115–25. 10.1097/SIH.0b013e3180315539
- 3. Haji FA, Hoppe DJ, Morin MP, et al. What we call what we do affects how we do it: a new nomenclature for simulation research in medical education. Adv Health Sci Educ Theory Pract 2014;19:273–80. 10.1007/s10459-013-9452-x
- 4. Baker DP, Gustafson S, Beaubien J, et al. Medical teamwork and patient safety: the evidence-based relation. AHRQ publication 2005;5:1–64.
- 5. Hind M, Norman I, Cooper S, et al. Interprofessional perceptions of health care students. J Interprof Care 2003;17:21–34. 10.1080/1356182021000044120
- 6. Thistlethwaite J, Moran M, World Health Organization Study Group on Interprofessional Education and Collaborative Practice. Learning outcomes for interprofessional education (IPE): literature review and synthesis. J Interprof Care 2010;24:503–13. 10.3109/13561820.2010.483366
- 7. Barr H, Low H. Interprofessional education in preregistration courses: a CAIPE guide for commissioners and regulators of education. Fareham: CAIPE, 2012.
- 8. Boet S, Bould MD, Layat Burn C, et al. Twelve tips for a successful interprofessional team-based high-fidelity simulation education session. Med Teach 2014;36:853–7. 10.3109/0142159X.2014.923558
- 9. McGaghie WC, Issenberg SB, Petrusa ER, et al. A critical review of simulation-based medical education research: 2003-2009. Med Educ 2010;44:50–63. 10.1111/j.1365-2923.2009.03547.x
- 10. Dreifuerst KT. The essentials of debriefing in simulation learning: a concept analysis. Nurs Educ Perspect 2009;30:109–14.
- 11. Levett-Jones T, Lapkin S. A systematic review of the effectiveness of simulation debriefing in health professional education. Nurse Educ Today 2014;34:e58–63. 10.1016/j.nedt.2013.09.020
- 12. Centre for Reviews and Dissemination (CRD). Systematic reviews: CRD’s guidance for undertaking reviews in health care: Centre for Reviews and Dissemination, 2009.
- 13. Kmet LM, Lee RC, Cook LS. Standard quality assessment criteria for evaluating primary research papers from a variety of fields. Canada: Alberta Heritage Foundation for Medical Research, 2004.
- 14. Bond WF, Deitrick LM, Eberhardt M, et al. Cognitive versus technical debriefing after simulation training. Acad Emerg Med 2006;13:276–83. 10.1197/j.aem.2005.10.013
- 15. Lammers R, Byrwa M, Fales W. Root causes of errors in a simulated prehospital pediatric emergency. Acad Emerg Med 2012;19:37–47. 10.1111/j.1553-2712.2011.01252.x
- 16. Freeth D, Ayida G, Berridge EJ, et al. Multidisciplinary obstetric simulated emergency scenarios (MOSES): promoting patient safety in obstetrics with teamwork-focused interprofessional simulations. J Contin Educ Health Prof 2009;29:98–104. 10.1002/chp.20018
- 17. Hull L, Russ S, Ahmed M, et al. Quality of interdisciplinary postsimulation debriefing: 360° evaluation. BMJ Simulation and Technology Enhanced Learning 2017;3:9–16. 10.1136/bmjstel-2016-000125
- 18. Kolbe M, Weiss M, Grote G, et al. TeamGAINS: a tool for structured debriefings for simulation-based team trainings. BMJ Qual Saf 2013;22:541–53. 10.1136/bmjqs-2012-000917
- 19. Kim J-H, Kim Y-M, Park SH, et al. Focused and corrective feedback versus structured and supported debriefing in a simulation-based cardiac arrest team training. Simul Healthc 2017;12:157–64. 10.1097/SIH.0000000000000218
- 20. Boet S, Bould MD, Sharma B, et al. Within-team debriefing versus instructor-led debriefing for simulation-based education: a randomized controlled trial. Ann Surg 2013;258:53–8. 10.1097/SLA.0b013e31829659e4
- 21. Savoldelli GL, Naik VN, Park J, et al. Value of debriefing during simulated crisis management: oral versus video-assisted oral feedback. Anesthesiology 2006;105:279–85.
- 22. Geis GL, Pio B, Pendergrass TL, et al. Simulation to assess the safety of new healthcare teams and new facilities. Simul Healthc 2011;6:125–33. 10.1097/SIH.0b013e31820dff30
- 23. LeFlore JL, Anderson M. Alternative educational models for interdisciplinary student teams. Simul Healthc 2009;4:135–42. 10.1097/SIH.0b013e318196f839
- 24. Forneris SG, Neal DO, Tiffany J, et al. Enhancing clinical reasoning through simulation debriefing: a multisite study. Nurs Educ Perspect 2015;36:304–10. 10.5480/15-1672
- 25. Smith-Jentsch KA, Cannon-Bowers JA, Tannenbaum SI, et al. Guided team self-correction impacts on team mental models, processes, and effectiveness. Small Group Research 2008;39:303–27.
- 26. Brett-Fleegler M, Rudolph J, Eppich W, et al. Debriefing assessment for simulation in healthcare: development and psychometric properties. Simul Healthc 2012;7:288–94. 10.1097/SIH.0b013e3182620228
- 27. Van Heukelom JN, Begaz T, Treat R. Comparison of postsimulation debriefing versus in-simulation debriefing in medical simulation. Simul Healthc 2010;5:91–7. 10.1097/SIH.0b013e3181be0d17
- 28. Reed SJ. Written debriefing: evaluating the impact of the addition of a written component when debriefing simulations. Nurse Educ Pract 2015;15:543–8. 10.1016/j.nepr.2015.07.011
- 29. Zinns LE, Mullan PC, O'Connell KJ, et al. An evaluation of a new debriefing framework: REFLECT. Pediatr Emerg Care 2017:1. 10.1097/PEC.0000000000001111
- 30. Grant JS, Dawkins D, Molhook L, et al. Comparing the effectiveness of video-assisted oral debriefing and oral debriefing alone on behaviors by undergraduate nursing students during high-fidelity simulation. Nurse Educ Pract 2014;14:479–84. 10.1016/j.nepr.2014.05.003
- 31. Kim JH, Kim YM, Park SH, et al. Focused and corrective feedback versus structured and supported debriefing in a simulation-based cardiac arrest team training: a pilot randomized controlled study. Simul Healthc 2017;12:157–64. 10.1097/SIH.0000000000000218
- 32. Oikawa S, Berg B, Turban J, et al. Self-debriefing vs instructor debriefing in a pre-internship simulation curriculum: night on call. Hawaii J Med Public Health 2016;75:127–32.
- 33. Cooper S, Cant R, Porter J, et al. Rating medical emergency teamwork performance: development of the Team Emergency Assessment Measure (TEAM). Resuscitation 2010;81:446–52. 10.1016/j.resuscitation.2009.11.027
- 34. Dreifuerst KT. Using debriefing for meaningful learning to foster development of clinical reasoning in simulation. J Nurs Educ 2012;51:326–33. 10.3928/01484834-20120409-02
- 35. Salas E, Klein C, King H, et al. Debriefing medical teams: 12 evidence-based best practices and tips. Jt Comm J Qual Patient Saf 2008;34:518–27. 10.1016/S1553-7250(08)34066-5
- 36. Rudolph JW, Simon R, Rivard P, et al. Debriefing with good judgment: combining rigorous feedback with genuine inquiry. Anesthesiol Clin 2007;25:361–76. 10.1016/j.anclin.2007.03.007
- 37. Rudolph JW, Foldy EG, Robinson T, et al. Helping without harming. The instructor's feedback dilemma in debriefing – a case study. Simul Healthc 2013;8:304–16.
- 38. Cantrell MA. The importance of debriefing in clinical simulations. Clin Simul Nurs 2008;4:e19–23. 10.1016/j.ecns.2008.06.006
- 39. The INACSL Board of Directors. Standard VI: the debriefing process. Clinical Simulation in Nursing 2011:S16–17.
- 40. Walker LO, Avant KC. Strategies for theory construction in nursing. 4th edn. Upper Saddle River, NJ: Prentice Hall, 2005.
- 41. Auerbach M, Kessler D, Foltin JC. Repetitive pediatric simulation resuscitation training. Pediatr Emerg Care 2011;27:29–31. 10.1097/PEC.0b013e3182043f3b
- 42. Cheng A, Goldman RD, Aish MA, et al. A simulation-based acute care curriculum for pediatric emergency medicine fellowship training programs. Pediatr Emerg Care 2010;26:475–80. 10.1097/PEC.0b013e3181e5841b
- 43. Cooper JB, Singer SJ, Hayes J, et al. Design and evaluation of simulation scenarios for a program introducing patient safety, teamwork, safety leadership, and simulation to healthcare leaders and managers. Simul Healthc 2011;6:231–8. 10.1097/SIH.0b013e31821da9ec
- 44. Kable AK, Arthur C, Levett-Jones T, et al. Student evaluation of simulation in undergraduate nursing programs in Australia using quality indicators. Nurs Health Sci 2013;15:235–43. 10.1111/nhs.12025
- 45. Kuiper R, Heinrich C, Matthias A, et al. Debriefing with the OPT model of clinical reasoning during high fidelity patient simulation. Int J Nurs Educ Scholarsh 2008;5:1–4. 10.2202/1548-923X.1466
- 46. Morrison AM, Catanzaro AM. High-fidelity simulation and emergency preparedness. Public Health Nurs 2010;27:164–73. 10.1111/j.1525-1446.2010.00838.x
- 47. West E, Holmes J, Zidek C, et al. Intraprofessional collaboration through an unfolding case and the just culture model. J Nurs Educ 2013;52:470–4. 10.3928/01484834-20130719-04
- 48. Wetzel EA, Lang TR, Pendergrass TL, et al. Identification of latent safety threats using high-fidelity simulation-based training with multidisciplinary neonatology teams. Jt Comm J Qual Patient Saf 2013;39:AP1–3. 10.1016/S1553-7250(13)39037-0