Author manuscript; available in PMC: 2017 Oct 5.
Published in final edited form as: J Allied Health. 2016 Fall;45(3):191–198.

Development of an Instrument to Assess the Clinical Effectiveness of the Debriefer in Simulation Education

Jennifer L Saylor 1, Susan Flannery Wainwright 2, E Adel Herge 3, Ryan T Pohlig 4
PMCID: PMC5628394  NIHMSID: NIHMS871100  PMID: 27585615

Abstract

Simulation education continues to increase in all healthcare curricula. Measuring how well faculty conduct debriefing sessions within the context of the learning objectives and defined pedagogy of a specific simulation is vital, yet tools for doing so are lacking. The purpose of this study was to develop and test an instrument to assess a debriefer’s ability to effectively conduct a debriefing session and thereby to evaluate and demonstrate teaching effectiveness and excellence. The instrument, the Peer Assessment of Debriefing Instrument (PADI), was developed using a traditional peer-review framework. Using the Delphi technique, an expert panel (n=11) completed an electronic survey and used a 4-point Likert scale to rate the PADI on clarity and understandability. In Round III, consensus of >80% was achieved for both structural and content elements. Results revealed that the PADI is a valid and reliable instrument for providing a peer review of the debriefing process across healthcare disciplines. The inter-rater reliability for the average measures was very strong, with an intraclass correlation coefficient (ICC) of 0.973, and for the single measure was strong, with an ICC of 0.818. The PADI provides both novice and experienced debriefers with an objective and formative means of performing self-assessment and receiving peer feedback on a debriefing experience.


CREATING an educational curriculum that prepares future healthcare providers with essential knowledge, psychomotor skills, communication skills, and critical-thinking skills is a vital task for faculty. Increasingly, they are integrating patient simulations into the curriculum as a means to challenge learners to practice and apply cognitive, psychomotor, and affective knowledge and to use communication and critical-thinking skills that are needed for clinical practice.1

Simulation, which occurs in a safe environment without risk of harm to patients, has been shown to be an effective method for learning, practicing, or demonstrating a variety of skills needed in specific clinical environments.2 Simulation refers to complex activities in which learners integrate knowledge and skills,2 and it may include standardized patients or human patient simulators, learners, and a debriefer. A standardized patient is an individual trained to act as a real patient, emulating a set of symptoms or problems that healthcare providers encounter in the clinical setting. A human patient simulator is a mannequin that has the capacity to imitate bodily functions, such as heart and breath sounds, palpable pulses, and cardiac rhythms.3 Learners may include nurses, physicians, physical therapists, occupational and speech therapists, other health professionals, or a combination of any of the above. A debriefer may be an educator or experienced clinician who facilitates the post-simulation analysis.

The components of a patient clinical simulation include a prebriefing to prepare the learners for the simulation. Learners then engage in the simulation experience itself, followed by a debriefing for post-simulation analysis. Debriefing after a simulation is an intentional process designed to provide awareness and insight, as well as to strengthen and transfer learning via an experiential learning exercise.4–7 Debriefing is an essential phase of simulation because this is where a great degree of the learning occurs.4,8 When done effectively, debriefing fosters clinical reasoning, critical thinking, judgment skills, and communication through a reflective learning process.4,5,9,10 Learners have the opportunity to reflect on their decision-making, critical thinking, and interprofessional communication through self-analysis and peer evaluation.

An effective debriefer is the key to a successful simulation debriefing experience. The debriefer must be knowledgeable and consistent when debriefing in order to ensure that the learner’s simulation experience is effective and transferable to clinical practice.11 When numerous debriefings are conducted, it is the responsibility of the debriefer to ensure consistency and reliability during the debriefing process.

Much of the literature and research related to standardized patients and simulation focuses on scenario development and learner evaluation rather than on the debriefer’s ability and consistency in debriefing a simulation experience.9,11,12 Researchers have identified effective strategies to facilitate active student participation in learning13–15 and to assess faculty effectiveness in supporting student learning in the classroom.16,17 Debriefing after a simulation is a relatively new educational strategy in healthcare education. The skills required to be an effective debriefer differ from those required to be an effective educator.18 Educators may struggle when transitioning from instructor-centric education to learner-centric facilitation to effectively debrief learners after a patient clinical simulation.

We wanted to learn more about the factors that support learning during a debriefing and how to facilitate the development of the skills necessary to be an effective debriefer. A search of the literature revealed several measures to assess faculty effectiveness in debriefing after a patient clinical simulation. The Debriefing Assessment for Simulation in Healthcare (DASH)19 assesses the quality of the debriefing experience and rates observed behaviors relative to a well-defined behavioral domain.20 The Objective Structured Assessment of Debriefing (OSAD)21 evaluates the quality of debriefings in surgery. The TeamGAINS structured debriefing tool is useful in simulated team-based trainings.22 While these instruments address the quality of debriefings, an instrument is still needed that can measure how well faculty conduct debriefing sessions within the context of the learning objectives and defined pedagogy of a specific simulation. We believe an instrument to assess a debriefer’s ability to effectively conduct a debriefing session could provide a data source to evaluate and demonstrate teaching effectiveness and excellence. To meet this need, we developed an instrument to assess the effectiveness of a debriefing following a patient clinical simulation. We based the instrument on the current scientific literature on effective debriefing and on peer-review methodology.23

To develop this instrument, we engaged in a two-phase process. In Phase 1, we used the Delphi process to develop the Peer Assessment of Debriefing Instrument (PADI), a self- and peer-assessment of debriefer effectiveness post-simulation. In Phase 2, after completing the Delphi process, we tested the instrument to establish its reliability and validity. This paper describes the development and reliability testing of the PADI and discusses how it can be used to assess debriefer effectiveness following simulation activities.

Methods

Phase 1

Instrumentation

First, we created a framework for the initial evaluative instrument based on a thorough search of contemporary literature and our own experiences in the field. We then developed the instrument using the Delphi process. The Delphi process is a research tool for collecting expert feedback through a series of structured, anonymous surveys with the goal of building consensus on the topic area. We used the Delphi process because it is considered acceptable in healthcare research and education,24 particularly when there is a lack of empirical evidence.25 It is also a cost-efficient method of generating ideas and facilitating consensus among individuals who do not meet face to face and may be geographically distant.26 It has been applied in diverse projects including program planning, needs assessment, policy determination, resource utilization, and validation of assessment tools.27,28

We aimed for an instrument that would measure how effectively faculty conducted a debriefing following a clinical simulation activity. We selected peer review as the framework due to emerging evidence that it is useful in promoting professional development of faculty.29,30 Although assessing the debriefing process from a peer-review perspective has not been fully investigated, faculty who have experienced peer evaluation of their teaching have reported it to be a positive and beneficial experience.31,32

Participants

A group of experts in debriefing and education was selected to participate in the panel to review and provide feedback on the debriefing assessment tool. Purposive sampling techniques, including the snowball technique, were used to identify a panel of content experts meeting the following criteria: authorship of literature and/or nomination from established clinical simulation center directors. Twenty simulation experts met the criteria and were invited to participate in the study. The experts were from diverse professions including nursing, occupational therapy, physical therapy, and academia-related healthcare fields.

Of the 20 invited, 11 agreed to participate. Seven (64%) of the 11 consenting participants completed Round I. To add expertise in the area of educational evaluation, we invited an additional 5 academic experts to participate, and 4 agreed. As a result, 11 of the 15 panelists (73%) completed Round II, and 9 (60%) completed Round III. Specific participation of these panelists within each of the three rounds is detailed in Table 1. This panel of experts established consensus for content validation and utility of the instrument.

TABLE 1.

Participation of Expert Panelists in the Three Phases of the Delphi Process

Participant    Round I    Round II    Round III
 1                X
 2                X
 3                X
 4                X           X            X
 5                X           X            X
 6                X           X            X
 7                X                        X
 8                            X            X
 9                            X            X
10                            X            X
11                            X            X
12                            X            X
13                            X
14                            X
15                            X

Data Collection: Survey

The respondents’ survey consisted of two parts. The first part, “Pre-Assessment of the Simulation Experience,” was a self-assessment completed by the debriefer prior to the simulation experience. This assessment gathered general information about the simulation and allowed the debriefer to identify any areas in which he or she wished to receive specific feedback from a peer evaluator. After the debriefer completed the self-assessment, it was given to the peer evaluator to review.

Part two of the respondents’ survey, “Debriefing Evaluation (Self and Peer Assessment),” assessed the various elements that comprise a debriefing. These elements were categorized into eight areas: 1) structure and organization of the debriefing, 2) verbal and non-verbal communication, 3) setting the stage and ground rules for the debriefing session, 4) talking about defusing (dealing with the emotional aspects of simulation), 5) recapping the simulation experience, 6) reflecting on action (facilitating the learner’s self-reflection), 7) facilitating the learner’s connection of the simulation experience to clinical practice, and 8) summarizing and providing key take-away points for the learner. Under each of these areas, the instrument had 4 to 8 elements for scoring the debriefer. The evaluator would complete the instrument during the debriefing, and the debriefer would complete it after the debriefing was finished. Each of the 8 areas would be evaluated using a 4-point scale, so a debriefer’s total evaluation score could range from 8 to 32. After both the evaluator and debriefer completed the instrument, they would compare their responses and use them to guide open discussion about positive aspects of the debriefing and areas needing improvement.
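As a minimal illustration of this scoring scheme, the following Python sketch totals a completed form and flags areas where self- and peer-ratings diverge. The area labels are abbreviations of the eight areas above, and the example ratings are hypothetical rather than study data.

```python
# Minimal sketch of how a completed PADI form could be totaled and compared.
# The area labels are abbreviations; the example ratings are hypothetical.

AREAS = [
    "structure and organization",
    "verbal and non-verbal communication",
    "setting the stage and ground rules",
    "talking about defusing",
    "recapping the simulation experience",
    "reflecting on action",
    "connecting simulation to clinical practice",
    "summarizing key take-away points",
]

def total_score(ratings: dict) -> int:
    """Sum the eight 4-point area ratings; the possible range is 8 to 32."""
    assert set(ratings) == set(AREAS) and all(1 <= r <= 4 for r in ratings.values())
    return sum(ratings.values())

def discussion_points(self_ratings: dict, peer_ratings: dict) -> list:
    """List areas where self- and peer-ratings diverge, largest gaps first."""
    gaps = {a: peer_ratings[a] - self_ratings[a] for a in AREAS}
    return sorted((a for a in AREAS if gaps[a] != 0), key=lambda a: -abs(gaps[a]))

# Hypothetical example: the debriefer rates themselves lower on defusing than
# the peer evaluator does, so that area surfaces first for discussion.
self_r = dict.fromkeys(AREAS, 3); self_r["talking about defusing"] = 2
peer_r = dict.fromkeys(AREAS, 3); peer_r["talking about defusing"] = 4
print(total_score(self_r), total_score(peer_r))   # 23 25
print(discussion_points(self_r, peer_r))          # ['talking about defusing']
```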

Data Collection: Delphi Process

The Delphi process was completed in three rounds between June and October 2013. Respondents were asked to rate the questions on the survey for clarity and understandability using a 4-point Likert scale and were provided ample space to suggest additions, deletions, or changes to survey elements. Communication with the respondents, including their ratings, was conducted entirely via a secure online survey tool, Qualtrics® (Qualtrics, LLC, https://www.qualtrics.com). Round I took longer to complete than expected because respondents were on summer vacation and academic programs were not in session. Reminder emails were sent approximately every 2 to 3 weeks through the Qualtrics program. (Appendix 1, available online, provides a copy of two questions on the survey.)

  • Delphi Round I: The Delphi panel experts received an invitation letter via email asking for their participation. After agreeing to participate in the study, panelists received an electronic survey that included the consent form, sent via Qualtrics in June 2013.

  • Delphi Round II: This iteration of the survey included an electronic cover letter, a description of purpose, and the survey itself. It was sent to the participants in August 2013, approximately 2 months after the first survey. Reminder notices were sent electronically through Qualtrics 2 and 4 weeks after the Round II survey was sent. In addition to the phrases/behaviors provided by the researchers, words and phrases contributed by the experts were added to the Round II survey. The respondents rated the survey items using the same Likert scale as in Round I and were asked to rate the elements and behavioral criteria that had been added, omitted, or reworded.

  • Delphi Round III: The final round of the survey included a cover letter, a description of purpose, and the survey itself. It was sent to the participants in September 2013, approximately 4 weeks after Round II. Reminder notices were sent electronically through the Qualtrics program 2 and 4 weeks after the Round III survey was sent. Again, the respondents rated the survey items using the same scale as in Rounds I and II and were asked to rate the elements and behavioral criteria that had been added, omitted, or reworded. In this final round, participants were also asked to provide comments on items rated very high or very low on importance.

Over an approximately 5-month period, the expert panelists participated in the Delphi process, completing a web-based survey via Qualtrics during each of the three rounds. Three rounds of review and feedback by the expert panelists were necessary to achieve the desired level of consensus on each element of the evaluative instrument of debriefing. Table 2 outlines the Delphi process.

TABLE 2.

Delphi Process to Establish an Evaluative Instrument for Debriefing (Phase One)

Process Task
Preliminary activities Elements of faculty effectiveness in facilitating debriefing sessions were identified for inclusion in assessment instrument:
  • Review and synthesis of the literature

  • Identification of performance attributes

Round I Elements to be assessed were reviewed, including behavioral criteria across levels of performance.
Inclusion/exclusion of elements and behavioral criteria were affirmed; additional elements identified were included.
Round II Summary of Round I was reviewed.
Based on Round I feedback for elements and behavioral criteria, items to be included/omitted were identified.
Round III Summary of Round II was reviewed.
Remaining issues were discussed and consensus was established.

Data Analysis

Descriptive statistics, including response rates (%), medians, means, and standard deviations, were calculated for each survey item upon completion of each round of the Delphi. An acceptable level of consensus was determined to be 80% agreement among the panel experts. Items that did not reach an 80% consensus were omitted. Across the three rounds of the Delphi process, content analysis of open-ended responses was completed and used to refine all components of the evaluative instrument of debriefing. After completion of each round of the Delphi process, the instrument’s inter-rater reliability was evaluated using intraclass correlation coefficients (ICC).33
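A minimal sketch of this item-level analysis is shown below, assuming, for illustration only, that agreement was operationalized as a rating of 3 or 4 on the 4-point importance scale; the exact operational definition is an assumption, and the ratings shown are hypothetical.

```python
import statistics

def item_summary(ratings, agree_threshold=3):
    """Summarize one survey item's panel ratings on the 1-4 scale.

    Returns n, mean, SD, median, and the percent of panelists whose rating is
    at or above `agree_threshold`, one plausible way to operationalize the
    80% consensus rule (an assumption made for this sketch).
    """
    pct_agree = 100 * sum(r >= agree_threshold for r in ratings) / len(ratings)
    return {
        "n": len(ratings),
        "mean": round(statistics.mean(ratings), 2),
        "sd": round(statistics.stdev(ratings), 2),
        "median": statistics.median(ratings),
        "pct_agree": round(pct_agree, 1),
        "retain": pct_agree >= 80,   # items below 80% consensus are omitted
    }

# Hypothetical Round III ratings for one item from nine panelists
print(item_summary([4, 4, 3, 4, 4, 3, 4, 4, 2]))
```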

Phase 2

In Phase 2, we conducted testing to determine the reliability and validity of the PADI.

Participants

We identified 5 experts, none of whom were part of the original Delphi panel, to participate in this phase of instrument testing. Selection criteria included more than 5 years of simulation and debriefing experience as well as recognition within their respective settings as expert debriefing practitioners. These experts were employed by organizations that are co-founding members of the Delaware Health Sciences Alliance. They represented the following areas of clinical expertise: emergency nursing, nursing education, radiation oncology, neonatology, and medical education.

Data Collection

In preparation for establishing inter-rater reliability among these experts, the researchers developed three debriefing vignettes illustrating different levels of debriefer proficiency. The vignettes were recorded at a large academic health center, where an experienced faculty member conducted the debriefings. Each debriefing session was related to the same simulation, but the competency of the debriefer changed to demonstrate a spectrum of proficiency in performing a debriefing. Using the instrument, the researchers viewed the videos and reached consensus on the rating of each debriefing.

In January 2014, the five experts attended a half-day education session to learn how to use the instrument. After obtaining consent, the researchers described the Phase 1 study and the Phase 2 procedures. The preassessment form was presented first, followed by the post-evaluation form.

To demonstrate the Phase 2 process, each expert received one completed preassessment form and three post-evaluation forms, one for each debriefing vignette. The researchers reviewed the pre- and post-forms of the instrument with the experts. The preassessment form described the learner’s and debriefer’s levels of experience, the learning objectives for the simulation, and the areas the debriefer would like to improve upon. The post-evaluation form consisted of the eight areas to be evaluated during the debriefing process, using a 4-point Likert-type scale with percentages assigned to each level. Once the experts developed a working understanding of how to evaluate a debriefer using the instrument, they reviewed the recorded debriefing sessions, each of which lasted approximately 8 to 10 minutes. After each debriefing session, the experts promptly and independently completed the post-evaluation form. To simulate a live debriefing session, the experts reviewed each video without discussion between videos. After evaluations for all three videos were completed, the researchers reviewed each video with the panelists and facilitated discussion of what each vignette was intended to demonstrate. Finally, the experts provided initial feedback on the tool.

Results

Feedback from Delphi panelists was evaluated after each round, and the instrument was updated to reflect the panel’s suggestions. The itemized responses for each Delphi round are detailed in Table 3.

TABLE 3.

Itemized Responses of the Delphi Results for Each Round of the Survey

Item  Round I  Round II  Round III
1. Verbal and non-verbal communication
Avoids judgment of learner performance 3.86 (.38) 3.64 (.92) Removed
Reflects open-ended questions back to the group 3.86 (.38) 3.91 (.30) 3.88 (.35)
Effectively uses silence 3.86 (.38) 3.64 (.67) 3.88 (.35)
Sensitive to learners’ non-verbal behaviors.
Changed to: Attends to learners’ non-verbal behaviors (e.g., tone, facial expression, body language) 3.57 (.79) 3.73 (.47) 3.88 (.35)
Displays appropriate and professional non-verbal behavior (e.g., tone, facial expression, body language) 4.00 (.00)
2. Setting the stage and ground rules for the debriefing session
Articulates goals clearly 4.00 (.00) 3.73 (.65) 3.67 (.71)
Sets the grounds rules (e.g., confidentiality, respect for others, freedom to ask questions) 4.00 (.00) 4.00 (.00) 4.00 (.00)
Reiterates the use of simulation for learning 3.43 (.79) 3.27 (1.01) 3.22 (1.3)
Answers learner questions before beginning 4.00 (.00) 3.55 (.93) 3.22 (1.3)
Explains debriefing format 3.86 (.38) 3.82 (.40) 3.78 (.44)
Creates a safe learning environment 4.00 (.00)
Clarifies roles and expectations for the debriefing 3.89 (.33)
3. Talking about defusing (releasing emotion/de-roling)
Avoids judgment statements regarding learner performance 3.86 (.38) 3.64 (.92) 3.89 (.33)
Makes sure all learners have an opportunity to discuss emotions
Changed to: Engages all learners in discussion of emotions and feelings regarding events 4.00 (.00) 3.64 (.67) 3.44 (1.13)
Validates emotions as real and acceptable 4.00 (.00) 4.00 (.00) 4.00 (.00)
Places debriefing topics from learners in “parking lot” until transition made
Changed to: Appropriately tables untimely topics and returns to them for a later discussion 3.57 (.53) 3.64 (.92) 3.56 (1.01)
4. Recapping the simulation experience
Recaps events factually and in an orderly manner 4.00 (.00) 3.55 (.93) 3.56 (1.03)
Identifies points for discussion 4.00 (.00) 3.82 (.60) 3.89 (.33)
Uses simulation objectives or developed evaluation tool as discussion guide
Changed to: Uses simulation objectives and/or learner assessment rubric as a discussion guide 3.86 (.38) 3.45 (.93) 3.33 (1.12)
Facilitates learner reflection to recap events 4.00 (.00) 4.00 (.00) 4.00 (.00)
Allows peer feedback to clarify events that happened in the simulation experience (obtains member agreement) 3.78 (.67)
5. Reflecting on action (analysis)
Uses innovative techniques and prompts (video, learner questions, SP feedback) to facilitate learner’s reflection on reasons for their behavior 3.86 (.38) 3.45 (.69) 2.89 (1.17)
Assists learners with assimilating new with previous knowledge 4.00 (.00) 4.00 (.00) 3.78 (.44)
Promotes peer-to-peer interaction and feedback 3.86 (.38) 3.73 (.65) 3.56 (.88)
Focuses on key elements related to learning objectives 3.86 (.38) 3.73 (.90) 3.67 (1.00)
Enables learners to self-identify strengths and areas for growth 3.86 (.38) 3.45 (.69) 4.00 (.00)
6. Facilitating learner’s connection of knowledge to practice
Conduct learner-centric “needs assessment” 3.86 (.38) 3.27 (1.01) Removed
Integrates simulation experience with clinical practice 4.00 (.00) 4.00 (.00) 3.89 (.33)
Encourages learners to reflect upon application of new information for future clinical practice 4.00 (.00) 3.91 (.30) 4.00 (.00)
Reviews all items that arose during discussion to ensure all questions/concerns of learners are addressed during debriefing process
Changed to: Reviews key items identified during discussion to ensure questions/concerns of learners are addressed during debriefing process 3.86 (.38) 3.55 (.69) 3.78 (.67)
Encourages learners to integrate prior knowledge and apply new information gained in simulation experience 4.00 (.00)
7. Summarizing: providing key take away points for the learner
Encourages learner to restate the learning objectives and the lessons learned 3.43 (.79) 3.18 (1.08) 3.33 (1.00)
Ensures all questions/concerns of learners are addressed during the debriefing process 3.71 (.49) 3.73 (.65) 3.78 (.67)
Provides key take away points for the learner 4.00 (.00) 3.60 (.70) Removed
Provides information for any further simulation reflection assignments 3.57 (.79) 3.55 (.52) 3.11 (1.17)
Offers learners remediation/additional experience if appropriate 4.00 (.00) 3.70 (.95) 3.56 (1.01)
Encourages ongoing self-reflection of learner’s individual performance 3.82 (.60) 3.89 (.33)
8. Time management before, during, and after simulation
Appropriately sets up the debriefing environment before the simulation 4.00 (.00) 3.73 (.65) 3.33 (1.00)
Starts simulation on time
Changed to: Adheres to the schedule for debriefing or adjusts the schedule as appropriate with group buy-in 3.86 (.38) 3.18 (.87) 3.00 (1.00)
Appropriately stops simulation 3.86 (.38) Removed Removed
Allows enough time for de-roling
Changed to: Allows enough time for dealing with the emotional aspects of the simulation 3.86 (.38) 3.45 (.69) 3.67 (1.00)
Allows enough time for recap 3.86 (.38) 3.64 (.50) 3.56 (1.00)
Allows enough time for analysis 4.00 (.00) 3.82 (.40) 3.67 (1.00)
Allows enough time for learners to connect knowledge to practice 3.86 (.38) 3.91 (.30) 3.67 (1.00)
Allows enough time for summary 4.00 (.00) 3.27 (1.01) 3.33 (1.00)
Finishes any evaluative paperwork and put it in correct place
Changed to: Finishes any evaluative paperwork and forwards to appropriate parties 3.57 (.79) 3.00 (1.10) 3.11 (1.05)

Data given as mean (SD). Participants were asked to rate each objective under each of the eight headings on a scale of 1–4, from not important to important. Wording preceded by “Changed to:” shows the changes that were made to the initial instrument via the Delphi process and represents the final instrument.

Feedback on Round I Delphi Panel

Seven participants responded to the first round and provided input on both the structure and the content of the assessment tool. Upon completion of Round I, changes were made to the structure of the tool and to the language within the questions and response scales. Specifically, language in the instrument was standardized to “debriefer” rather than “facilitator,” terms were clarified, exemplar behaviors were added to the post-simulation questions, and additional explanation was provided on how the post-simulation tool would be used. Two changes were made to the response scales: 1) the definitions of the “not familiar” to “very familiar” scale were revised, and 2) the “instructor-centric” through “learner-centric” spectrum was decreased from four to three options and the definitions for the new categories were revised. Finally, the items on which the debriefer wanted feedback with respect to the debriefing were revised in both the pre-simulation checklist and the post-debriefing assessment section of the assessment tool.

Feedback on Round II Delphi Panel

In this round, 11 participants responded to questions about the content and structure of the assessment tool. Three of these participants had provided feedback in Round I. Evaluation of panel feedback revealed that additional clarification of language was required in a number of areas. The revisions included a final revision of the “not familiar” to “very familiar” response scale, the addition of operational definitions for high-fidelity and low-fidelity simulations, more directive language in the instructions to the debriefer and the evaluator, removal of redundant sample behaviors from the post-simulation component of the assessment tool, and the addition of objective criteria in the form of percentiles to the “instructor-centric” through “learner-centric” response spectrum.

Feedback on Round III Delphi Panel

In this last round, consensus of >80% was achieved for both structural and content elements of the assessment tool. Nine participants completed this round; eight of the nine had also completed Round II. Because we achieved the 80% level of agreement established a priori, we determined that the assessment tool was ready for pilot testing in the educational setting. This pilot testing will be described in a future publication.

Phase 2

In Phase 2, the peer-debriefing instrument was tested to establish its reliability and validity. The inter-rater reliability for the average measures was very strong, ICC = 0.973, and for the single measure was strong, ICC = 0.818.
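A minimal sketch of this type of calculation is shown below, assuming the two-way random-effects form of the Shrout and Fleiss ICC33 computed from a subjects-by-raters matrix; the specific ICC form and the example ratings are assumptions, not values reported in this study.

```python
import numpy as np

def icc_two_way(scores: np.ndarray):
    """Shrout & Fleiss ICC(2,1) and ICC(2,k) from an n-subjects x k-raters matrix.

    The two-way random-effects model is assumed here; the study does not
    specify which ICC form was used.
    """
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)
    col_means = scores.mean(axis=0)
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)      # between subjects
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)      # between raters
    resid = scores - row_means[:, None] - col_means[None, :] + grand
    mse = np.sum(resid ** 2) / ((n - 1) * (k - 1))            # residual error
    icc_single = (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
    icc_average = (msr - mse) / (msr + (msc - mse) / n)
    return icc_single, icc_average

# Hypothetical ratings: rows = rated items, columns = the 5 expert raters
scores = np.array([[3, 3, 4, 3, 3],
                   [2, 2, 2, 3, 2],
                   [4, 4, 4, 4, 3],
                   [1, 2, 1, 1, 1],
                   [3, 4, 4, 3, 4]], dtype=float)
single, average = icc_two_way(scores)
print(f"ICC single = {single:.3f}, ICC average = {average:.3f}")
```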

Discussion

This paper describes the rigorous Delphi process used to develop a self- and peer-review assessment of debriefer effectiveness. Through this process, we developed and tested the reliability and validity of the Peer Assessment of Debriefing Instrument (PADI), which guides novice, experienced, and expert debriefers through the debriefing process using a traditional peer-review framework. The PADI expands on currently available tools by grounding the evaluation of the debriefer in evidence-based peer-assessment practices. Specifically, use of the PADI takes an individual through a process of:

  1. Performing a self-assessment that identifies not only the intended objectives of the debriefing, but also the elements of the debriefing on which the debriefer would like to receive explicit feedback.

  2. Participating in a process of self- and peer-assessment that includes observation by a reviewer, followed by a conversation between the debriefer and reviewer about the observed activities. This allows the reviewer to serve as a consultant who supports the debriefer’s professional development.

Faculty can use triangulation of their intended performance and outcomes with the post-debriefing self- and peer-assessment to demonstrate effectiveness and/or excellence. Eliciting feedback from the students’ perspective on the debriefer’s attainment of the intended objectives can also provide an additional source of valuable data. This process allows faculty to demonstrate ongoing quality improvement, regardless of their level of experience, as a component of faculty development and evaluation of their teaching.

The PADI is a relatively short scale, organized into eight subsections that correspond to the aspects of the debriefing process identified in the current literature and through the Delphi process. While debriefing can be subjective, the items of the instrument are scored on a Likert-type scale, with each score pertaining to a percentage completed within each subsection:

  1. structure and organization of the debriefing,

  2. verbal and non-verbal communication,

  3. setting the stage and ground rules for the debriefing session,

  4. talking about defusing (dealing with the emotional aspects of simulation),

  5. recapping the simulation experience,

  6. reflecting on action (facilitating learner’s self-reflection),

  7. facilitating learner’s connection of simulation experience to clinical practice, and

  8. summarizing: providing key take away points for the learner.

The PADI has some noteworthy advantages. Results of this project suggest that it has excellent inter-rater reliability and is a valid instrument for providing a peer review of the debriefing process across healthcare disciplines. The potential utility of this debriefing peer-review instrument is substantial. We project that learning how to rate a debriefer using this instrument can be accomplished in a relatively short time, approximately 1 to 2 hours. Use of the PADI in a peer-review process also takes a relatively short amount of time: approximately 20 to 30 minutes for the Pre-Assessment of the Simulation Experience and approximately 60 minutes for the evaluator and debriefer to review the Debriefing Evaluation (Self and Peer Assessment) following the observed activities.

The PADI may be useful in debriefings for single discipline and interprofessional simulation experiences in both prelicensure and clinical settings. The tool is innovative in that it uses peer-review methodology to enhance and support faculty development. The PADI can be used as part of a comprehensive faculty review of teaching and development process. We have conducted pilot testing of the PADI to identify its utility in a variety of settings and plan to publish the results of this pilot testing in a future article.

Conclusion

Contemporary research on clinical simulation has recognized the importance of debriefing and its value in facilitating learning after a simulation experience. The complexity and subjectivity of the debriefing process and the need to educate and assess debriefers were driving forces in blending evidence-based peer-assessment practices with assessment of the debriefer. Use of this instrument provides both novice and experienced debriefers with an objective and formative means of performing self-assessment and receiving peer feedback on a debriefing experience.

Acknowledgments

Funding by the Delaware Health Sciences Alliance.

The authors acknowledge the students who helped us with this study: Pamela Keats, MS, OTR/L, Catherine Misczak, OTS, and Daniela Mead from Thomas Jefferson University; and Alissa Elliott, BSN, and Lindsey Hertsenberg, BSN, from the University of Delaware.

Footnotes

The authors declare no conflicts of interest.

Appendix 1 for this paper is available at www.ingentaconnect.com/content/asahp/jah, see vol. 45, no. 3, Fall 2016 issue.

Contributor Information

Dr. Jennifer L. Saylor, Assistant Professor, School of Nursing, University of Delaware, Newark, DE.

Dr. Susan Flannery Wainwright, Associate Professor and Chair, Department of Physical Therapy, Jefferson School of Health Professions, Thomas Jefferson University, Philadelphia, PA.

Dr. E. Adel Herge, Associate Professor, Director of Combined BSMSOT Program, and Director of Health Professions Simulation, Thomas Jefferson University, Philadelphia, PA.

Dr. Ryan T. Pohlig, Biostatistician, Biostatistics Core Facility, University of Delaware, Newark DE.

References

  1. Galloway S. Simulation techniques to bridge the gap between novice and competent healthcare professionals. Online J Iss Nurs. 2009;14(2):3.
  2. Herge EA, Lorch A, DeAngelis T, et al. The standardized patient encounter. J Allied Health. 2013;42(4):229–235.
  3. Rhodes M, Curran C. Use of the human patient simulator to teach clinical judgment skills in a baccalaureate nursing program. Comput Inform Nurs. 2005;23(5):256–262. doi: 10.1097/00024665-200509000-00009.
  4. Arafeh JMR, Hansen SS, Nichols A. Debriefing in simulated-based learning: facilitating a reflective discussion. J Perinat Neonatal Nurs. 2010;24(4):302–309. doi: 10.1097/JPN.0b013e3181f6b5ec.
  5. Miller KK, Riley W, Davis S, Hansen HE. In situ simulation: a method of experiential learning to promote safety and team behavior. J Perinat Neonatal Nurs. 2008;22(2):105–113. doi: 10.1097/01.JPN.0000319096.97790.f7.
  6. Morgan PJ, Tarshis J, LeBlanc V, et al. Efficacy of high-fidelity simulation debriefing on the performance of practicing anesthetists in simulated scenarios. Br J Anaesth. 2009;103(4):531–537. doi: 10.1093/bja/aep222.
  7. Warrick D, Hunsaken PL, Cook CW, Altman S. Debriefing experiential learning exercises. J Exp Learn Simul. 1979;1:91–100.
  8. Zigmont JJ, Kappus LJ, Sudikoff SN. The 3D model of debriefing: defusing, discovering, and deepening. Semin Perinatol. 2011;35:52–58. doi: 10.1053/j.semperi.2011.01.003.
  9. Dreifuerst K. The essentials of debriefing in simulation learning: a concept analysis. Nurs Educ Perspect. 2009;30(2):109–114.
  10. INACSL Board of Directors. Standard VI: the debriefing process. Clin Simul Nurs. 2011;7(4 suppl):S16–S17.
  11. Fanning RM, Gaba DM. The role of debriefing in simulation-based learning. Simul Healthc. 2007;2(2):115–125. doi: 10.1097/SIH.0b013e3180315539.
  12. Parker B, Myrick F. Transformative learning as a context for the human patient simulation. J Nurs Educ. 2010;49(6):326–332. doi: 10.3928/01484834-20100224-02.
  13. Fatmi M, Hartling L, Hillier T, et al. The effectiveness of team-based learning on learning outcomes in health profession education: BEME guide no. 30. Med Teach. 2013;35(12):e1608–24. doi: 10.3109/0142159X.2013.849802.
  14. McLaughlin JE, Roth MT, Glatt DM, et al. The flipped classroom: a course redesign to foster learning and engagement in a health professions school. Acad Med. 2014;89(2):236–243. doi: 10.1097/ACM.0000000000000086.
  15. Wood EJ. Problem-based learning: exploiting knowledge of how people learn to promote effective learning. Biosci Educ. 2004;(3):article 5.
  16. Baldwin C, Chandran L, Gusic M. Guidelines for evaluating the education performance of medical school faculty: priming a national conversation. Teach Learn Med. 2011;23(3):285–297. doi: 10.1080/10401334.2011.586936.
  17. Steinert Y, Mann K, Centeno A, et al. A systematic review of faculty development initiatives designed to improve teaching effectiveness in medical education: BEME guide no. 8. Med Teach. 2006;28(6):497–526. doi: 10.1080/01421590600902976.
  18. Dismukes RK, Gaba DM, Howard SK. So many roads: facilitated debriefing in healthcare. Simul Healthc. 2006;1(1):23–25. doi: 10.1097/01266021-200600110-00001.
  19. Simon R, Rudolph JW, Raemer DB. Debriefing Assessment for Simulation in Healthcare. Cambridge, MA; 2009. Available from: https://harvardmedsim.org/debriefing-assesment-simulation-healthcare.php.
  20. Brett-Fleegler M, Rudolph J, Eppich W, et al. Debriefing assessment for simulation in healthcare. Simul Healthc. 2012;7(5):288–294. doi: 10.1097/SIH.0b013e3182620228.
  21. Arora S, Ahmed M, Paige J, et al. Objective structured assessment of debriefing. Ann Surg. 2012;256:982–988. doi: 10.1097/SLA.0b013e3182610c91.
  22. Kolbe M, Weiss M, Grote G, et al. TeamGAINS: a tool for structured debriefings for simulation-based team trainings. BMJ Qual Saf. 2013;22:541–553. doi: 10.1136/bmjqs-2012-000917.
  23. Paulsen MB. Evaluating teaching performance. New Dir Inst Res. 2002;114:15–18.
  24. Vernon W. The Delphi technique: a review. Int J Ther Rehabil. 2009;16(2):69–75.
  25. Powell C. The Delphi technique: myths and realities. J Adv Nurs. 2003;41(4):376–382. doi: 10.1046/j.1365-2648.2003.02537.x.
  26. Polit DF, Beck CT. Nursing Research: Generating and Assessing Evidence for Nursing Practice. Philadelphia, PA: Wolters Kluwer Health/LWW; 2008.
  27. Stefanovich A, Williams C, McKee P, et al. Development and validation of tools for evaluation of orthosis fabrication. Am J Occup Ther. 2012;66:739–746. doi: 10.5014/ajot.2012.005553.
  28. McGinnis PQ, Wainwright SF, Hack L, et al. Use of a Delphi process to establish consensus for recommended uses of selected balance assessment approaches. Physiother Theory Pract. 2010;26(3):1–16. doi: 10.3109/09593980903219050.
  29. Blauvelt MJ, Erickson CL, Davenport NC, Spath ML. Say yes to peer review. Nurse Educ. 2012;37(3):126–130. doi: 10.1097/NNE.0b013e318250419f.
  30. Gusic M, Hageman H, Zenni E. Peer review: a tool to enhance clinical teaching. Clin Teach. 2013;10:287–290. doi: 10.1111/tct.12039.
  31. DiVall M, Barr J, Gonyeau M, et al. Follow-up assessment of a faculty peer observation and evaluation program. Am J Pharm Educ. 2012;76(4):61. doi: 10.5688/ajpe76461.
  32. Wellein MG, Ragucci KR, Lapointe M. A peer review process for classroom teaching. Am J Pharm Educ. 2009;73(5):79. doi: 10.5688/aj730579.
  33. Shrout PE, Fleiss JL. Intraclass correlations: uses in assessing rater reliability. Psychol Bull. 1979;86:420–428. doi: 10.1037//0033-2909.86.2.420.
