Author manuscript; available in PMC: 2012 Apr 10.
Published in final edited form as: Am J Surg. 2012 Jan;203(1):49–53. doi: 10.1016/j.amjsurg.2011.05.008

Research Priorities in Surgical Simulation for the 21st Century

Dimitrios Stefanidis, Sonal Arora, David M Parrack, Giselle G Hamad, Jeannette Capella, Teodor Grantcharov, David R Urbach, Daniel J Scott, Daniel B Jones; the Association for Surgical Education Simulation Committee
PMCID: PMC3322506  NIHMSID: NIHMS363857  PMID: 22172482

Abstract

Background

Despite tremendous growth, research in surgical simulation remains uncoordinated and unfocused. The objective of this study was to develop research priorities for surgical simulation.

Methods

Using a systematic methodology (Delphi), members of the Association for Surgical Education submitted up to 5 research questions each on surgical simulation. An expert review panel categorized and collapsed the submitted questions and redistributed them to the membership to be ranked on a priority scale from 1 (lowest) to 5 (highest). The results were analyzed and categorized by consensus into distinct topics.

Results

Sixty members submitted 226 research questions that were reduced to 74. Final ratings ranged from 2.19 to 4.78. Topics included simulation effectiveness and outcomes, performance assessment and credentialing, curriculum development, team training and non-technical skills, simulation center resources and personnel, simulator validation, and other topics. The highest ranked question was “Does simulation training lead to improved quality of patient care, patient outcomes and safety?”

Conclusions

Research priorities for surgical simulation were developed using a systematic methodology and can be used to focus surgical simulation research in areas most likely to advance the field.

Keywords: surgical simulation, research priorities, research agenda, Delphi methodology

Background

Evidence for the educational value of surgical simulators is accumulating rapidly.1, 2 Several studies have provided verification of simulator validity and have demonstrated the transfer of simulator-acquired skill to the operating room.2, 3 Nationally, surgical societies have embraced the concept of acquiring basic surgical skills outside the operating room and have developed simulator-based skills curricula for surgery residents.4 On an international level, simulation is expected to play an important role in the American College of Surgeons’ accredited Educational Institutes for the training and assessment of surgeons and residents as well as in their maintenance of certification.5

The introduction of simulators into the training paradigm of surgeons has also generated significant research; a Medline search reveals a rapidly increasing number of publications in surgical simulation, from 22 articles between 1980 and 1985 to almost 3,000 between 2006 and 2009.6 Despite this exponential growth of simulation-based research over the past 20 years, many research questions remain unanswered.7 Importantly, many research efforts remain uncoordinated, with no clear common goal or direction to advance the field.

In contrast, the systematic development of research agendas has frequently been used to guide research in various disciplines, including intensive care,8 cancer research,9 and medical education.10 Within surgery, national societies such as the American Society of Colon and Rectal Surgeons and the Society of American Gastrointestinal and Endoscopic Surgeons (SAGES) have also achieved consensus to develop research agendas in their respective fields.11,12

However, research priorities for surgical simulation, although important and needed, have not yet been defined. This gap is particularly pressing given current economic constraints that have curtailed funding opportunities for such research. The aim of this study was therefore to create a research agenda for surgical simulation using a systematic approach.

Methods

Research Design

To create the research agenda, a modified Delphi process was used. The Delphi methodology refers to a systematic process of consulting, collecting, evaluating, and tabulating expert opinion on a specific topic without bringing the experts together.13 A particular benefit of this approach is that it samples the opinion of a group of experts without allowing unduly influential individuals to dominate, and it uses controlled feedback and modification to drive findings toward a group consensus. Consensus achieved in this manner has been used in various health research applications, such as determining the appropriateness and necessity of procedures,14, 15 forecasting health technologies,16 and developing health research agendas.8, 9, 11, 12, 17

The principal features of a Delphi process include (a) anonymity (through the use of anonymous, self-administered questionnaires); (b) iteration (through completion of questionnaires over a series of “rounds”); (c) controlled feedback; and (d) statistical aggregation of the group response. Typically, this takes place over three stages (rounds): selection of the “expert” panel; submission, assessment, and feedback of the Delphi questionnaires; and final analysis and conclusions.
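For readers who prefer a schematic view, these four features compose into a simple iterated loop. The following Python sketch is purely illustrative; the study itself used e-mailed web surveys, not software, and the `collect` callable and simulated ratings below are hypothetical stand-ins:

```python
from statistics import mean, stdev

def summarize(responses):
    """(d) Statistical aggregation: mean and SD of the 1-5 ratings per question."""
    return {q: (mean(r), stdev(r)) for q, r in responses.items()}

def delphi(collect, questions, n_rounds=3):
    """(b) Iterate over rounds; 'collect' is a hypothetical callable that
    administers the questionnaires and returns ratings per question."""
    feedback = None
    for _ in range(n_rounds):
        responses = collect(questions, feedback)  # (a) anonymous, self-administered
        feedback = summarize(responses)           # (c) controlled feedback to next round
    return feedback

# Example with simulated raters:
# stats = delphi(lambda qs, fb: {q: [4, 5, 3, 4] for q in qs}, ["Q1", "Q2"])
```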

Participants

An expert review panel, composed of six members of the Association for Surgical Education (ASE) simulation committee with expertise in simulation, was created and led by the first author for the duration of the project. First, the panel developed the round 1 survey, which solicited up to five important and answerable research questions from members of the ASE and its simulation committee (experts in simulation and/or educational research) using a web-based survey (Survey Monkey™). Participants were asked to specify their target population, the intervention or comparison of interest, and the outcome to be measured, in accordance with PICO guidelines for research questions.18 This survey (and all subsequent rounds) was distributed via e-mail to the ASE membership, identified using the ASE listserv, and was kept open for 1 month. Two e-mail reminders were sent 10 and 20 days after initial distribution.

Data Collection/Study Procedure

Figure 1 depicts the study procedure employed. Initially, the expert panel reviewed the questions submitted by the participants, removed duplicates, and categorized the rest into common themes, rephrasing or combining questions to better reflect the predominant theme of each suggestion. The individual questions and themes were independently assessed by all 6 members of the expert panel, and discrepancies in views were addressed until consensus was achieved. After this first round, the remaining questions, now unique, clear, and concise, were distributed to the original responders via a second survey (round 2). Participants were asked to rate the importance of each question for the field of surgical simulation on a 5-point Likert scale (1 = not at all important, 5 = extremely important).

Figure 1. Study algorithm.

After completion of round two, the mean and standard deviation (SD) of the ratings received for each question were calculated and added to the survey for round three. In the third round, participants were asked to re-rate the importance of these questions, taking into consideration the round two ratings. The third-round ratings were then analyzed, and the questions were ranked in order of importance based on these ratings.
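As a concrete illustration of this feedback and re-ranking step, the hypothetical Python sketch below attaches each question's round two mean and SD to its round three survey item and then ranks questions by round three mean, with lower SD breaking ties as in the top 10 list assembled in the Results. The data structures are stand-ins, not study materials:

```python
from statistics import mean, stdev

def with_feedback(round2_ratings):
    """Append each question's round 2 mean ± SD to its round 3 survey item.
    round2_ratings maps question text -> list of 1-5 Likert scores."""
    return [f"{q}  [round 2: {mean(r):.2f} ± {stdev(r):.2f}]"
            for q, r in round2_ratings.items()]

def rank_questions(round3_ratings):
    """Rank by round 3 mean (highest first); lower SD breaks ties."""
    return sorted(round3_ratings,
                  key=lambda q: (-mean(round3_ratings[q]), stdev(round3_ratings[q])))
```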

Data Analysis

Descriptive statistics (mean and SD) were used to analyze the variables and to rank the importance of the research questions in rounds 2 and 3. To assess the level of agreement between the two rounds, the average score (mean) and the distribution of scores (SD) were compared using a paired t-test. The highest ranked questions were identified to create a ‘top 10’ list of research questions important for surgical simulation. In addition, the 74 questions were grouped into common categories using emergent theme analysis based on standard qualitative methodology.19 Questions within each category were also rank-ordered to identify the five most important questions per category. The response rate of each survey was calculated by dividing the number of responders by the total number of members who received that survey; the denominator differed for the first survey compared with the second and third, as the latter two were sent only to round one responders.
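A minimal sketch of this analysis is shown below, assuming SciPy for the paired t-test; the paper does not name its statistical software, and the inputs here are hypothetical:

```python
from scipy.stats import ttest_rel

def rounds_agree(round2_means, round3_means, alpha=0.05):
    """Paired t-test on per-question mean ratings from rounds 2 and 3;
    a non-significant result suggests stable perceived importance."""
    result = ttest_rel(round2_means, round3_means)
    return result.pvalue >= alpha

def response_rate(n_responders, n_received):
    """Responders divided by members who received the survey, as a percentage.
    The denominator shrinks after round 1, since rounds 2 and 3 went only
    to round 1 responders."""
    return 100.0 * n_responders / n_received
```

For instance, 55 responses from the 60 round one responders (a hypothetical back-calculation) would give response_rate(55, 60) ≈ 92%, consistent with the round 2 rate reported in the Results.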

Results

Two hundred twenty-six research questions were submitted by 60 ASE members during round 1 of the study. Using a process of content analysis and consensus agreement, the review panel reduced these to 74 unique questions in surgical simulation. Mean round 2 ratings of these 74 questions were 3.58±0.98 (range, 2.39–4.65) and round 3 ratings were 3.53±0.96 (range, 2.19–4.78) on the 5-point Likert scale. No significant difference in means was observed for most of the research questions, indicating no significant change in their perceived importance across the rounds.

Based on the highest means (and lowest SDs), the top 10 research questions are listed in Table 1. In addition, seven common research question categories were identified: simulation effectiveness and outcomes, performance assessment and credentialing, skills curriculum development, team training and non-technical skills, simulator validation, resources and personnel, and other questions. The five top-ranked research questions in each category are listed in Tables 2A–G. Response rates were 40% for the round 1 survey, 92% for round 2, and 60% for round 3.

Table 1.

Top 10 research questions in surgical simulation

| Research question | Round 3 rating (mean±SD) | Round 2 rating (mean±SD) |
| --- | --- | --- |
| Does simulator training lead to improved patient outcomes, safety, and quality of care? | 4.8±0.4 | 4.6±0.7 |
| Does training on simulators transfer to improved clinical performance? | 4.5±0.8 | 4.4±0.8 |
| Does documented simulator competence equal clinical competence? | 4.4±0.6 | 4.3±0.8 |
| What are the best methods/metrics to assess technical and nontechnical performance on simulators and in the OR? | 4.3±0.7 | 4.1±0.8 |
| What are the performance criteria that surgical trainees need to achieve to be competent based on their training level (on a national level)? | 4.3±0.7 | 4.3±0.8 |
| What is the role of simulation for the certification of residents and practicing surgeons? | 4.2±0.9 | 4.1±0.9 |
| How can we use simulation to teach and assess judgment and decision making in routine and crisis situations? | 4.2±0.9 | 4.0±0.8 |
| What type and method of feedback is most effective to improve performance on simulators? | 4.1±1.0 | 4.0±0.8 |
| How should a simulator curriculum be designed and evaluated? | 4.1±1.2 | 4.1±1.0 |
| How do we train and assess teams effectively using simulation? | 4.1±1.0 | 3.9±1.0 |

Table 2A.

Questions related to simulation effectiveness/outcomes

| # | Research question | Round 3 rating (mean±SD) | Round 2 rating (mean±SD) |
| --- | --- | --- | --- |
| 1 | Does simulator training lead to improved patient outcomes, safety, and quality of care? | 4.8±0.4 | 4.6±0.7 |
| 2 | Does training on simulators transfer to improved clinical performance? | 4.5±0.8 | 4.4±0.8 |
| 3 | Does documented simulator competence equal clinical competence? | 4.4±0.6 | 4.3±0.8 |
| 4 | How do we augment the transfer of simulator-acquired skill? | 3.7±0.8 | 3.5±1.0 |
| 5 | What are the differences in effectiveness and cost efficiency between virtual reality and realistic simulators? | 3.1±1.1 | 3.3±0.9 |

Table 2G.

Other questions

| # | Research question | Round 3 rating (mean±SD) | Round 2 rating (mean±SD) |
| --- | --- | --- | --- |
| 1 | What type and method of feedback is most effective to improve performance on simulators? | 4.1±1.0 | 4.0±0.8 |
| 2 | What is the ideal balance between simulator and clinical training? | 3.6±0.9 | 3.8±1.0 |
| 3 | What is the optimal amount and frequency of simulation training for each task, and how do we identify when learning is complete? | 3.6±0.9 | 3.5±1.2 |
| 4 | What baseline learner characteristics/factors enhance or impede skill acquisition on simulators? | 3.5±1.0 | 3.5±1.0 |
| 5 | Does video review of one's own performance improve skill acquisition? | 3.5±1.1 | 3.6±1.0 |

Discussion

In this study, we surveyed the membership of the ASE using a systematic methodology to identify and rank the most important questions in surgical simulation. The highest ranked questions centered on patient outcomes and safety and on the effectiveness of simulation training, including skill transfer; the full set of priorities spanned skills curriculum development, performance assessment and credentialing, team training and non-technical skills, resources and personnel, and simulator validation.

Similar methodologies have been used to develop research agendas in other areas. SAGES, through its research committee, used the same methodology to create a research agenda for minimally invasive surgery that was published in 2008.11 This agenda is currently used by its grant-reviewing committee to assess the importance and priority of grants submitted to the organization for funding. Furthermore, identifying research agendas has led to important changes in clinical practice,20 as well as to the development of new clinical guidelines21 and funding schemes.9

The definition of priority research questions can be very valuable for researchers, industry, funding agencies, and the surgical community in general. Researchers may be able to focus their efforts on answering the most important questions first and to pursue collaborations and funding in common, relevant research areas. Additionally, editors and peer reviewers may benefit from a better understanding of the importance and impact of original research reports. Industry and funding organizations may benefit from a repository that singles out the most important issues in surgical simulation, helping them identify the most promising, relevant, and innovative proposals, which are the most likely to make important contributions to the field. Finally, and perhaps most importantly, surgical patients may benefit from the faster achievement of specific goals and objectives through the concentrated efforts of researchers and funding sources.

This study has several limitations. First, data were gathered from a group of self-selected experts in surgical simulation without any external objective evidence of their expertise, making it difficult to determine the extent to which the panel represented the population. This problem is, however, inherent to such research, as no data describe the population of simulation experts or, indeed, what qualifies one as an ‘expert in simulation’. Nevertheless, the breadth of the participants' work (in surgery, education, research, and engineering) and their years of experience suggest an experienced group. Second, there was a drop-out rate of 39% among survey participants, although this is well below that of most other Delphi studies. Furthermore, these priorities were determined by a group of surgical simulation experts without the inclusion of other professional groups (e.g., anesthesia), patients, or industry representatives. Future research should take these stakeholders into account to determine whether there are important differences in agendas. Finally, the existing literature on identifying research priorities suggests that setting such priorities is not a one-off exercise but a dynamic process dependent on socioeconomic challenges, cultural contexts, and local resources. Longitudinal and cross-sectional research would be needed to determine whether the same applies to surgical simulation.

Conclusion

In conclusion, a research agenda for surgical simulation was developed using a systematic methodology. This research agenda may be used by researchers and funding organizations to focus surgical simulation research in areas most likely to advance the field and by journals to appraise the relevance of scientific contributions. As such, use of this research agenda may contribute to achieving the holy grail of optimal learning and enhanced patient safety – the ultimate goal of surgical simulation.

Table 2B.

Questions related to team training and non-technical skills

| # | Research question | Round 3 rating (mean±SD) | Round 2 rating (mean±SD) |
| --- | --- | --- | --- |
| 1 | How can we use simulation to teach and assess judgment and decision making in routine and crisis situations? | 4.2±0.9 | 4.0±0.8 |
| 2 | How do we train and assess teams effectively using simulation? | 4.1±1.0 | 3.9±1.0 |
| 3 | What are the most important non-technical skills of a surgeon that influence patient outcomes? | 3.9±1.0 | 3.7±0.9 |
| 4 | Does mental skills training enhance surgical performance? | 3.5±1.0 | 3.3±1.0 |
| 5 | What is the effect of stress on performance? | 3.4±0.9 | 3.4±0.9 |

Table 2C.

Questions related to using simulators for performance assessment and credentialing

| # | Research question | Round 3 rating (mean±SD) | Round 2 rating (mean±SD) |
| --- | --- | --- | --- |
| 1 | What are the best methods/metrics to assess technical and nontechnical performance on simulators and in the OR? | 4.3±0.7 | 4.1±0.8 |
| 2 | What are the performance criteria that surgical trainees need to achieve to be competent based on their training level (on a national level)? | 4.3±0.7 | 4.3±0.8 |
| 3 | What is the role of simulation for the certification of residents and practicing surgeons? | 4.2±0.9 | 4.1±0.9 |
| 4 | What methods should be used to develop proficiency levels on simulators? | 4.1±0.7 | 3.7±0.9 |
| 5 | Can we define the difference between competence and expertise on simulators? | 3.8±0.9 | 3.5±1.0 |

Table 2D.

Questions related to skills curriculum

| # | Research question | Round 3 rating (mean±SD) | Round 2 rating (mean±SD) |
| --- | --- | --- | --- |
| 1 | How should a simulator curriculum be designed and evaluated? | 4.1±1.2 | 4.1±1.0 |
| 2 | What is the role of simulation in a comprehensive training and assessment curriculum for practicing surgeons learning new procedures? | 3.9±0.9 | 3.9±0.8 |
| 3 | How can simulation training (including the ACS/APDS curriculum) be best implemented into the resident curriculum? | 3.9±0.9 | 3.8±0.9 |
| 4 | What is the retention of simulator-acquired skill? | 3.9±0.8 | 3.9±0.8 |
| 5 | Which are the most important skills that should be taught using simulators, and when should they be introduced into the medical student and resident curriculum? | 3.8±0.7 | 3.8±1.1 |

Table 2E.

Questions related to simulator/simulation validity

| # | Research question | Round 3 rating (mean±SD) | Round 2 rating (mean±SD) |
| --- | --- | --- | --- |
| 1 | What is the most effective means for validating simulators? | 3.6±1.0 | 3.7±1.0 |
| 2 | What types of validity should be necessary before a simulator is used for training and assessment? | 3.4±0.9 | 3.7±0.9 |
| 3 | Does haptic feedback improve skill acquisition on simulators? | 3.2±1.0 | 3.3±1.1 |
| 4 | What level of fidelity is most appropriate for different learners and different tasks? | 3.1±0.7 | 3.3±1.0 |
| 5 | What are the relative advantages/disadvantages of simulator, animal, and cadaver training models? | 2.8±0.9 | 3.3±1.0 |

Table 2F.

Questions related to resources/personnel

| # | Research question | Round 3 rating (mean±SD) | Round 2 rating (mean±SD) |
| --- | --- | --- | --- |
| 1 | Is the cost of implementing simulator training offset by reduced complications? | 3.9±0.9 | 3.8±1.2 |
| 2 | Is there a cost benefit of using simulators for training compared with traditional teaching modalities? | 3.6±1.0 | 4.0±1.0 |
| 3 | What are the challenges and motivating factors for faculty and resident participation in skills lab training, and how can participation be encouraged? | 3.5±1.1 | 3.5±1.1 |
| 4 | What are the best methods for training the simulation trainers, and what should the requirements be? | 3.2±1.0 | 3.4±1.0 |
| 5 | What is the comparative effectiveness of nonsurgical personnel vs. surgeons in teaching and assessing residents? | 2.9±1.0 | 3.4±1.0 |

Acknowledgments

Financial Support: No financial support was provided for this study.

Footnotes

Conflicts of Interest: Drs. Stefanidis, Arora, Parrack, Hamad, Capella, Grantcharov, Urbach, Scott, and Jones have no conflicts of interest to disclose.

Data from this study were presented at the Association for Surgical Education meeting in March 2011.

Contributor Information

Sonal Arora, Imperial College London, London, UK.

David M. Parrack, Midwestern University Arizona College of Osteopathic Medicine, Glendale, AZ, USA.

Giselle G. Hamad, University of Pittsburgh Medical Center, Pittsburgh, PA, USA.

Jeannette Capella, Virginia Tech Carilion School of Medicine, Roanoke, VA, USA.

Teodor Grantcharov, University of Toronto, Toronto, ON, Canada.

David R. Urbach, University of Toronto, Toronto, ON, Canada.

Daniel J. Scott, University of Texas Southwestern, Dallas, TX, USA.

Daniel B. Jones, Beth Israel Deaconess Medical Center, Boston, MA, USA.

References

1. Fried GM, Feldman LS, Vassiliou MC, et al. Proving the value of simulation in laparoscopic surgery. Ann Surg. 2004;240:518–25; discussion 525–8. doi: 10.1097/01.sla.0000136941.46529.56.
2. Ahlberg G, Enochsson L, Gallagher AG, et al. Proficiency-based virtual reality training significantly reduces the error rate for residents during their first 10 laparoscopic cholecystectomies. Am J Surg. 2007;193:797–804. doi: 10.1016/j.amjsurg.2006.06.050.
3. Scott DJ, Bergen PC, Rege RV, et al. Laparoscopic training on bench models: better and more cost effective than operating room experience? J Am Coll Surg. 2000;191:272–83. doi: 10.1016/s1072-7515(00)00339-2.
4. Scott DJ, Dunnington GL. The new ACS/APDS Skills Curriculum: moving the learning curve out of the operating room. J Gastrointest Surg. 2008;12:213–21. doi: 10.1007/s11605-007-0357-y.
5. Sachdeva AK, Pellegrini CA, Johnson KA. Support for simulation-based surgical education through American College of Surgeons–accredited education institutes. World J Surg. 2008;32:196–207. doi: 10.1007/s00268-007-9306-x.
6. Tsuda S, Scott DJ, Jones DB. Textbook of Simulation: Skills and Team Training. Woodbury, CT: Cine-Med; 2011.
7. Stefanidis D. Optimal acquisition and assessment of proficiency on simulators in surgery. Surg Clin North Am. 2010;90:475–89. doi: 10.1016/j.suc.2010.02.010.
8. Goldfrad C, Vella K, Bion JF, et al. Research priorities in critical care medicine in the UK. Intensive Care Med. 2000;26:1480–8. doi: 10.1007/s001340000628.
9. Robotin MC, Jones SC, Biankin AV, et al. Defining research priorities for pancreatic cancer in Australia: results of a consensus development process. Cancer Causes Control. 2010;21:729–36. doi: 10.1007/s10552-010-9501-1.
10. Kilian BJ, Binder LS, Marsden J. The emergency physician and knowledge transfer: continuing medical education, continuing professional development, and self-improvement. Acad Emerg Med. 2007;14:1003–7. doi: 10.1197/j.aem.2007.07.008.
11. Urbach DR, Horvath KD, Baxter NN, et al. A research agenda for gastrointestinal and endoscopic surgery. Surg Endosc. 2007;21:1518–25. doi: 10.1007/s00464-006-9141-4.
12. Burt CG, Cima RR, Koltun WA, et al. Developing a research agenda for the American Society of Colon and Rectal Surgeons: results of a Delphi approach. Dis Colon Rectum. 2009;52:898–905. doi: 10.1007/DCR.0b013e3181a0b358.
13. Dalkey NC. The Delphi Method: An Experimental Study of Group Opinion. Santa Monica, CA: RAND Corporation; 1969.
14. Park RE, Fink A, Brook RH, et al. Physician ratings of appropriate indications for six medical and surgical procedures. Am J Public Health. 1986;76:766–72. doi: 10.2105/ajph.76.7.766.
15. Seematter-Bagnoud L, Vader JP, Wietlisbach V, et al. Overuse and underuse of diagnostic upper gastrointestinal endoscopy in various clinical settings. Int J Qual Health Care. 1999;11:301–8. doi: 10.1093/intqhc/11.4.301.
16. Daar AS, Thorsteinsdottir H, Martin DK, et al. Top ten biotechnologies for improving health in developing countries. Nat Genet. 2002;32:229–32. doi: 10.1038/ng1002-229.
17. Nathens AB, Rivara FP, Jurkovich GJ, et al. Management of the injured patient: identification of research topics for systematic review using the Delphi technique. J Trauma. 2003;54:595–601. doi: 10.1097/01.TA.0000028044.43091.74.
18. Richardson WS, Wilson MC, Nishikawa J, Hayward RS. The well-built clinical question: a key to evidence-based decisions. ACP J Club. 1995;123:A12–3.
19. Mays N, Pope C. Rigour and qualitative research. BMJ. 1995;311:109–12. doi: 10.1136/bmj.311.6997.109.
20. Burns SM, Clochesy JM, Hanneman SK, et al. Weaning from long-term mechanical ventilation. Am J Crit Care. 1995;4:4–22.
21. Mazieres B, Bannwarth B, Dougados M, Lequesne M. EULAR recommendations for the management of knee osteoarthritis. Report of a task force of the Standing Committee for International Clinical Studies Including Therapeutic Trials. Joint Bone Spine. 2001;68:231–40. doi: 10.1016/s1297-319x(01)00271-8.
