Curr Urol. 2024 Jun 21;18(2):133–138. doi: 10.1097/CU9.0000000000000192

Creation and validation of a novel low-cost dry lab for early resident training and assessment of robotic prostatectomy technical proficiency

Kevin Kunitsky a, Abhishek Venkataramana a, Katherine E Fero a, Jorge Ballon a, Jacob Komberg b, Robert Reiter a,b, Wayne Brisbane a
PMCID: PMC11337995  PMID: 39176295

Abstract

Purpose

To evaluate the preliminary validity and acceptability of a low-cost, low-fidelity robotic surgery dry lab for training and assessing residents’ technical proficiency with key robotic radical prostatectomy steps.

Materials and methods

Three standardized inanimate tasks were created to simulate the radical prostatectomy steps of posterior dissection, neurovascular bundle release, and urethrovesical anastomosis. Urology trainees and faculty at a single institution completed and evaluated each dry lab task. Construct validity was evaluated by comparing task completion times and Global Evaluative Assessment of Robotic Skills scores across four participant cohorts: medical students (n = 5), junior residents (n = 5), senior residents (n = 5), and attending surgeons (n = 7). Content validity, face validity, and acceptability were evaluated through a posttask survey using a 5-point Likert scale.

Results

There were significant differences in individual and composite task completion times and Global Evaluative Assessment of Robotic Skills scores across the participant cohorts (all p < 0.01). The model was rated favorably in terms of content validity and acceptability for use in residency training. However, the model's realism compared with human tissue was rated poorly. The dry lab production cost was less than US $25.

Conclusions

This low-cost procedure-specific dry lab demonstrated evidence of content validity, construct validity, and acceptability for simulating key robotic prostatectomy technical steps and can be used to augment robot-assisted laparoscopic prostatectomy surgical training.

Keywords: Robotic surgery education, Radical prostatectomy, Simulation, Dry laboratory, Validation

1. Introduction

Robot-assisted laparoscopic prostatectomy (RALP) is a technically demanding procedure that can be difficult for novice trainees to perform. Previous work has demonstrated that 150–250 cases are required to achieve acceptable patient outcomes and operator comfort with RALP.[1] Simulation has the potential to steepen this learning curve and reduce the patient morbidity associated with the initial phases of learning.[2] Several RALP simulation modalities are currently available and range widely in validity and efficacy.[3–5] These include 3-dimensionally printed ex vivo trainers, virtual and artificial reality platforms, and animal or cadaver “wet lab” models. While high in fidelity, 3-dimensionally printed and wet lab training modalities are generally expensive, time-consuming, and resource-intensive.[4] Accordingly, they have yet to be widely adopted for resident RALP surgical training.[6,7]

Structured inanimate task-based (“dry lab”) training is increasingly recognized as a valid and effective tool in robotic surgery education and has the potential to augment traditional resident RALP surgical training.[4,5] Dry labs offer the advantages of being more affordable, time efficient, and accessible than other training modalities. Despite their low fidelity, dry labs may be a feasible and cost-effective option for RALP surgical training. To date, efforts have focused on creating and validating dry labs that teach generic robotic surgery skills, such as object transfer and basic suturing.[4,5] The utility of dry labs for simulating and assessing proficiency in more advanced surgical skills, such as specific RALP surgical steps, requires further investigation. Aghazadeh et al.[8] recently found a positive correlation between generic dry lab performance and intraoperative trainee performance of endopelvic dissection during RALP. However, no procedure-specific dry labs have been developed or evaluated to improve RALP-specific technical ability.[4,5]

This study aimed to design and evaluate a novel low-cost, low-fidelity dry lab model for training and assessing proficiency in the following 3 technically demanding RALP steps: posterior dissection, neurovascular bundle release, and urethrovesical anastomosis. Our primary outcome was the assessment of model validity (construct, content, and face validity), and the secondary outcome was the acceptability of the model. We hypothesized that a simple low-cost dry lab model can reasonably simulate specific RALP steps and facilitate the standardized assessment of resident RALP technical proficiency.

2. Materials and methods

2.1. Robot-assisted laparoscopic prostatectomy dry lab model

The RALP dry lab simulation model (Fig. 1) consists of 3 standardized inanimate tasks—“tape peel,” “cut and peel,” and “tube anastomosis”—designed to develop the skillset required for 3 specific RALP steps, respectively: posterior dissection, neurovascular bundle release, and urethrovesical anastomosis. Task 1 (tape peel) requires peeling a single layer of tape from a mobile base layer, testing the ability to separate tissue planes while using the third arm for stability, as in posterior dissection (Fig. 1B). Task 2 (cut and peel) involves creating a plane between the tape tube and the overlying affixed piece of tape (Fig. 1C); once the plane is created, the user cuts along a straight line to simulate a high release of the neurovascular bundle. Task 3 (tube anastomosis) requires throwing a single interrupted suture between 2 adjacent tape tubes to test robotic suturing, as in urethrovesical anastomosis (Fig. 1D). All tasks had a 5-minute time limit for the purposes of this study.
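The structure of the dry lab lends itself to simple scripted scoring. Below is a minimal Python sketch (the study itself did not describe any scoring software) encoding the 3 tasks, their analog RALP steps, and the 5-minute cap; the class and field names are illustrative assumptions, not the study's tooling.

```python
from dataclasses import dataclass

TIME_CAP_S = 300  # the study's 5-minute per-task limit, in seconds

@dataclass
class DryLabTask:
    name: str            # dry lab task name
    simulated_step: str  # analog RALP step

TASKS = [
    DryLabTask("tape peel", "posterior dissection"),
    DryLabTask("cut and peel", "neurovascular bundle release"),
    DryLabTask("tube anastomosis", "urethrovesical anastomosis"),
]

def recorded_time(raw_seconds: float) -> float:
    """Censor incomplete attempts at the cap (assigned 300 s, per Table 2)."""
    return min(raw_seconds, TIME_CAP_S)

# Hypothetical raw times for one participant across the 3 tasks.
composite = sum(recorded_time(t) for t in [66.0, 124.0, 310.0])
print(f"Composite completion time: {composite:.0f} s")  # 310 s is capped at 300
```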

Figure 1. Robot-assisted laparoscopic prostatectomy dry lab models. (A) 3M Microfoam tape material used to construct dry lab models. (B) Posterior dissection (tape peel) model. (C) Neurovascular bundle release (cut and peel) model. (D) Urethrovesical anastomosis (tube anastomosis) model.

These RALP steps were chosen for the simulation because of their technical complexity and relative difficulty for early trainees to master. Dry lab models were constructed using scissors, expired sutures, and 3M Microfoam surgical tape, which was chosen because of its low cost, tissue-like properties, and ease of access within a hospital setting. Furthermore, this material has been used in prior dry labs to simulate various human tissues.[9–11] In total, the 3 models cost less than US $25 to produce and required approximately 5 minutes to construct. Detailed video instructions on how to reproduce and use the models are available online (Supplementary Video, http://links.lww.com/CURRUROL/A34).

2.2. Study subjects

Attending surgeons, residents, and medical students from a single institution were recruited to complete and evaluate the dry lab model. All were from the urology department except for 2 nonurology residents (general surgery and obstetrics and gynecology). Participants completed a demographics form that included prior robotic surgery experience, defined as the self-reported number of robotic surgery cases performed as either primary surgeon or with active trainee participation at the console. In addition, participants self-reported independent practice, defined as time spent on the robot outside of live patient cases. Those who had not previously used the da Vinci robot were required to watch a brief instructional video describing the robotic console controls. Before task completion, all participants viewed standardized video instructions detailing the task objectives for each model. Participants then performed the dry lab tasks using the da Vinci Si robot available at our institution’s simulation center.

2.3. Primary and secondary outcomes

The primary outcome was the assessment of model validity. We evaluated construct validity (ie, whether the model can differentiate expert from novice), content validity (ie, whether the model recreates the surgical task), and face validity (ie, the realism of the model compared with the actual surgical task), as previously defined by Gallagher et al.[12] Construct validity was evaluated based on individual and composite (sum of 3 tasks) task completion times and Global Evaluative Assessment of Robotic Skills (GEARS)[13] scores across the following 4 participant cohorts based on level of training: medical students, junior residents (postgraduate years 1–4), senior residents (postgraduate years 5–6), and attending surgeons. The GEARS is a validated tool for assessing robotic surgery proficiency that evaluates the surgeon's depth perception, bimanual dexterity, efficiency, force sensitivity, autonomy, and robotic control.[13] Autonomy was not assessed in this dry lab, so scores were capped at a maximum of 25 points. Robotic console footage of participants completing the dry lab tasks was captured and used for blinded GEARS assessment, performed by 3 reviewers (AV, KK, and JB). Content validity and face validity were assessed exclusively by attending surgeons through an online post–dry lab survey using a 5-point Likert scale (Fig. 2). The secondary outcome, the acceptability of the model for incorporation into residency training, was assessed by both residents and attending surgeons (Fig. 2). A score of 4 of 5 was considered the threshold of adequacy for demonstrating validity, consistent with previous validation literature.[14]
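As a concrete illustration of the scoring scheme, the Python sketch below tallies a GEARS score with the autonomy domain excluded (maximum 25 points) and averages the 3 blinded reviewers' ratings. The ratings shown are hypothetical, and the averaging step is an assumption; the article does not state how the reviewers' scores were combined.

```python
from statistics import mean

# The five assessed GEARS domains; autonomy is omitted per the study design.
GEARS_DOMAINS = [
    "depth_perception", "bimanual_dexterity", "efficiency",
    "force_sensitivity", "robotic_control",
]

def gears_score(ratings: dict) -> int:
    """Sum the five assessed domains, each rated 1-5 (maximum 25 points)."""
    return sum(ratings[d] for d in GEARS_DOMAINS)

# Hypothetical ratings from the 3 blinded reviewers for one participant/task.
reviewer_ratings = [
    {"depth_perception": 4, "bimanual_dexterity": 4, "efficiency": 3,
     "force_sensitivity": 4, "robotic_control": 4},
    {"depth_perception": 5, "bimanual_dexterity": 4, "efficiency": 4,
     "force_sensitivity": 4, "robotic_control": 4},
    {"depth_perception": 4, "bimanual_dexterity": 3, "efficiency": 4,
     "force_sensitivity": 4, "robotic_control": 5},
]

# Averaging across reviewers is an assumed aggregation step.
final = mean(gears_score(r) for r in reviewer_ratings)
print(f"GEARS score (autonomy excluded): {final:.1f} / 25")
```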

Figure 2. Dry lab content validity, face validity, and acceptability ratings. Content and face validity were assessed only by attendings, indicated by # (n = 7). Acceptability was assessed by both residents and attending surgeons (n = 17). The last 3 items* were asked only of residents (n = 10).

2.4. Analysis

The sample size was determined by convenience sampling, with attention paid to ensuring comparable numbers of participants across cohorts. Kruskal-Wallis and Wilcoxon rank sum tests were used to assess differences in nonparametric continuous variables (task completion time and GEARS scores) across all 4 participant cohorts and between cohort pairs, respectively. All statistical analyses were performed using Stata software, version 16.1 (StataCorp, College Station, TX, USA). Statistical significance was set at p < 0.05. This study was evaluated by the research ethics board of UCLA and was exempt from formal IRB review (IRB#20-001803).
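Although the analysis was run in Stata, the same omnibus and pairwise tests are straightforward to reproduce with open-source tools. The Python sketch below is a minimal equivalent using scipy, with hypothetical composite completion times standing in for the study data.

```python
from itertools import combinations
from scipy.stats import kruskal, ranksums

# Hypothetical composite completion times (seconds) per cohort.
times = {
    "Medical students": [693, 781, 655, 638, 720],
    "Junior residents": [640, 616, 737, 700, 612],
    "Senior residents": [385, 322, 438, 410, 360],
    "Attendings": [279, 204, 325, 290, 310, 265, 300],
}

# Omnibus comparison across all 4 cohorts (Kruskal-Wallis).
h_stat, p_all = kruskal(*times.values())
print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_all:.4f}")

# Pairwise comparisons between cohorts (Wilcoxon rank sum).
for a, b in combinations(times, 2):
    _, p = ranksums(times[a], times[b])
    print(f"{a} vs. {b}: p = {p:.4f}")
```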

3. Results

3.1. Demographics

A total of 22 participants completed the dry lab tasks. Table 1 outlines the cohort demographics and baseline experience with robotic surgery. Participants were stratified by training level into the following 4 cohorts: medical students (n = 5), junior residents (n = 5), senior residents (n = 5), and attending surgeons (n = 7). Across the 4 cohorts, there were significant differences in baseline robotic surgery case experience (p < 0.01). There was no significant difference in robotic case experience between junior residents and medical students (p = 1.00); however, junior residents had greater total independent practice experience: median 5 hours (interquartile range [IQR], 3–7 hours) versus 0 hours (IQR, 0–0 hours), respectively (p = 0.05).

Table 1.

Participant robotic surgery experience and demographics before dry lab participation.

| | Medical students | Junior residents | Senior residents | Attending surgeons | p |
|---|---|---|---|---|---|
| No. participants | 5 | 5 | 5 | 7 | |
| No. female (%) | 0 (0) | 3 (60) | 1 (20) | 0 (0) | 0.04 |
| Specialty: urology (%) | N/A | 3 (60) | 5 (100) | 7 (100) | 0.25 |
| Specialty: general surgery (%) | N/A | 1 (20) | 0 (0) | 0 (0) | |
| Specialty: OBGYN (%) | N/A | 1 (20) | 0 (0) | 0 (0) | |
| Median no. cases with active participation at console (IQR) | 0 (0–0) | 0 (0–0) | 40 (30–40) | 450 (350–875) | <0.01 |
| Median no. hours of independent robotic surgery practice (IQR) | 0 (0–0) | 5 (3–7) | 15 (10–20) | N/A | <0.01 |

Independent practice hours were defined as the number of hours spent practicing on the da Vinci console outside of live surgery.

IQR = interquartile range; N/A = not available; OBGYN = obstetrics and gynecology.

3.2. Construct validity

Attending surgeons generally had the fastest task completion times and highest GEARS scores, followed by senior residents (Table 2). There were significant differences in the individual and composite task times and GEARS scores across all cohorts (Table 2; all p < 0.01). Differences in performance were also assessed between cohort pairs (Fig. 3), with significant differences in composite GEARS score and completion time, respectively, between attending surgeons and senior residents (p < 0.01 and p = 0.05), attending surgeons and junior residents (both p < 0.01), attending surgeons and medical students (both p < 0.01), senior residents and junior residents (both p = 0.02), and senior residents and medical students (p = 0.02 and p < 0.01). There were no significant differences in composite dry lab GEARS score or task completion time between junior residents and medical students (p = 0.31 and p = 0.42, respectively).

Table 2.

Task GEARS scores and completion times by expertise level.

GEARS score:

| Task | Attending physicians | Senior residents | Junior residents | Medical students | p |
|---|---|---|---|---|---|
| Tape peel | 24.7 (24.2–24.8) | 20.0 (19.0–20.7) | 18.3 (17.7–18.7) | 13.0 (12.3–20.0) | <0.01 |
| Cut and peel | 24.7 (23.8–24.8) | 22.7 (22.0–23.0) | 16.7 (15.7–17.7) | 13.7 (13.7–17.0) | <0.01 |
| Tube anastomosis | 25.0 (24.5–25.0) | 23.7 (20.7–23.7) | 16.0 (14.7–19.0) | 15.0 (12.7–15.7) | <0.01 |
| Composite | 74.0 (72.3–74.2) | 65.7 (62.3–67.0) | 50.0 (45.7–55.7) | 39.7 (38.3–54.3) | <0.01 |

Completion time, s:

| Task | Attending physicians | Senior residents | Junior residents | Medical students | p |
|---|---|---|---|---|---|
| Tape peel | 66.0 (46.5–67.5) | 85.0 (65.0–99.) | 137.0 (109.0–181.0) | 166.0 (145.0–216.0) | <0.01 |
| Cut and peel | 124.0 (84.5–177.5) | 181.0 (170.0–182.0) | 300.0 (268.0–300.0) | 300.0 (300.0–300.0) | <0.01 |
| Tube anastomosis | 81.0 (67.5–87.5) | 76.0 (72.0–169.0) | 272.0 (185.0–300.0) | 227.0 (208.0–265.0) | <0.01 |
| Composite | 279.0 (204.5–325.0) | 385.0 (322.0–438.0) | 640.0 (616.0–737.0) | 693.0 (638.0–781.0) | <0.01 |

Values are reported as median (IQR). Higher GEARS score represents increased proficiency. Lower task completion time represents increased proficiency. Composite score represents the sum of individual task scores, thus enabling evaluation of overall dry lab performance. Participants who did not complete a task within the 300-second time cap were assigned a completion time of 300 seconds.

GEARS = Global Evaluative Assessment of Robotic Skills; IQR = interquartile range.

Figure 3. Composite dry lab (A) completion time and (B) GEARS scores by expertise level, with differences in performance between cohorts. GEARS = Global Evaluative Assessment of Robotic Skills; SR = senior resident; JR = junior resident; MS = medical student.

Performance differences between cohorts were also evaluated for individual task times and GEARS scores and generally mirrored the above findings (Table 3). However, for certain tasks there were no significant differences in performance between attending surgeons and senior residents (“tape peel” time, “tube anastomosis” time, and “cut and peel” time and GEARS score), between senior residents and junior residents (“tape peel” time and GEARS score and “tube anastomosis” time), or between senior residents and medical students (“tape peel” GEARS score). As with the composite dry lab results, there were no statistically significant differences in performance between junior residents and medical students across all individual tasks.

Table 3.

Statistical significance (p values) of task completion time and GEARS score differences between cohort pairs* and across all 4 cohorts**, using Wilcoxon rank sum tests* and Kruskal-Wallis tests**, respectively.

| | Attendings vs. senior residents* | Attendings vs. junior residents* | Attendings vs. medical students* | Senior residents vs. junior residents* | Senior residents vs. medical students* | Junior residents vs. medical students* | All groups** |
|---|---|---|---|---|---|---|---|
| Tape peel GEARS | <0.01 | <0.01 | <0.01 | 0.11 | 0.24 | 0.60 | <0.01 |
| Tape peel time | 0.13 | <0.01 | <0.01 | 0.06 | 0.02 | 0.42 | <0.01 |
| Cut and peel GEARS | 0.06 | <0.01 | <0.01 | 0.02 | 0.02 | 0.31 | <0.01 |
| Cut and peel time | 0.20 | <0.01 | <0.01 | 0.06 | 0.05 | 0.44 | <0.01 |
| Tube anastomosis GEARS | 0.02 | <0.01 | <0.01 | 0.05 | <0.01 | 0.42 | <0.01 |
| Tube anastomosis time | 0.56 | <0.01 | <0.01 | 0.06 | 0.03 | 1.00 | <0.01 |
| Composite dry lab GEARS | <0.01 | <0.01 | <0.01 | 0.02 | 0.02 | 0.31 | <0.01 |
| Composite dry lab time | 0.05 | <0.01 | <0.01 | 0.02 | <0.01 | 0.42 | <0.01 |

Statistical significance (p < 0.05) is indicated in green in the original published table.

GEARS = Global Evaluative Assessment of Robotic Skills.

*Statistical significance of task completion time and GEARS score differences between cohorts using Wilcoxon rank sum test.

**Statistical significance of task completion time and GEARS score differences across all four cohorts using Kruskal-Wallis test.

3.3. Content and face validity

Content and face validity were assessed by the attending surgeons (n = 7) via a posttask online survey using a 5-point Likert scale (Fig. 2). In terms of content validity, the majority of participants (71.4%) agreed or strongly agreed that 2 of the 3 dry lab tasks (“tape peel” and “tube anastomosis”) reproduced the technical skills necessary for their respective analog surgical steps, posterior dissection and urethrovesical anastomosis. Only 28.6% of participants agreed for task 2, “cut and peel.” Just over half (57.1%) of attending surgeons felt that proficiency in the dry lab tasks indicated that a resident could successfully perform the analog task in an actual case. However, 85.7% of attending surgeons felt that resident proficiency in the dry lab tasks would increase their confidence and comfort in allowing the resident to independently perform the analog task in an actual case. In terms of face validity, only 14.3% and 28.6% of respondents felt that the dry lab material, 3M Microfoam surgical tape, mimicked human tissue and had the appropriate thickness, respectively.
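For illustration, the short sketch below shows how agreement percentages of this kind are derived from raw 5-point Likert responses, using the study's agree/strongly agree convention; the responses shown are hypothetical, not the study data.

```python
AGREE_THRESHOLD = 4  # "agree" (4) or "strongly agree" (5) on the 5-point scale

def percent_agree(responses: list) -> float:
    """Percentage of responses at or above the agreement threshold."""
    return 100 * sum(r >= AGREE_THRESHOLD for r in responses) / len(responses)

# Hypothetical attending responses (n = 7) for one content-validity item.
item_responses = [5, 4, 4, 5, 3, 2, 4]
print(f"Agree or strongly agree: {percent_agree(item_responses):.1f}%")  # 71.4%
```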

3.4. Acceptability

The acceptability of the model for incorporation into residency training was evaluated by all resident and attending participants (n = 17), and the results are presented in Figure 2. Most respondents agreed or strongly agreed that the model was easy to set up and use (100%), should be used to practice RALP technical skills (71.5%), and should be integrated into residency training (88.2%). The majority of residents felt that the dry lab models improved their confidence and ability to perform posterior dissection, neurovascular bundle release, and urethrovesical anastomosis in a live patient (90%, 80%, and 80%, respectively).

4. Discussion

Although several high-fidelity RALP simulation tools exist, significant barriers, including cost and resident time constraints, limit their use in residency training.[6,7] There is a need for feasible and accessible RALP training tools to augment traditional training methods. We developed a low-cost dry lab that simulates the basic technical competencies unique to 3 specific RALP steps: posterior dissection, neurovascular bundle release, and urethrovesical anastomosis. The model costs less than US $25 to create and can be easily reproduced using materials readily available within the hospital setting. The model accurately differentiated between cohorts of different skill levels. The posterior dissection and urethrovesical anastomosis models were felt to be the most representative of their respective surgical tasks, with 71.4% of attending surgeons agreeing with their content validity. The neurovascular bundle release model was rated less favorably, with only 28.6% of attending surgeons agreeing with its content validity. Overall, the model was favorably rated in terms of its acceptability and utility for RALP surgical training. Most resident participants felt that the model improved their confidence and ability to successfully complete specific RALP procedural steps during live surgery. Resident and attending participants alike felt that the model should be used to practice RALP technical skills and should be incorporated into residency training.

While prior dry labs have generally focused on, and been validated for, training generic robotic surgery competency,[4,5] these findings indicate a potential role for procedure-specific dry labs in resident robotic surgery training. Our findings suggest that this dry lab is most useful for junior residents, as we identified a significant proficiency gap between this cohort and senior residents/attending surgeons. Despite having significantly greater experience with robotic surgery, junior residents did not outperform robot-naive medical students on this dry lab model. Furthermore, the junior residents in our cohort had acquired a median of only 5 hours (IQR, 3–7 hours) of self-directed robotic surgery practice, suggesting infrequent use of currently available robotic surgery training tools (eg, virtual reality). There is an opportunity to introduce dry lab training during early residency to help bridge the skill gap between junior and senior residents and better prepare trainees for live surgery.

Finally, there is an increasing emphasis on proficiency-based surgical training.[15–17] Simulation may play an important role in assessing trainee proficiency and the ability to operate safely on live patients.[17] The European Association of Urology's validated RALP curriculum proposes using simulation, including dry labs, and GEARS evaluation for such assessment.[18] There is a need for standardized tools to assess trainee proficiency and provide self-directed practice. Our model was able to accurately discriminate between varying levels of expertise, especially when using the GEARS score (Fig. 3), and may help gauge relative trainee proficiency and direct training progression. Furthermore, 86% of attending surgeons agreed that a resident's demonstration of dry lab proficiency would increase their comfort and confidence in allowing the resident to perform the analog RALP step during live surgery.

While this dry lab should be considered for resident training, it has several important limitations. Notably, the sample size was modest, although comparable to that of similar preliminary dry lab validation studies.[4] This was a preliminary proof-of-concept study in which the dry lab was simplified to facilitate completion within a reasonable time frame for novice participants. For example, the “tube anastomosis” task was limited to a single interrupted suture. Going forward, the model may be modified and adapted to mimic more complex surgical steps, accommodating a range of training goals and skill levels. It should be noted that most expert participants felt the dry lab material (3M Microfoam surgical tape) did not adequately mimic the properties of human tissue or have the appropriate thickness. However, 88.2% of attending surgeons and residents felt that the model should be incorporated into surgical training, indicating the overall utility of this dry lab despite its differences from human tissue. This tool appears most beneficial for helping junior residents develop muscle memory for complex movements, such as developing tissue planes and integrating the third arm. Future efforts will focus on quantifying the ability of dry lab practice to improve live surgery performance and on developing additional, more complex dry lab models to help senior residents master more advanced surgical steps.

5. Conclusions

This low-cost procedure-specific dry lab demonstrated validity and acceptability for simulating technical steps during robotic prostatectomy and can be used to augment resident training. Our findings provide evidence that simple, low-fidelity dry labs are valuable for procedure-specific training. Additional efforts are warranted to develop dry lab models for the remaining steps in prostatectomy and to validate dry lab performance against intraoperative performance.

Acknowledgments

The authors thank the UCLA Department of Urology for its support, and the UCLA medical students, residents, and faculty who took time to participate in the dry lab and provided feedback.

Statement of ethics

This study was evaluated by the Research Ethics Board of UCLA and was exempted from a formal IRB review (IRB#20-001803). Subject participation was voluntary and verbal consent was obtained. All procedures performed in this study involving human participants were in accordance with the ethical standards of the institutional and national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.

Funding source

None.

Author contributions

WB: Conceived the idea and supervised the project, along with RR;

AV, KK, KF, JB, JK: Developed the dry lab model, collected the data, and performed the analyses;

AV: Wrote the manuscript with input and critical feedback from all the authors.

Data availability

The datasets generated during and/or analyzed during the current study are not publicly available, but are available from the corresponding author on reasonable request.

Footnotes

How to cite this article: Kunitsky K, Venkataramana A, Fero KE, Ballon J, Komberg J, Reiter R, Brisbane W. Creation and validation of a novel low-cost dry lab for early resident training and assessment of robotic prostatectomy technical proficiency. Curr Urol 2024;18(2):133–138. doi: 10.1097/CU9.0000000000000192

Contributor Information

Kevin Kunitsky, Email: kevin.kunitsky@gmail.com.

Abhishek Venkataramana, Email: Abhishek.venkat@med.usc.edu.

Katherine E. Fero, Email: KFero@mednet.ucla.edu.

Jorge Ballon, Email: Jorge.Ballon@med.usc.edu.

Jacob Komberg, Email: jkomberg2018@health.fau.edu.

Robert Reiter, Email: rreiter@mednet.ucla.edu.

Wayne Brisbane, Email: Wayne.Brisbane@urology.ufl.edu.

Conflict of interest statement

The authors declare that they have no conflicts of interest.

References

1. Herrell SD, Smith JA Jr. Robotic-assisted laparoscopic prostatectomy: What is the learning curve? Urology 2005;66(5 suppl):105–107.
2. Abboudi H, Khan MS, Guru KA, et al. Learning curves for urological procedures: A systematic review. BJU Int 2014;114(4):617–629.
3. Khan R, Aydin A, Khan MS, Dasgupta P, Ahmed K. Simulation-based training for prostate surgery. BJU Int 2015;116(4):665–674.
4. Kozan AA, Chan LH, Biyani CS. Current status of simulation training in urology: A non-systematic review. Res Rep Urol 2020;12:111–128.
5. Aydin A, Raison N, Khan MS, Dasgupta P, Ahmed K. Simulation-based training and assessment in urological surgery. Nat Rev Urol 2016;13(9):503–519.
6. MacCraith E, Forde JC, Davis NF. Robotic simulation training for urological trainees: A comprehensive review on cost, merits and challenges. J Robot Surg 2019;13(3):371–377.
7. Aydin A, Ahmed K, Shafi AM, Khan MS, Dasgupta P. The role of simulation in urological training—A quantitative study of practice and opinions. Surgeon 2016;14(6):301–307.
8. Aghazadeh MA, Mercado MA, Pan MM, Miles BJ, Goh AC. Performance of robotic simulated skills tasks is positively associated with clinical robotic surgical performance. BJU Int 2016;118(3):475–481.
9. Janus JR, Hamilton GS 3rd. The use of open-cell foam and elastic foam tape as an affordable skin simulator for teaching suture technique. JAMA Facial Plast Surg 2013;15(5):385–387.
10. Ramirez AG, Nuradin N, Byiringiro F, et al. Creation, implementation, and assessment of a general thoracic surgery simulation course in Rwanda. Ann Thorac Surg 2018;105(6):1842–1849.
11. Elfaki A, Murphy S, Abreo N, Wilmot M, Gillespie P. Microfoam™ model for simulated tendon repair. J Plast Reconstr Aesthet Surg 2015;68(8):1163–1165.
12. Gallagher AG, Ritter EM, Satava RM. Fundamental principles of validation, and reliability: Rigorous science for the assessment of surgical education and training. Surg Endosc 2003;17(10):1525–1529.
13. Goh AC, Goldfarb DW, Sander JC, Miles BJ, Dunkin BJ. Global evaluative assessment of robotic skills: Validation of a clinical assessment tool to measure robotic surgical skills. J Urol 2012;187(1):247–252.
14. Schout BM, Hendrikx AJ, Scheele F, Bemelmans BL, Scherpbier AJ. Validation and implementation of surgical simulators: A critical review of present, past, and future. Surg Endosc 2010;24(3):536–546.
15. Scott DJ. Proficiency-based training for surgical skills. Semin Colon Rectal Surg 2008;19:72–80.
16. Forster JA, Browning AJ, Paul AB, Biyani CS. Surgical simulators in urological training—Views of UK training programme directors. BJU Int 2012;110(6):776–778.
17. Gallagher AG, Ritter EM, Champion H, et al. Virtual reality simulation for the operating room: Proficiency-based training as a paradigm shift in surgical skills training. Ann Surg 2005;241(2):364–372.
18. Volpe A, Ahmed K, Dasgupta P, et al. Pilot validation study of the European Association of Urology robotic training curriculum. Eur Urol 2015;68(2):292–299.
