Abstract
Background and Aims
Minimum EUS and ERCP volumes that should be offered per trainee in “high quality” advanced endoscopy training programs (AETPs) are not established. We aimed to define the number of procedures required by an “average” advanced endoscopy trainee (AET) to achieve competence in technical and cognitive EUS and ERCP tasks to help structure AETPs.
Methods
ASGE-recognized AETPs were invited to participate; AETs were graded on every fifth EUS and ERCP examination using a validated tool. Each skill was graded using a 4-point scoring system, and learning curves (LCs), generated using cumulative sum (CUSUM) analysis for the overall, technical, and cognitive components of EUS and ERCP, were shared with AETs and trainers quarterly. Generalized linear mixed effects models with a random intercept for each AET were used to generate aggregate LCs, allowing data from all AETs to be used to estimate the average learning experience for trainees.
Results
Among 62 invited AETPs, 37 AETs from 32 AETPs participated. The majority of AETs reported hands-on EUS (52%, median 20 cases) and ERCP (68%, median 50 cases) experience before starting an AETP. The median number of EUS and ERCPs performed/AET was 400 (range 200–750) and 361 (250–650), respectively. Overall, 2616 examinations were graded (EUS: 1277; ERCP-biliary: 1143; pancreatic: 196). The majority of graded EUS examinations were performed for pancreatobiliary indications (69.9%) and ERCP examinations for ASGE biliary grade of difficulty 1 (72.1%). The average AET achieved competence in core EUS and ERCP skills at approximately 225 and 250 cases, respectively. However, overall technical competence was achieved for Grade 2 ERCP at about 300 cases.
Conclusions
The thresholds provided for an average AET to achieve competence in EUS and ERCP may be used by ASGE and AETPs in establishing the minimal standards for case volume exposure for AETs during their training.
Keywords: competence, quality indicators, training
Advanced endoscopy training programs (AETPs) were established, in part, to address the inability of traditional 3-year Accreditation Council for Graduate Medical Education (ACGME)-accredited gastroenterology fellowship programs to provide comprehensive ERCP and EUS training.1, 2 Since their inception, these AETPs have evolved to offer training in a myriad of additional procedures, including endoscopic mucosal resection, endoluminal stent placement, deep enteroscopy, advanced closure techniques, bariatric endoscopy, therapeutic EUS, and submucosal endoscopy (including endoscopic submucosal dissection and per-oral endoscopic myotomy).3
Participation in this greater variety of procedures has commensurately reduced AET participation in ERCP and EUS. Moreover, ERCP and EUS procedures at many tertiary care training programs are performed for complex indications, reducing exposure to basic maneuvers in ERCP (eg, cannulation of the desired duct in native papilla cases and sphincterotomy) and EUS (eg, staging of cancers and EUS-guided tissue acquisition).4 These factors may lead to suboptimal ERCP and EUS training. Procedural training has recently transitioned from an apprenticeship, time-based model to competency-based medical education (CBME), an outcomes-based approach to the design, implementation, assessment, and evaluation of medical education programs using an organizing framework of competencies.1, 2, 5 With the ACGME’s Next Accreditation System (NAS), the focus has shifted to ensuring that specific milestones are reached throughout training, that competence is achieved by all trainees, and that these assessments are documented by training programs.6
Although the breadth of training has increased along with an increase in AETPs, there is no fixed mandatory curriculum and no set minimum standards as to what constitutes a “high quality” AETP. Establishing minimum standards, specifically with regard to procedure volumes offered per trainee, would allow AETPs to appropriately tailor their training programs and associated curricula. Although establishing minimum standards does not imply that all trainees would achieve competence at this procedure volume, understanding the number of procedures required by the “average” AET to achieve competence in all aspects of ERCP and EUS would help structure AETPs so that the majority of trainees achieve this outcome. These findings could have significant implications for national societies such as the American Society for Gastrointestinal Endoscopy (ASGE) that recommend minimal standards for case volume exposure for AETs. Thus, the primary aim of this study was to define the number of procedures required by an “average” AET to achieve competence in technical and cognitive ERCP and EUS tasks.
Methods
Study design
This was a prospective multicenter cohort study that included U.S. AETPs (Table 1). The study was conducted in 2 phases: in Phase 1, AETs were assessed during their advanced endoscopy fellowship training and, in Phase 2, participating AETs entered data pertaining to every EUS and ERCP during their first year of independent practice, anchored by key quality indicators (QIs).7, 8 The results of Phase 2, which measured outcomes during the first year of independent practice using established EUS and ERCP quality indicators for AETP graduates who had received continuous structured feedback, have been published separately.2 Data from Phase 1 were used to address the primary aim of this study. Approval from the Institutional Review Board or the Human Research Protection Office at each participating site was obtained (ClinicalTrials.gov: NCT02509416) and signed informed consent was obtained from all AETs. All authors had access to the study data and reviewed and approved the final manuscript.
Table 1:
List of participating advanced endoscopy training programs
| Institution | Location |
|---|---|
| Brigham and Women’s Hospital | Boston, Massachusetts |
| Carolinas Medical Center | Charlotte, North Carolina |
| Cleveland Clinic Foundation | Cleveland, Ohio |
| Columbia University | New York City, New York |
| Dartmouth-Hitchcock Medical Center | Lebanon, New Hampshire |
| Digestive Diseases Institute at Virginia Mason Medical Center | Seattle, Washington |
| Duke University | Durham, North Carolina |
| GI Associates/Aurora St. Luke’s Medical Center | Milwaukee, Wisconsin |
| Henry Ford Hospital | Detroit, Michigan |
| Icahn School of Medicine at Mount Sinai | New York, New York |
| Indiana University | Indianapolis, Indiana |
| Northwestern University | Chicago, Illinois |
| Mayo Clinic Jacksonville | Jacksonville, Florida |
| Moffitt Cancer Center | Tampa, Florida |
| Thomas Jefferson University | Philadelphia, Pennsylvania |
| Stanford University | Stanford, California |
| Stony Brook University | Stony Brook, New York |
| University of Alberta | Edmonton, Alberta, Canada |
| University Hospitals Cleveland Medical Center | Cleveland, Ohio |
| University of California, Los Angeles | Los Angeles, California |
| University of California, Davis Health Systems | Davis, California |
| University of Colorado | Aurora, Colorado |
| University of Kansas | Kansas City, Kansas |
| University of Massachusetts Memorial Medical Center | Worcester, Massachusetts |
| University of Michigan | Ann Arbor, Michigan |
| University of North Carolina, Chapel Hill | Chapel Hill, North Carolina |
| University of Pennsylvania | Philadelphia, Pennsylvania |
| University of Texas Southwestern | Dallas, Texas |
| University of Virginia | Charlottesville, Virginia |
| University of Wisconsin | Madison, Wisconsin |
| Vanderbilt University | Nashville, Tennessee |
| Washington University in St. Louis | St. Louis, Missouri |
Study setting and subjects
AETP directors and AETs from all U.S. ASGE registered advanced endoscopy fellowship programs (http://www.asgematch.com/) were invited to participate in this study from July 2015 to June 2017. AETs were defined as trainees who had completed a standard ACGME-accredited gastroenterology fellowship and were beginning a 1-year EUS and ERCP advanced endoscopy fellowship program. AETs completed questionnaires at study inception that assessed baseline characteristics and all AETs were introduced to the technical and cognitive aspects of EUS and ERCP in accordance with their institution-specific training curriculum.
Grading of advanced endoscopy trainees – Phase I
AETs were graded on every fifth EUS and ERCP after the completion of 25 hands-on EUS and ERCP examinations. This grading frequency was chosen to improve feasibility, reduce the overall burden of evaluations, and ensure that an adequate sample was available to analyze EUS and ERCP learning curves. Grading was standardized and performed by attending endoscopists at each center. Procedures in which AETs had no hands-on participation were excluded from grading. The study protocol required that grading be performed immediately after the procedure to reduce recall bias and halo and recency effects. To improve reproducibility of the grading protocol, the principal investigator (SW) conducted a standard-setting exercise with the site principal investigators/program directors (Digestive Disease Week, May 2015), which included attaching specific behaviors to the scoring anchors. In addition, a digital presentation reviewing the assessment tool and grading protocol was distributed to all trainers and AETs to ensure familiarity with the tool’s specific assessment parameters and score explanations (Appendix 1). As mandated by the Institutional Review Board, this study did not standardize how instructors trained the AETs (eg, the amount of time that each AET was allowed to cannulate independently, the number of times the endoscope was handed back to the AET once assistance was required); these decisions were left to the discretion of the instructor.
Competency assessment tool – The EUS and ERCP Skills Assessment Tool
We used The EUS and ERCP Skills Assessment Tool (TEESAT), a procedure-specific competence assessment tool with strong validity evidence endorsed by the ASGE, to assess EUS and ERCP skills in a continuous fashion throughout training (Supplementary Figures 1 and 2).1 TEESAT uses a 4-point scoring system for individual tasks that includes all basic maneuvers and all relevant technical and cognitive aspects of EUS and ERCP [1 (superior), achieves independently; 2 (advanced), achieves with minimal verbal instruction; 3 (intermediate), achieves with multiple verbal instructions; 4 (novice), unable to complete, requiring the trainer to take over]. The tool makes a clear distinction between grading of procedures for biliary and pancreatic indications and also documents the ASGE ERCP grade of difficulty.9 It also includes a 4-point global rating scale used to provide an overall assessment of the trainee: (1) learning basic technical and cognitive aspects, requires significant assistance and coaching; (2) acquired basic technical and cognitive skills but requires limited hands-on assistance and/or significant coaching; (3) able to perform independently with limited coaching and/or requires additional time to complete; and (4) competent to perform procedure independently. These anchors allowed trainers to attach specific behaviors and skills to each score and helped ensure reproducibility over the course of the study. The endpoints used in this tool parallel the key quality metrics established for EUS and ERCP.7, 8, 10
Comprehensive data collection and reporting system
As we previously described,2, 11 an integrated, comprehensive data collection and reporting system was created to streamline data collection from the participating institutions and apply cumulative sum (CUSUM) analysis. All study participants entered their data into a University of Colorado instance of REDCap, a secure, online database system. A combination of an Application Programming Interface, REDCap, and SAS (v.9.3, SAS Institute, Cary, NC) was used to generate graphical representations of overall and individual endpoint CUSUM learning curves on demand. Access to these data was controlled by a custom module that determined authentication and role-based levels of access. Unique logins were provided to program directors and trainees, allowing them to view individual learning curves provided on a quarterly basis and compare individual performance with the study cohort average.
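For illustration, the following is a minimal sketch of how evaluation records could be exported through REDCap’s standard API; the project URL, token, and the assumption that each exported record corresponds to one graded evaluation are placeholders rather than the study’s actual configuration, and the study’s own pipeline combined the REDCap API with SAS rather than Python.

```python
import requests

# Hypothetical REDCap record export; URL, token, and field layout are placeholders.
REDCAP_URL = "https://redcap.example.edu/api/"
API_TOKEN = "REPLACE_WITH_PROJECT_TOKEN"

payload = {
    "token": API_TOKEN,
    "content": "record",   # export records from the project
    "format": "json",      # return JSON rather than CSV/XML
    "type": "flat",        # one row per record/event
}

response = requests.post(REDCAP_URL, data=payload, timeout=30)
response.raise_for_status()
evaluations = response.json()  # list of dicts, assumed one per graded evaluation
print(f"Retrieved {len(evaluations)} evaluation records")
```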
Study outcomes
The primary study outcome was to define the number of procedures required by an “average” AET to achieve competence in EUS and ERCP and associated core skills (eg, sphincterotomy or EUS-FNA). The secondary study outcome was to define the number of procedures required by an “average” AET to achieve competence in advanced ERCP procedures (ie, Grade 2 ERCPs).
Statistical Analysis
The trainers’ assessment was the criterion standard for this analysis. CUSUM analysis was applied to create learning curves for each trainee. Continuous review of the control charts compares each trainee’s performance with a predetermined standard, allowing detection of negative trends and enabling earlier feedback (retraining or continued observation). This approach to assessing learning curves and competence has been widely described in healthcare (cataract surgery, arthroscopy, anesthesia, and endoscopic procedures).2, 11–17 Success was defined as a TEESAT score of 1 (no assistance) or 2 (minimal verbal cues), whereas a score of 3 or 4 was considered a failure. For the overall global rating, a score of 3 or 4 represented success. Overall assessments of technical and cognitive competence in EUS and ERCP were based on the median score across all technical and cognitive endpoints listed in TEESAT.
The creation of CUSUM graphs as summarized by Bolsin and Colson18 has been described previously.2, 11 Each successful procedure decreases the cumulative sum by s, and each failed procedure increases it by 1 – s. These values are based on prespecified acceptable failure rates (p0, the level of inherent error if procedures are performed competently) and unacceptable failure rates (p1, where p1 – p0 represents the maximum acceptable level of human error). For this study, we used p0 = 0.1 and p1 = 0.3. CUSUM scores were then calculated using the following formulas: P = ln(p1/p0); Q = ln[(1 – p0)/(1 – p1)]; s = Q/(P + Q) = 0.15; and 1 – s = 0.85. The CUSUM curve was created by plotting the cumulative sum after each case against the index number of that case, where Cn is the sum of all individual outcome scores. The CUSUM graph was designed to signal when Cn crosses predetermined limits. These limits are displayed as horizontal lines on the graph and are calculated based on the risk of type I (α) and type II (β) error, both set at 0.1 for this analysis. The formulas for H0 and H1 are as follows: H1 = a/(P + Q) and H0 = -b/(P + Q), where a = ln[(1 – β)/α] and b = ln[(1 – α)/β]. If the CUSUM plot fell below the acceptable line, the performance was acceptable with the predetermined type II error; if the CUSUM plot rose above the unacceptable line, the performance was considered unacceptable; if the plot stayed between the 2 boundary lines, no conclusion could be drawn and further training was recommended.
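As a concrete illustration, the following is a minimal sketch of this CUSUM calculation in Python (the study’s analyses used SAS). It follows the Bolsin and Colson convention in which each success lowers the running sum by s and each failure raises it by 1 – s; the simulated outcome sequence is purely hypothetical, and constants computed from the formulas may differ slightly from the rounded values reported above.

```python
import math

def cusum_parameters(p0=0.1, p1=0.3, alpha=0.1, beta=0.1):
    """Return the per-case increment s and the decision limits (h0, h1).

    p0: acceptable failure rate, p1: unacceptable failure rate,
    alpha/beta: type I/II error risks (values taken from the Methods).
    """
    P = math.log(p1 / p0)
    Q = math.log((1 - p0) / (1 - p1))
    s = Q / (P + Q)                      # amount subtracted for each success
    a = math.log((1 - beta) / alpha)
    b = math.log((1 - alpha) / beta)
    h1 = a / (P + Q)                     # upper (unacceptable) boundary
    h0 = -b / (P + Q)                    # lower (acceptable) boundary
    return s, h0, h1

def cusum_curve(outcomes, s, h0, h1):
    """Compute the running CUSUM for a sequence of graded procedures.

    outcomes: iterable of booleans, True = success (TEESAT score 1-2),
    False = failure (TEESAT score 3-4). The verdict is based on the final
    cumulative sum (a simplification; in practice the curve is monitored
    continuously as cases accrue).
    """
    curve, c = [], 0.0
    for success in outcomes:
        c += -s if success else (1 - s)  # success lowers, failure raises the sum
        curve.append(c)
    if c <= h0:
        verdict = "acceptable: competence demonstrated at the preset error rates"
    elif c >= h1:
        verdict = "unacceptable: retraining indicated"
    else:
        verdict = "inconclusive: continue training and observation"
    return curve, verdict

# Hypothetical trainee: a few early failures followed by consistent success.
s, h0, h1 = cusum_parameters()
outcomes = [False, True, False, True, False] + [True] * 25
curve, verdict = cusum_curve(outcomes, s, h0, h1)
print(f"s = {s:.2f}, limits = ({h0:.2f}, {h1:.2f}), final Cn = {curve[-1]:.2f} -> {verdict}")
```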
Comprehensive learning curves were created for individual technical and cognitive endpoints in addition to overall EUS and ERCP performance. To generate aggregate CUSUM learning curves across AETs, generalized linear mixed effects models were used with a random intercept for each AET and an AR(1) covariance structure. This enabled data from all trainees to be used to estimate the average learning experience for trainees. A spline was then fitted to the modeled estimates, with knots at 40 and 80 evaluations, to smooth the results and estimate the mean number of procedures needed to achieve competence. All statistical analyses were performed using SAS v.9.4 (SAS Institute, Cary, NC).
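The smoothing step can be sketched as follows. This is an illustration only: it uses simulated aggregate CUSUM values rather than study data, uses scipy in place of the SAS procedures actually used, and omits the mixed-effects modeling with the AR(1) covariance structure. The knot placement at 40 and 80 evaluations and the conversion of evaluation number to case count (multiplying by 5, because every fifth case was graded) follow the Methods; the decision limit h0 is illustrative.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(42)

# Simulated stand-in for the modeled mean CUSUM score at each evaluation
# (in the study these estimates came from the mixed-effects model).
eval_index = np.arange(1, 101, dtype=float)   # evaluation number (every 5th case graded)
mean_cusum = 1.0 - 0.04 * eval_index + rng.normal(0, 0.3, eval_index.size)

# Cubic spline with interior knots at 40 and 80 evaluations, as in the Methods.
spline = LSQUnivariateSpline(eval_index, mean_cusum, t=[40.0, 80.0], k=3)
smoothed = spline(eval_index)

# First evaluation at which the smoothed curve drops below the acceptable
# boundary h0; multiply by 5 to convert evaluations to total procedures.
h0 = -1.63                                    # illustrative lower decision limit
crossing = next((int(i) for i, v in zip(eval_index, smoothed) if v <= h0), None)
if crossing is not None:
    print(f"Average trainee reaches competence at about {crossing * 5} procedures")
else:
    print("Smoothed curve never crosses the acceptable boundary in this simulation")
```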
Results
A total of 62 AETPs were invited to participate; of these, 37 AETs from 32 programs agreed to participate in this study. At baseline, most AETs had received formal training in the cognitive aspects of, and hands-on training in, EUS and ERCP. Specifically, 52% had hands-on training in EUS (median case volume 20) and 68% in ERCP (median case volume 40) before beginning their AETP.
At the end of training, the median number of EUS and ERCPs performed/AET was 400 (range 200–750) and 361 (250–650), respectively. Overall, 2616 examinations were graded (EUS: 1277; ERCP-biliary: 1143; pancreatic: 196). The majority of graded EUS examinations were performed for pancreatobiliary indications (69.9%). The majority of ERCP examinations were for ASGE biliary grade of difficulty 1 (72.1%). The mean number of native papilla ERCP evaluations was 21.2 (SD 13.9). As previously published, the majority of AETs achieved competency in EUS and ERCP at the end of the 1-year training period.2
Aggregate learning curves for ERCP
Data from all AETs were used to generate aggregate CUSUM learning curves to estimate the average learning experience for AETs and to define the number of procedures required by an average AET to achieve competence in technical and cognitive aspects of biliary ERCP (Table 2, Figure 1). Two core biliary skills, native papilla cannulation and biliary sphincterotomy, were the most challenging for trainees to master. For an average AET to achieve competence in native papilla cannulation, 226 ERCPs needed to be performed; at this point, the trainee will have performed 110 native papilla cannulations (95% CI, 85 – 135). Similarly, the average trainee requires 120 sphincterotomies (95% CI, 100 – 145) to achieve competency in biliary sphincterotomy; at this time point, the average fellow will have performed a total of 254 ERCPs. Thus, we estimate that the average trainee will require approximately 255 ERCPs to achieve routine biliary ERCP competency. Of note, this is discordant with the estimated number of ERCPs performed when trainees achieve competency via the trainer’s “global assessment” of competency (165 ERCPs).
Table 2:
Competence rates in EUS and ERCP using TEESAT and global rating scale with the mean number of procedures required for competence for an average advanced endoscopy trainee
| | No. of evaluations | No. of AETs achieving competence, n (%)* | No. of procedures performing the skill required for competence for an average AET (95% CI) | No. of overall procedures performed at competence thresholds (if applicable) |
|---|---|---|---|---|
| EUS | | | | |
| EUS-FNA | 320 | 7 (63.6%) | 110 (90, 140) | 226 |
| Overall technical | 1151 | 22 (91.7%) | 125 (100, 155) | |
| Overall cognitive | 1113 | 22 (91.7%) | 135 (110, 160) | |
| Global rating scale | 1123 | 17 (70.8%) | 165 (135, 185) | |
| ERCP | | | | |
| Overall cannulation | 774 | 15 (78.9%) | 105 (80, 130) | 230 |
| Cannulation in native papilla | 295 | 6 (54.5%) | 110 (85, 135) | 226 |
| Sphincterotomy | 318 | 8 (72.7%) | 120 (100, 145) | 254 |
| Stone clearance | 170 | 6 (85.7%) | 70 (60, 85) | 157 |
| Grade 2 ERCP | 230 | | 110 (95, N/A) | 305 |
| Overall technical | 972 | 17 (73.9%) | 140 (115, 175) | |
| Overall cognitive | 985 | 22 (95.7%) | 90 (60, 115) | |
| Global rating scale | 914 | 15 (75%) | 165 (130, N/A) | |
Primary analysis: TEESAT: success defined as score of 1 or 2 (no assistance/minimal verbal cues), Global rating scale: success defined as score of 3 or 4 (competent to perform procedure independently/able to perform independently with limited coaching and/or requires additional time to complete). Acceptable failure rate p0=0.1, and unacceptable failure rate p1=0.3
Figure 1: Graphical representation of learning curves.
Overall aggregate graphs using cumulative sum analysis demonstrating the mean number of cases (evaluation number multiplied by 5) needed for competency for an average advanced endoscopy trainee
We also examined when trainees achieved overall technical competency in more difficult (Grade 2) biliary ERCP procedures. Trainees achieved overall technical competency after approximately 110 Grade 2 ERCPs, at which point they had performed 305 total ERCPs (95% CI, 250 - N/A). Only 4 AETs had enough data to generate meaningful learning curves for pancreatic ERCP, and the aggregate learning curves showed that the average AET would achieve cognitive but not technical competence in pancreatic ERCP by the end of their advanced endoscopy training. The limited number of evaluations precluded any learning curve analysis for cases requiring advanced cannulation techniques (pancreatic duct stent placement, double wire technique, or precut sphincterotomy).
Aggregate learning curves for EUS
Similar to our analysis of learning curves for ERCP, we used data from all AETs to generate aggregate CUSUM learning curves to estimate the average learning experience for AETs and to define the number of procedures required by an average AET to achieve competence in technical and cognitive aspects of EUS (Table 2, Figure 1). The number of EUS procedures with FNA required for an average AET to achieve competence in EUS-FNA was 110 (95% CI, 90 – 140); at this time point, the average trainee will have completed 226 EUS examinations. Similar to ERCP, these findings were discordant with the number of EUS procedures performed when competency was estimated via the trainer’s “global assessment” (165 EUS examinations).
Discussion
Despite the dramatic increase in AETPs, there is no fixed mandatory curriculum and no set minimal standards as to what constitutes a “high quality” AETP. The composition of AETPs has become more heterogeneous with the broadening procedural portfolio of the interventional endoscopist. Thus, establishing specific minimum procedure volumes that trainees should be offered during their training helps ensure that trainees have the opportunity to achieve competence in the core interventional endoscopy procedures and also facilitates the process of trainee assessment through competency-based milestones. Specifically, understanding the number of procedures required to achieve competence by the average trainee in all core aspects of EUS and ERCP should help structure AETPs. In this study of 37 AETs from 32 AETPs, we found that the threshold for an average AET to achieve competence in the core skills of EUS was approximately 225 cases and in the core skills of ERCP approximately 250 cases. We also found that trainees achieved competence in more complex (Grade 2) ERCP at approximately 305 total ERCPs. These volumes may be used by GI societies and AETPs in establishing the minimum standards for case volume exposure for AETs during training.
Medical procedural education has shifted from a volume-based to a competency-based model. This shift reflects the understanding that trainees achieve procedural competency at varying speeds during training. Moreover, trainees may achieve competency in select skills (eg, biliary cannulation) more quickly than in other skills (eg, biliary sphincterotomy). However, this shift towards competency assessment does not absolve programs of the need to be cognizant of the ERCP and EUS procedure volumes offered to trainees. There is still great value in understanding how many ERCP and EUS procedures the “average” trainee will require during a fixed training period to achieve competency as programs decide how many different procedures they can realistically offer training in. We found that, for ERCP, the average trainee acquires competency in biliary sphincterotomy last, after completing approximately 250 ERCPs (with 120 total sphincterotomies). Similarly, for EUS, the average trainee acquires competency in EUS-FNA last, after completing approximately 225 EUS examinations (with 110 total FNAs). The implications of these threshold volumes are evident. As programs develop larger portfolios (eg, submucosal endoscopy, endoscopic resection, bariatric endoscopy), AETPs must ensure that their AETs continue to participate in a large number of ERCP and EUS procedures. If this balance cannot be achieved, both the AETP and the AET will need to define which procedures they expect to achieve competency in and which procedures they expect only to receive exposure to. We also found that there is value in additional procedure volume (approximately 300 total ERCPs) to achieve competence in complex ERCP, arguably one of the central goals of AETPs.
Our work highlights the continued difficulty in offering training in advanced ERCP techniques. Compared with biliary ERCP, this study showed that AETs receive minimal training in pancreatic ERCP during advanced endoscopy training. Despite using aggregate learning curves, we were unable to demonstrate that the average AET achieved technical competency in pancreatic ERCP. These results have significant implications for AETs and AETPs and novel strategies to increase AET training and exposure to pancreatic ERCPs and advanced EUS and ERCP procedures are warranted.
Available data defining the volume of cases required for an “average” AET to achieve competence are limited. Jowell et al19 showed that at least 180 ERCPs were required before trainees could be considered competent in ERCP (3 out of 17 trainees achieved this threshold). A systematic review highlighted the highly variable procedure volumes reported as required to achieve competence (overall, 70–400; selective duct cannulation, 79–300; common bile duct cannulation rate, 160–400; and native papilla common bile duct cannulation, 350–400).20 Based on these limited data, societal guidelines have provided minimum procedure volumes at which competence might be assessed in ERCP.1 The ASGE currently recommends that 200 supervised ERCPs, including at least 80 independent sphincterotomies and 60 biliary stent placements, be completed before competence can be assessed.21 The results of this study suggest that current training guidelines may underestimate the number of ERCPs required to achieve competence, especially as it relates to more complex ERCP. On the other hand, these results are consistent with our prior prospective multicenter study that established a minimum case load for EUS training of 225 cases,22 a threshold recently endorsed by the ASGE.21 It should be noted that competency in general was “achieved” at lower thresholds when using the global rating scale than when competency for discrete core procedural skills was examined. The exact reason for this discordance is unclear. We used a loose definition wherein procedures performed independently by the trainee but with verbal coaching were counted as “competent.” These data suggest that endoscopy trainers may overestimate competence when using a global assessment and may benefit from a forced evaluation of their trainees’ individual core skills.
Although the ASGE offers rudimentary metrics to characterize fellowships through the match program (https://www.asgematch.com), a more comprehensive evaluation of AETPs would be of value to potential trainees. It is in this context that we propose that these minimum ERCP and EUS volumes serve as a basis for a more rigorous assessment of AETPs. Because of the lack of consensus criteria on what constitutes a “high quality” AETP, programs are unable to perform the self-assessment necessary to improve training. We propose that structure, process, and outcomes measures be defined and, using the established tools of evidence-based medicine, associated benchmarks determined. These quality metrics could then be used to guide AETs in the selection of a program. A similar process was recently published that defined elements of high-quality surgical training and methods to measure them (Surgical Training Quality Assessment Tool – S-QAT) using modified Delphi methodology.23 Data from our study provide one set of metrics that may be used by GI societies to establish minimal standards for high-quality AETPs, as highlighted in Table 3.
Table 3:
Proposed standards assessment for advanced endoscopy training programs
| ERCP Training | |
|---|---|
| Structure Measure | Minimum number of trainers |
| | Availability of EUS and ERCP |
| Process Measure | Periodic competence assessment using a validated tool |
| | Minimum case load with hands-on training - overall |
| | Minimum case load with hands-on training in native papilla cases |
| | Minimum case load with hands-on training in cases requiring sphincterotomy |
| Outcomes Measure | Independent overall and native papilla cannulation rate |
| | Adverse event rate |
| EUS Training | |
| Structure Measure | Minimum number of trainers |
| | Availability of EUS and ERCP |
| Process Measure | Periodic competence assessment using a validated tool |
| | Minimum case load with hands-on training - overall |
| | Minimum case load with hands-on training in cases requiring EUS-guided tissue acquisition |
| Outcomes Measure | Adequacy rate of samples obtained during EUS-guided tissue acquisition |
| | Adverse event rate |
There are several limitations of this study that merit discussion. This study did not include all AETPs in the United States, thus limiting the generalizability of these results. However, there was no difference in basic attributes, such as number of AETs and annual EUS and ERCP volumes, between participating and non-participating AETPs [www.asge.org/home/educationmeetings/training-trainees/advanced-endoscopy-fellowship-(aef)]. There is a potential for selection bias among the AETs and trainers who opted to participate. The subjective nature of competence assessment among trainers is an inherent limitation of any study evaluating learning curves and competency; we attempted to address this through the use of validated assessment tools and rater training. Similarly, this study included trainers with varying cumulative experience and training styles. However, this limitation was accounted for by the use of a standardized assessment tool (TEESAT) with strong validity evidence that has established and well-defined anchors for all specific endpoints. The limited number of participating AETs precluded a stratified analysis based on AET background training and type of prior cases performed. Although there is potential for the Hawthorne effect (as AETs were aware of which procedure would be assessed), we hypothesize that any effect would be minimal because trainees are likely to try to successfully achieve endpoints of interest (eg, cannulation or FNA) consistently across all cases. The interobserver and intraobserver agreement among trainers using TEESAT was not assessed in this study and should be addressed in future studies. The learning curves used in this study to establish the described thresholds were based on the periodic feedback the AETs received on a quarterly basis and thus represent a “best-case scenario” in which trainees consistently received structured and benchmarked feedback. The opportunity to apply TEESAT scores to refine and maximize the efficiency of training methods for particular advanced endoscopy techniques is an important area of future investigation.

The strengths of this study include (1) defining the minimum case volume for EUS and ERCP procedures in one of the largest cohorts of AETs and AETPs, (2) using a standardized assessment tool with strong validity evidence and incorporating rater training, and (3) using a comprehensive data collection and reporting system with robust statistical methodology for aggregate learning curves using CUSUM analysis.
Conclusions
The results of this prospective multicenter study provide thresholds for an average AET to achieve competence in EUS and ERCP. These thresholds have significant implications for the ASGE and AETPs in defining the minimal procedure volumes AETPs must provide to ensure that most AETs have the opportunity to achieve competence in these core interventional endoscopy procedures. Because one of the goals of AETPs is to train to competence in both basic and advanced ERCP and EUS, our data suggest that AETPs should offer at least 300 ERCP and 225 EUS procedures per trainee to allow the average trainee to achieve competency.
Supplementary Material
Supplementary Figure 1: TEESAT - ERCP Evaluation
Supplementary Figure 2: TEESAT - EUS evaluation
Acknowledgments
Funding/Support: This study was funded by the American Society for Gastrointestinal Endoscopy (ASGE) 2015 Endoscopy Research Award and the University of Colorado Department of Medicine Outstanding Early Scholars Program (SW). REDCap was supported by funding from NIH/NCRR Colorado CTSI Grant Number UL1 TR001082.
Role of the Funder/Sponsor: The ASGE had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; or the decision to submit the manuscript for publication.
Conflict of Interest Disclosures: Dr. Buscaglia has received compensation for speaking and consulting for AbbVie and Boston Scientific. Dr. Kochman has received compensation for consulting for Boston Scientific, Dark Canyon Labs, Ferring, and Olympus. Dr. Mullady has received compensation for consulting for Boston Scientific and for speaking for AbbVie. Dr. Stevens has received compensation for speaking and consulting for AbbVie and Boston Scientific. Dr. Wani has received compensation for consulting for Boston Scientific and Medtronic. Other authors report no conflicts of interest.
Results of this study were presented in part as Oral Presentations at the Digestive Disease Week 2018, Washington DC.
Glossary
- ACGME
Accreditation Council for Graduate Medical Education
- AETPs
Advanced Endoscopy Training Programs
- AET
Advanced Endoscopy Trainee
- ASGE
American Society for Gastrointestinal Endoscopy
- CBME
Competency-Based Medical Education
- CUSUM
Cumulative Sum
- ERCP
Endoscopic Retrograde Cholangiopancreatography
- EUS
Endoscopic Ultrasound
- LCs
Learning Curves
- NAS
Next Accreditation System
- QIs
Quality Indicators
Footnotes
Clinicaltrials.gov identifier: NCT02509416.
References
- 1. Wani S, Keswani RN, Petersen B, et al. Training in EUS and ERCP: standardizing methods to assess competence. Gastrointest Endosc 2018;87:1371–1382.
- 2. Wani S, Keswani RN, Han S, et al. Competence in endoscopic ultrasound and endoscopic retrograde cholangiopancreatography, from training through independent practice. Gastroenterology 2018;(in press).
- 3. Elta GH, Jorgensen J, Coyle WJ. Training in interventional endoscopy: current and future state. Gastroenterology 2015;148:488–90.
- 4. Cote GA, Singh S, Bucksot LG, et al. Association between volume of endoscopic retrograde cholangiopancreatography at an academic medical center and use of pancreatobiliary therapy. Clin Gastroenterol Hepatol 2012;10:920–4.
- 5. Frank JR, Snell LS, Cate OT, et al. Competency-based medical education: theory to practice. Med Teach 2010;32:638–45.
- 6. Nasca TJ, Philibert I, Brigham T, et al. The next GME accreditation system--rationale and benefits. N Engl J Med 2012;366:1051–6.
- 7. Adler DG, Lieb JG 2nd, Cohen J, et al. Quality indicators for ERCP. Gastrointest Endosc 2015;81:54–66.
- 8. Wani S, Wallace MB, Cohen J, et al. Quality indicators for EUS. Gastrointest Endosc 2015;81:67–80.
- 9. Cotton PB, Eisen G, Romagnuolo J, et al. Grading the complexity of endoscopic procedures: results of an ASGE working party. Gastrointest Endosc 2011;73:868–74.
- 10. Mittal C, Obuch JC, Hammad H, et al. Technical feasibility, diagnostic yield, and safety of microforceps biopsies during EUS evaluation of pancreatic cystic lesions (with video). Gastrointest Endosc 2018;87:1263–1269.
- 11. Wani S, Keswani R, Hall M, et al. A prospective multicenter study evaluating learning curves and competence in endoscopic ultrasound and endoscopic retrograde cholangiopancreatography among advanced endoscopy trainees: the Rapid Assessment of Trainee Endoscopy Skills study. Clin Gastroenterol Hepatol 2017;15:1758–1767.e11.
- 12. Leong P, Deshpande S, Irving LB, et al. Endoscopic ultrasound fine-needle aspiration by experienced pulmonologists: a cusum analysis. Eur Respir J 2017;50.
- 13. Salowi MA, Choong YF, Goh PP, et al. CUSUM: a dynamic tool for monitoring competency in cataract surgery performance. Br J Ophthalmol 2010;94:445–9.
- 14. Lee YK, Ha YC, Hwang DS, et al. Learning curve of basic hip arthroscopy technique: CUSUM analysis. Knee Surg Sports Traumatol Arthrosc 2013;21:1940–4.
- 15. Smith SE, Tallentire VR. The right tool for the right job: the importance of CUSUM in self-assessment. Anaesthesia 2011;66:747; author reply 747–8.
- 16. Patel SG, Rastogi A, Austin G, et al. Gastroenterology trainees can easily learn histologic characterization of diminutive colorectal polyps with narrow band imaging. Clin Gastroenterol Hepatol 2013;11:997–1003.e1.
- 17. Ward ST, Mohammed MA, Walt R, et al. An analysis of the learning curve to achieve competency at colonoscopy using the JETS database. Gut 2014;63:1746–54.
- 18. Bolsin S, Colson M. The use of the Cusum technique in the assessment of trainee competence in new procedures. Int J Qual Health Care 2000;12:433–8.
- 19. Jowell PS, Baillie J, Branch MS, et al. Quantitative assessment of procedural competence. A prospective study of training in endoscopic retrograde cholangiopancreatography. Ann Intern Med 1996;125:983–9.
- 20. Shahidi N, Ou G, Telford J, et al. When trainees reach competency in performing ERCP: a systematic review. Gastrointest Endosc 2015;81:1337–42.
- 21. ASGE Standards of Practice Committee, Faulx AL, Lightdale JR, et al. Guidelines for privileging, credentialing, and proctoring to perform GI endoscopy. Gastrointest Endosc 2017;85:273–281.
- 22. Wani S, Hall M, Keswani RN, et al. Variation in aptitude of trainees in endoscopic ultrasonography, based on cumulative sum analysis. Clin Gastroenterol Hepatol 2015;13:1318–1325.e2.
- 23. Singh P, Aggarwal R, Zevin B, et al. A global Delphi consensus study on defining and measuring quality in surgical training. J Am Coll Surg 2014;219:346–53.e7.