Author manuscript; available in PMC: 2019 Nov 1.
Published in final edited form as: Gastroenterology. 2018 Jul 26;155(5):1483–1494.e7. doi: 10.1053/j.gastro.2018.07.024

Competence in Endoscopic Ultrasound and Endoscopic Retrograde Cholangiopancreatography, From Training Through Independent Practice

Sachin Wani 1, Rajesh N Keswani 2, Samuel Han 1, Eva M Aagaard 3, Matthew Hall 4, Violette Simon 1, Wasif M Abidi 5, Subhas Banerjee 6, Todd H Baron 7, Michael Bartel 8, Erik Bowman 9, Brian C Brauer 1, Jonathan M Buscaglia 10, Linda Carlin 1, Amitabh Chak 11, Hemant Chatrath 12, Abhishek Choudhary 6, Bradley Confer 13, Gregory A Coté 14, Koushik K Das 3, Christopher J DiMaio 15, Andrew M Dries 16, Steven A Edmundowicz 1, Abdul Hamid El Chafic 17, Ihab El Hajj 18, Swan Ellert 1, Jason Ferreira 19, Anthony Gamboa 20, Ian S Gan 21, Lisa M Gangarosa 7, Bhargava Gannavarapu 2, Stuart R Gordon 19, Nalini M Guda 22, Hazem T Hammad 1, Cynthia Harris 23, Sujai Jalaj 7, Paul S Jowell 24, Sana Kenshil 25, Jason Klapman 23, Michael L Kochman 26, Srinadh Komanduri 2, Gabriel Lang 3, Linda S Lee 5, David E Loren 17, Frank J Lukens 8, Daniel Mullady 3, V Raman Muthusamy 12, Andrew S Nett 27, Mojtaba S Olyaee 28, Kavous Pakseresht 28, Pranith Perera 27, Patrick Pfau 9, Cyrus Piraka 29, John M Poneros 30, Amit Rastogi 28, Anthony Razzak 21, Brian Riff 15, Shreyas Saligram 23, James M Scheiman 27, Isaiah Schuster 10, Raj J Shah 1, Rishi Sharma 31, Joshua P Spaete 24, Ajaypal Singh 11, Muhammad Sohail 32, Jayaprakash Sreenarasimhaiah 33, Tyler Stevens 13, James H Tabibian 26, Demetrios Tzimas 10, Dushant S Uppal 34, Shiro Urayama 31, Domenico Vitterbo 33, Andrew Y Wang 34, Wahid Wassef 32, Patrick Yachimski 20, Sergio Zepeda-Gomez 25, Tobias Zuchelli 29, Dayna Early 3
PMCID: PMC6504935  NIHMSID: NIHMS1017793  PMID: 30056094

Abstract

BACKGROUND & AIMS:

It is unclear whether participation in competency-based fellowship programs for endoscopic ultrasound (EUS) and endoscopic retrograde cholangiopancreatography (ERCP) results in high-quality care in independent practice. We measured quality indicator (QI) adherence during the first year of independent practice among physicians who completed endoscopic training with a systematic assessment of competence.

METHODS:

We performed a prospective multicenter cohort study of invited participants from 62 training programs. In phase 1, 24 advanced endoscopy trainees (AETs), from 20 programs, were assessed using a validated competence assessment tool. We used a comprehensive data collection and reporting system to create learning curves using cumulative sum analysis that were shared with AETs and trainers quarterly. In phase 2, participating AETs entered data into a database pertaining to every EUS and ERCP examination during their first year of independent practice, anchored by key QIs.

RESULTS:

By the end of training, most AETs had achieved overall technical competence (EUS 91.7%, ERCP 73.9%) and cognitive competence (EUS 91.7%, ERCP 94.1%). In phase 2 of the study, 22 AETs (91.6%) participated, completing a median of 136 EUS and 116 ERCP examinations per AET. Most AETs met the performance thresholds for QIs in EUS (including 94.4% diagnostic rate of adequate samples and 83.8% diagnostic yield of malignancy in pancreatic masses) and ERCP (94.9% overall cannulation rate).

CONCLUSIONS:

In this prospective multicenter study, we found that although competence cannot be confirmed for all AETs at the end of training, most meet QI thresholds for EUS and ERCP at the end of their first year of independent practice. This finding affirms the effectiveness of training programs. Clinicaltrials.gov ID NCT02509416.

Keywords: Quality Indicators, Advanced Endoscopy Training, Learning Curves, The EUS and ERCP Skills Assessment Tool (TEESAT)

Graphical Abstract



The number of advanced endoscopy fellowship programs (AEFPs) has increased markedly in the past two decades.1 These fellowships were created to address inadequate endoscopic ultrasound (EUS) and endoscopic retrograde cholangiopancreatography (ERCP) training during standard 3-year Accreditation Council for Graduate Medical Education (ACGME)-accredited gastroenterology fellowships.2,3 There is widespread acknowledgement that EUS and ERCP are operator-dependent, technically challenging procedures requiring unique technical, cognitive, and integrative skills.3,4 Thus, it is imperative that AEFPs produce endoscopists who safely and effectively perform these high-risk endoscopic procedures in independent practice.5–8

A fundamental shift is gradually occurring at all levels of medical training in the United States as we transition from an apprenticeship model to competency-based medical education.3,9,10 Given the increasing emphasis on standardizing competence assessments and demonstrating readiness for independent practice, the ACGME replaced its reporting system with the Next Accreditation System, a continuous assessment reporting system focused on ensuring that specific milestones are reached throughout training, that competence is achieved by all trainees, and that these assessments are documented by training programs.3,11 Training programs have adapted in response, but the impact of these changes remains unclear.

Our prior research (1) confirmed substantial variability in learning curves and competence among advanced endoscopy trainees (AETs) in EUS and ERCP; (2) developed a task-specific tool with strong validity evidence for the assessment of EUS and ERCP competence, The EUS and ERCP Skills Assessment Tool (TEESAT); and (3) demonstrated the feasibility of a centralized database to report “on-demand” individualized EUS and ERCP learning curves that can identify targeted skill deficiencies and allow for tailored individualized remediation.3,4,12–14 However, a critical question remains to be answered: does trainee participation in a competency-based fellowship program with continuous feedback translate to high-quality patient care in independent practice? There are limited data on the progression of learning curves in independent practice among graduates of procedure-based training programs. Although the American Society for Gastrointestinal Endoscopy (ASGE) and American College of Gastroenterology Joint Task Force on Quality in Endoscopy recently published documents highlighting quality indicators (QIs) in EUS and ERCP,15,16 it is unclear whether graduating AETs achieve these QIs. This has important implications because reimbursement in health care is increasingly tied to quality.

The primary aim of this study was to measure adherence to QI thresholds during the first year of independent practice among physicians who previously underwent systematic assessments of competence throughout their AEFP. The central hypothesis was that AETs who participate in a competency-based procedural training program with continuous feedback would meet QI thresholds in EUS and ERCP during their first year of independent practice.

Methods

Study Design

This was a prospective multicenter cohort study of AEFPs in the United States (Supplementary Table 1). Approval from the institutional review board or the human research protection office at each site involved was obtained (clinicaltrials.gov, NCT02509416) and signed informed consent was obtained from all AETs. All authors had access to the study data and reviewed and approved the final manuscript. This study was conducted in 2 phases: in phase 1, AETs were assessed during their advanced endoscopy fellowship training; in phase 2, participating AETs entered data pertaining to every EUS and ERCP examination during their first year of independent practice, anchored by key QIs.15,16

Study Setting and Subjects

Program directors and AETs at all U.S.-registered AEFPs (http://www.asgematch.com/) were invited to participate in this study. All AETs had completed a standard ACGME-accredited gastroenterology fellowship and were beginning a 1-year EUS and ERCP AEFP. AETs completed questionnaires at study inception (assessing baseline characteristics) and at completion of phase 1 (assessing comfort level on a 5-point Likert scale, attitudes, and practice trends in independent practice) (Supplementary Figures 1 and 2).4,15,16

Grading of AETs—Phase 1 (July 2015-June 2016)

AETs were graded on every fifth EUS and ERCP after the completion of 25 hands-on EUS and ERCP examinations. This frequency of grading was chosen to improve feasibility, decrease the overall burden of evaluations, and ensure that an adequate sample was available to analyze EUS and ERCP learning curves. Grading was standardized and performed by attending endoscopists at each center. Procedures in which AETs had no hands-on participation were excluded from grading. The study protocol required that grading be performed immediately after the procedure to decrease recall bias and the halo and recency effects. The principal investigator (S.W.) conducted a standard-setting exercise with the site principal investigators and program directors (Digestive Disease Week, May 2015). In addition, a digital presentation reviewing the assessment tool and grading protocol was distributed to all trainers and AETs.

Competency Assessment Tool—TEESAT

We used TEESAT, a procedure-specific competence assessment tool with strong validity evidence endorsed by the ASGE, to assess EUS and ERCP skills in a continuous fashion throughout training (Supplementary Figures 3 and 4).3,12 TEESAT uses a 4-point scoring system for individual tasks and an overall global rating scale.17 The descriptive anchors allowed trainers to tie specific behaviors and skills to each score and ensured reproducibility over the course of the study. The end points used in this tool parallel the key QIs established for EUS and ERCP.3,15,16

Comprehensive Data Collection and Reporting System

As we previously described,4 an integrated, comprehensive data collection and reporting system was created to streamline data collection from the participating institutions and apply cumulative sum (CUSUM) analysis. All study participants entered their data into the University of Colorado REDCap, a secure online database system. Using the REDCap Application Programming Interface and SAS 9.3 (SAS Institute, Cary, North Carolina), graphical representations of overall and individual end-point CUSUM learning curves were generated on demand. Access to these data was controlled by a custom module. Unique logins were provided to program directors and trainees, allowing them to view individual learning curves, provided on a quarterly basis, and to compare individual performance with the study cohort average.
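To make the data pipeline concrete, the sketch below shows one way such an export could be scripted against the REDCap Application Programming Interface; the study's actual implementation used SAS 9.3, and the endpoint URL and field names here (for example, trainee_id) are hypothetical placeholders, not the study's project configuration.

```python
# Illustrative sketch only: export evaluation records from a REDCap project
# via its standard API and group them by trainee. URL, token, and field names
# are hypothetical placeholders.
import requests

REDCAP_URL = "https://redcap.example.edu/api/"   # hypothetical endpoint
API_TOKEN = "REPLACE_WITH_PROJECT_TOKEN"         # project-specific token

def export_evaluations() -> list[dict]:
    """Export all evaluation records for one project as JSON."""
    payload = {
        "token": API_TOKEN,
        "content": "record",     # standard REDCap record-export action
        "format": "json",
        "type": "flat",
    }
    response = requests.post(REDCAP_URL, data=payload, timeout=30)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    records = export_evaluations()
    # Group records by trainee so learning curves can be computed per AET.
    by_trainee: dict[str, list[dict]] = {}
    for rec in records:
        by_trainee.setdefault(rec.get("trainee_id", "unknown"), []).append(rec)
    print({aet: len(evals) for aet, evals in by_trainee.items()})
```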

Performance of Trainees in Independent Practice—Phase 2 (July 2016-June 2017)

AETs who completed phase 1 were invited to participate in phase 2. In phase 2, participants reported performance on every EUS and ERCP examination during their first year of independent practice. The end points for this evaluation were based on key EUS and ERCP QIs (Supplementary Figures 5 and 6).15,16 Briefly, for EUS, these included (1) adequate sample obtained during EUS-guided fine-needle aspiration (FNA), (2) diagnostic yield of malignancy, and (3) occurrence of an adverse event (bleeding, perforation, or acute pancreatitis). For ERCP, these included (1) deep cannulation of the duct of interest, (2) successful extraction of common bile duct stones smaller than 1 cm, if present, (3) successful stent placement in patients with biliary obstruction, and (4) occurrence of an adverse event (bleeding, perforation, or acute pancreatitis).
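As an illustration of how these end points translate into per-procedure data capture, the following minimal sketch computes the two EUS outcome QIs from a list of hypothetical case records; the field names are invented for demonstration and are not the study's actual database schema.

```python
# Minimal sketch with invented fields; QI definitions follow the cited
# ASGE/ACG documents, but this is not the study's reporting instrument.
from dataclasses import dataclass

@dataclass
class EusCase:
    fna_of_solid_lesion: bool   # EUS-FNA attempted on a solid lesion
    sample_adequate: bool       # cytology adequate for diagnosis
    pancreatic_mass: bool       # indication was a solid pancreatic mass
    malignant_on_fna: bool      # FNA established a diagnosis of malignancy

def eus_quality_indicators(cases: list[EusCase]) -> dict[str, float]:
    """Compute the two EUS outcome QIs tracked in phase 2."""
    fna_cases = [c for c in cases if c.fna_of_solid_lesion]
    mass_cases = [c for c in cases if c.pancreatic_mass and c.fna_of_solid_lesion]
    return {
        # Performance target >= 85% per the cited QI document
        "adequate_sample_rate": sum(c.sample_adequate for c in fna_cases) / len(fna_cases),
        # Performance target >= 70% (priority indicator)
        "malignancy_diagnostic_rate": sum(c.malignant_on_fna for c in mass_cases) / len(mass_cases),
    }

# Example with two hypothetical cases: one adequate, malignant FNA and one
# non-diagnostic FNA of a pancreatic mass.
cases = [EusCase(True, True, True, True), EusCase(True, False, True, False)]
print(eus_quality_indicators(cases))
```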

Study Outcomes

The primary study outcome was adherence to established EUS and ERCP QIs during the first year of independent practice in AEFP graduates. Secondary outcomes were to (1) validate the feasibility of establishing a centralized online national database that enabled program directors and AETs to generate trainee-specific learning curves (overall and for individual end points) in relation to peers, (2) refine EUS and ERCP learning curves among AETs, (3) compare performance of AETs using a procedure-based competence assessment tool (TEESAT) and an overall global rating of competence, and (4) examine the perceptions and practice patterns among AETs in early independent practice.

Statistical Analysis

The trainers’ assessment was the gold standard for this analysis. CUSUM analysis was applied to create learning curves for each trainee. By continuously studying the control charts, trainers could compare each trainee’s performance with a predetermined standard, detect negative trends, and provide earlier feedback (retraining or continued observation).3,4 This approach to assessing learning curves and competence has been widely described in health care.4,12,13,18–27 In the phase 1 primary analysis, success was defined as a TEESAT score of 1 (no assistance) or 2 (minimal verbal cues), whereas a score of 3 or 4 was considered a failure. For the overall global rating, a score of 3 or 4 represented success. Overall scores for EUS and ERCP were based on the median score for all technical and cognitive end points. The creation of CUSUM graphs as summarized by Bolsin and Colson28 has been described previously.4,12 Successful procedures are given a score of s, and failed procedures are given a score of 1 − s. These values are based on prespecified acceptable failure rates (p0, the level of inherent error if procedures are performed competently) and unacceptable failure rates (p1, where p1 − p0 represents the maximum acceptable level of human error). For this study, we used p0 = 0.1 and p1 = 0.3. CUSUM scores were then calculated using the following formulas: P = ln(p1/p0); Q = ln[(1 − p1)/(1 − p0)]; s = Q/(P + Q) = 0.15; and 1 − s = 0.85. The CUSUM curve was created by plotting the cumulative score Cn, the sum of all individual outcome scores, after each case against the index number of that case. The CUSUM graph was designed to signal when Cn crossed predetermined limits. These limits are displayed as horizontal lines on the graph and were calculated based on the risks of type I (α) and type II (β) error, both set at 0.1 for this analysis. The formulas for the limits H0 and H1 were H1 = a/(P + Q) and H0 = b/(P + Q), where a = ln[(1 − β)/α] and b = ln[(1 − α)/β]. If the CUSUM plot was below the acceptable line, then the performance was acceptable with the predetermined type II error; if the CUSUM plot was above the unacceptable line, then the performance was considered unacceptable; if the plot stayed between the 2 boundary lines, then no conclusion could be drawn and further training was recommended.
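The short Python sketch below illustrates the Bolsin and Colson CUSUM construction cited above (the study itself used SAS 9.3). In that standard form Q is taken as ln[(1 − p0)/(1 − p1)] so that the success score s is positive, and the derived constants may therefore differ slightly from the rounded values quoted in the text; the example trainee sequence is invented.

```python
# Minimal illustrative CUSUM sketch (Bolsin and Colson construction); not the
# study's SAS implementation. Constants follow the text: p0 = 0.1, p1 = 0.3,
# alpha = beta = 0.1.
import math

p0, p1 = 0.1, 0.3            # acceptable / unacceptable failure rates
alpha = beta = 0.1           # type I / type II error risks

P = math.log(p1 / p0)
Q = math.log((1 - p0) / (1 - p1))    # positive, so the success score s > 0
s = Q / (P + Q)                      # decrement applied after a successful case
a = math.log((1 - beta) / alpha)
b = math.log((1 - alpha) / beta)
h_unacceptable = a / (P + Q)         # crossing above: performance unacceptable
h_acceptable = -b / (P + Q)          # crossing below: acceptable performance signalled

def cusum(successes: list[bool]) -> list[float]:
    """Running CUSUM score: -s per success, +(1 - s) per failure."""
    total, curve = 0.0, []
    for ok in successes:
        total += -s if ok else (1.0 - s)
        curve.append(total)
    return curve

# Hypothetical trainee: one early failure followed by a run of successes drifts
# the curve downward until it crosses the acceptable boundary.
curve = cusum([False] + [True] * 20)
print(round(h_acceptable, 2), round(h_unacceptable, 2), round(curve[-1], 2))
```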

Comprehensive learning curves were created for individual technical and cognitive end points in addition to overall EUS and ERCP performance. The impact of variable unacceptable failure rates (p1) and the use of stringent definitions of success (score of 1 for individual end points on TEESAT or score of 4 on the global rating scale) on competence rates among AETs were explored in sensitivity analyses. AETs with fewer than 20 overall evaluations were excluded. We stratified the AETs by whether or not they had prior experience with EUS and ERCP and compared the proportions achieving competence with χ2 tests and the number of evaluations needed to achieve competence (among those achieving competence) with Wilcoxon rank-sum tests. For ERCP, we compared the proportion of cases that were ASGE grade 1 and the proportion of cases that were native papilla cannulations across AETs using χ2 tests. Kappa (κ) statistics were used to compare the agreement between TEESAT and the overall global rating with regard to AETs achieving competence in EUS and ERCP (overall technical and cognitive success). The strength of rater agreement was categorized using criteria proposed by Landis and Koch29: 0.00–0.20, slight; 0.21–0.40, fair; 0.41–0.60, moderate; 0.61–0.80, substantial; 0.81–1.00, almost perfect. All data were analyzed directly from the centralized database using SAS.
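For the agreement analysis, a small self-contained sketch of an unweighted Cohen's κ calculation with the Landis and Koch categories is shown below; the paired competence calls are invented for demonstration and are not study data.

```python
# Illustrative sketch: unweighted Cohen's kappa for paired binary competence
# calls (e.g., TEESAT vs overall global rating, one pair per AET), categorized
# with the Landis and Koch (1977) scale described in the text.
from collections import Counter

def cohens_kappa(calls_a, calls_b):
    """Unweighted Cohen's kappa for two equal-length lists of labels."""
    n = len(calls_a)
    observed = sum(a == b for a, b in zip(calls_a, calls_b)) / n
    freq_a, freq_b = Counter(calls_a), Counter(calls_b)
    expected = sum(freq_a[k] * freq_b[k] for k in set(freq_a) | set(freq_b)) / (n * n)
    return (observed - expected) / (1 - expected)

def landis_koch(kappa):
    """Map kappa to the Landis and Koch agreement categories."""
    if kappa < 0:
        return "poor (less than chance)"
    for cutoff, label in [(0.20, "slight"), (0.40, "fair"),
                          (0.60, "moderate"), (0.80, "substantial")]:
        if kappa <= cutoff:
            return label
    return "almost perfect"

# Hypothetical per-AET competence calls: 1 = competence confirmed, 0 = not confirmed.
teesat =        [1, 1, 1, 0, 0, 1, 1, 0, 1, 1]
global_rating = [1, 1, 1, 0, 1, 1, 0, 0, 1, 1]
k = cohens_kappa(teesat, global_rating)
print(f"kappa = {k:.2f} ({landis_koch(k)})")
```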

Results

Of the 62 AEFPs invited to participate in phase 1, 32 programs (51.6%) comprising 37 AETs agreed to participate; ultimately, 24 AETs from 20 training programs met the inclusion criteria (sufficient number of evaluations) and were included in the final analysis (Supplementary Table 1 and Supplementary Figure 7). At baseline, most AETs had received formal procedure-related cognitive training and hands-on training in EUS (52%; median case volume 20) and ERCP (68%; median case volume 40; Supplementary Table 2).

Phase 1—Learning Curves and Competence in EUS and ERCP

Endoscopic Ultrasound.

At the end of advanced endoscopy training, AETs had performed a median of 400 EUS examinations (range 200–750). A total of 1277 EUS examinations were assessed during phase 1 (70% performed for pancreatobiliary indications). The vast majority of AETs achieved overall technical and cognitive competence (91.7% for both) using the definition of success as a TEESAT score of 1 or 2 (primary analysis; Table 1). Variable results were noted for individual technical and cognitive end points, with the lowest competence rate noted for the performance of EUS-FNA (63.6%). Figure 1 presents a graphical representation of learning curves in EUS using CUSUM analysis. There was no difference between AETs with and without prior EUS experience in the proportion achieving competence (P = .99) or in the number of evaluations needed to achieve competence (P = .58).

Table 1.

Phase 1 Results—Advanced Endoscopy Trainees Achieving Competence in EUS

Study end point | AETs meeting inclusion criteria, n | Evaluations, n | AETs achieving competence, n (%): primary analysisa | AETs achieving competence, n (%): secondary analysisb

Technical aspects
 Intubation | 24 | 1146 | 24 (100) | 24 (100)
 Body of pancreas | 24 | 1014 | 23 (95.8) | 14 (58.3)
 Tail of pancreas | 23 | 966 | 21 (91.3) | 13 (56.5)
 Head and neck of pancreas | 23 | 956 | 21 (91.3) | 14 (60.9)
 Uncinate process | 19 | 746 | 15 (78.9) | 3 (15.9)
 Ampulla | 19 | 742 | 14 (73.7) | 7 (36.8)
 Gallbladder | 13 | 489 | 12 (92.3) | 9 (69.2)
 CBD and CHD | 21 | 849 | 15 (71.4) | 9 (42.9)
 Porto-splenic confluence | 22 | 887 | 20 (90.9) | 11 (50)
 Celiac axis | 22 | 972 | 22 (100) | 16 (72.7)
 Achieve FNA | 11 | 320 | 7 (63.6) | 4 (36.3)
 Overall technical | 24 | 1151 | 22 (91.7) | 18 (75)
Cognitive aspects
 Identify lesion of interest, appropriately ruled out | 23 | 1068 | 21 (91.3) | 10 (43.5)
 Appropriate differential diagnosis | 22 | 925 | 22 (100) | 13 (59.0)
 Appropriate management plan | 23 | 997 | 22 (95.7) | 14 (60.9)
 Overall cognitive | 24 | 1113 | 22 (91.7) | 12 (50)

CBD, common bile duct; CHD, common hepatic duct.

a In the primary analysis, success was defined using a score of 1 or 2 (no assistance or minimal verbal cues), an acceptable failure rate (level of inherent error if procedures are performed competently; p0 = 0.1), and an unacceptable failure rate (exceeding the maximum level of acceptable error rate; p1 = 0.3).

b In the secondary analysis, success was defined as a score of 1 (stringent definition of success).

Figure 1. Learning curves of individual trainees achieving and those not achieving competence for the end point of overall ERCP and EUS technical competence. Learning curves were made with CUSUM analysis using median scores for overall technical and cognitive aspects of biliary ERCP and EUS (a positive deflection indicates an incompetent result [score of 3 or 4] and a negative deflection represents a competent result [score of 1 or 2]).

Endoscopic Retrograde Cholangiopancreatography.

At the end of training, AETs had performed a median of 361 ERCPs (range 250–650). A total of 1339 ERCP examinations were assessed during phase 1, of which the majority were performed for biliary indications (n = 1143, 85.4%). Of these biliary ERCPs, 67.5% were performed for choledocholithiasis or biliary strictures and 56.9% were performed in patients with a native papilla; 72.2% met the definition of ASGE grade of difficulty 1. We identified differences in the distribution of assessed cases across AETs based on native papilla cannulations and ASGE grade of difficulty. The median percentage of grade of difficulty 1 cases and of native papilla cannulation cases across AETs was 72.2% (interquartile range [IQR] 65–80) and 61.2% (IQR 44.8–75), respectively, and this distribution varied significantly across AETs (P < .001) for both end points. In our primary analysis, the proportion of AETs achieving overall technical and cognitive competence in biliary ERCP was 73.9% and 94.1%, respectively. The number of AETs achieving competence (primary analysis and stringent definition of success) for individual technical and cognitive end points in biliary ERCP varied and is presented in Table 2. Consistent with prior results,4,12 although 78.9% of AETs achieved competence in overall cannulation, approximately half (54.5%) achieved competence for the end point of cannulation in cases with a native papilla. Figures 1 and 2 present graphical representations of learning curves in ERCP using CUSUM analysis. There was no difference between AETs with and without prior ERCP experience in the proportion achieving competence (P = .5) or in the number of evaluations needed to achieve competence (P = .1). The limited number of assessed ERCPs for pancreatic indications precluded any meaningful individual learning curve analysis for pancreatic ERCPs.

Table 2.

Advanced Endoscopy Trainees Achieving Competence in Biliary ERCP during Phase 1

Study end point | AETs meeting inclusion criteria, n | Evaluations, n | AETs achieving competence, n (%): primary analysisa | AETs achieving competence, n (%): secondary analysisb

Basic maneuvers
 Intubation | 23 | 984 | 23 (100) | 20 (87)
 Achieving short position | 22 | 951 | 21 (95.5) | 19 (86.4)
 Identifying papilla | 21 | 930 | 21 (100) | 20 (95.2)
 Overall cannulation | 19 | 774 | 15 (78.9) | 5 (26.3)
 Cannulation, native papilla only | 11 | 295 | 6 (54.5) | 2 (18.2)
 Sphincterotomy | 11 | 318 | 8 (72.7) | 1 (9.1)
 Wire placement in desired biliary duct | 17 | 662 | 15 (88.2) | 5 (29.4)
 Balloon sweep | 17 | 611 | 16 (94.1) | 10 (58.8)
 Stone clearance | 7 | 170 | 6 (85.7) | 2 (28.6)
 Stricture dilation | 4 | 92 | 3 (75) | 2 (50)
 Stent insertion | 8 | 270 | 8 (100) | 4 (50)
 Overall technical | 23 | 972 | 17 (73.9) | 6 (26.1)
Cognitive aspects
 Demonstrated clear understanding of indication | 22 | 955 | 21 (95.5) | 16 (72.7)
 Appropriate use of fluoroscopy | 22 | 942 | 20 (90.9) | 6 (27.3)
 Proficient use of real-time cholangiography | 22 | 939 | 19 (86.4) | 7 (31.8)
 Logical plan based on cholangiogram | 22 | 946 | 18 (81.8) | 10 (45.5)
 Demonstrated clear understanding of use of rectal indomethacin | 17 | 595 | 16 (94.1) | 11 (64.7)
 Overall cognitive | 23 | 985 | 22 (95.7) | 17 (73.9)
a In the primary analysis, success was defined using a score of 1 or 2 (no assistance or minimal verbal cues), an acceptable failure rate (level of inherent error if procedures are performed competently; p0 = 0.1), and an unacceptable failure rate (exceeding the maximum level of acceptable error rate; p1 = 0.3).

b In the secondary analysis, success was defined as a score of 1 (stringent definition of success).

Figure 2. Learning curves of individual trainees achieving competence for individual end points in ERCP. Graphical representation shows learning curves for cannulation overall, cannulation of NP cases, stone clearance, and sphincterotomy. Learning curves were made with CUSUM analysis using scores for individual end points (a positive deflection indicates an incompetent result [score of 3 or 4] and a negative deflection represents a competent result [score of 1 or 2]). NP, native papilla.

Practice Plans and Comfort Level in Performing EUS and ERCP at End of Phase 1

Of the 24 AETs, 19 (79.1%) completed the post-study questionnaire. Nearly all AETs strongly agreed or tended to agree that they were comfortable independently performing EUS and ERCP (94% for both; Supplementary Table 3). Most AETs began their independent practice in an academic setting (n = 11, 57.9%) or in a practice with a high-volume senior partner performing EUS (n = 13, 68.4%) and ERCP (n = 15, 78.9%; Supplementary Table 4). Nearly all AETs expressed some difficulty in finding an advanced endoscopy position at completion of training. Credentialing was most often determined by the number of procedures performed (63.2%) and/or completion of an AEFP (36.8%); proctoring at the outset was infrequently used (21.1%).

Phase 2—Performance in First Year of Independent Practice

Of the 24 AETs included for final analysis in phase 1, 22 (91.6%) participated in phase 2 and completed a total of 3258 EUS and 2621 ERCP examinations during their first year of independent practice.

Endoscopic Ultrasound.

Study participants performed a median of 136.5 EUS procedures (IQR 102–204); 65% of all procedures were performed for pancreatobiliary indications, and EUS-FNA was performed in 41.4% of all cases (Supplementary Table 5). Table 3 presents performance in the first year of independent practice based on key established QIs in EUS. In this cohort, the overall diagnostic rate of an adequate sample for all solid lesions undergoing EUS-FNA was 94.4% (range 77.1–100), and the performance target of at least 85% was reached by 90.5% of participants. Similarly, the overall diagnostic rate for malignancy in patients undergoing EUS-FNA of pancreatic masses was 83.8% (range 45–100), and the performance target of at least 70% was reached by 81% of participants. The incidence of the adverse events of acute pancreatitis, perforation, and bleeding was below the established thresholds.

Table 3.

Performance of Advanced Endoscopy Trainees in First Year of Independent Practice Based on ASGE and American College of Gastroenterology Established Quality Indicators in EUS and ERCP (Phase 2)

QIa (measure type and performance target) | Procedures, n | Overall AET performance, n (%) | Range, % | AETs reaching performance target, n (%)

EUS
 Diagnostic rate of adequate sample in all solid lesions undergoing EUS-FNA (outcome ≥ 85%) | 1255 | 1185 (94.4) | 77.1–100 | 19 (90.5)
 Diagnostic rates for malignancy in patients undergoing EUS-FNA of pancreatic masses (outcome ≥ 70%; priority indicator) | 519 | 435 (83.8) | 45–100 | 17 (81)
 Incidence of adverse events after EUS-FNA
  Acute pancreatitis (outcome < 2%) | 3258 | 13 (0.4) | NA
  Perforation (outcome < 0.5%) | 3258 | 2 (0.06) | NA
  Clinically significant bleeding (outcome < 1%) | 3258 | 8 (0.25) | NA
ERCP
 Frequency with which deep cannulation of ducts of interest is achieved (process, NA) | 2668 | 2532 (94.9) | 84–100
 Frequency with which deep cannulation of ducts of interest in patients with native papillae is achieved (process > 90%; priority indicator) | 1552 | 1445 (93.1) | 76.5–100 | 17 (77.3)
 Frequency with which common bile duct stones < 1 cm are extracted successfully (outcome ≥ 90%) | 1141 | 1068 (93.6) | 62.1–100 | 18 (81.8)
 Frequency with which stent placement for biliary obstruction is successfully achieved (outcome ≥ 90%) | 1325 | 1244 (93.9) | 80–100 | 15 (68.2)
 Adverse events
  Rate of post-ERCP pancreatitis (outcome NA; priority indicator) | 2673 | 67 (2.51)
  Rate of perforation (outcome ≤ 0.2%) | 2673 | 9 (0.34)
  Rate of clinically significant hemorrhage (outcome ≤ 1%) | 2673 | 22 (0.82)

NA, not applicable.

a Based on the ASGE and American College of Gastroenterology QIs in EUS and ERCP.

Endoscopic Retrograde Cholangiopancreatography.

The median number of ERCPs completed in phase 2 was 116.5 (IQR 48–169). The most common indication was choledocholithiasis, and 58.4% of cases involved a native papilla (Supplementary Table 6). Table 3 presents performance in the first year of independent practice based on established key QIs (process and outcome measures) in ERCP. Deep cannulation of the duct of interest in native papilla cases was achieved in 93.1% (range 76.5–100), and 77.3% of participants met the performance target of higher than 90%. Common bile duct stones smaller than 1 cm were extracted successfully in 93.6% of cases, and 81.8% of participants met the performance target of at least 90%. Successful biliary stent placement was achieved in 93.9% of all cases. The overall adverse event rate was 3.7%, with a post-ERCP pancreatitis rate of 2.5%.

Subgroup Analyses

There was no difference in basic attributes between participating and nonparticipating advanced endoscopy training programs (Table 4). We compared the performance of TEESAT and the overall global rating in assessing overall technical and cognitive competence in EUS and ERCP (Supplementary Table 7). Agreement between TEESAT and the global rating scale was fair for EUS competence (technical: κ = 0.36, 95% CI −0.02 to 0.74; cognitive: κ = 0.36, 95% CI −0.01 to 0.74) and slight for ERCP competence (technical: κ = 0.01, 95% CI −0.28 to 0.26; cognitive: κ = 0.0).

Table 4.

Comparison of Advanced Endoscopy Training Programs

Program characteristic | Programs included in RATES2 study (n = 20) | Programs not included in RATES2 study (n = 42) | P value

Number of AETs (median) | 1 (1–2) | 1 (1–2) | .21
Annual ERCP volume (median) | 480 (300–800) | 450 (225–1015) | .36
Annual EUS volume (median) | 450 (300–1200) | 400 (300–950) | .35

RATES2, Rapid Assessment of Trainee Endoscopy Skills-2.

To measure the relation between achieving competence during training (phase 1) and outcomes at the end of the first year of independent practice (phase 2), performance on quality indicators was compared between AETs confirmed to have achieved competence (based on the definition described earlier) and those not confirmed to have achieved competence. No difference in performance on key QIs in EUS and ERCP was noted between the 2 groups (Supplementary Table 8).

Discussion

The primary goal of endoscopy training is to graduate competent individuals with a mindset of ongoing personal outcomes assessment and continuous quality improvement.30 However, there are scant data on the performance of endoscopists beginning independent practice. Thus, it is unclear whether our AEFPs produce “high-quality” independent practitioners. The results of this large multicenter prospective study demonstrate that most AETs achieved competence by the end of training. Moreover, although competence could not be confirmed for all AETs at the end of their AEFP, most AETs met QI thresholds for routine EUS and ERCP at the end of their first year of independent practice. The results of this study are timely as we transition from a volume-based to a value-based practice and thus must ensure that our training programs are producing high-quality independent practitioners.

This study demonstrates the substantial variability in EUS and ERCP learning curves among AETs. These results are consistent with prior studies4,12,13,31,32 and support the proposed shift away from using an absolute number of cases performed during training as a surrogate for competence and toward well-defined performance thresholds with strong validity evidence.4 These results also are consistent with data on surgical training: in a recent prospective study, not all graduating U.S. general surgery residents were assessed as able to independently perform core procedures, raising the possibility that these graduates are not competent to begin independent practice.33 However, studies of this nature did not subsequently assess the performance of trainees in independent practice. Reassuringly, we found that nearly all AETs achieved QI thresholds in EUS and ERCP at the end of their first year of independent practice. This suggests that even those AETs who do not demonstrate competence during training will show continuous improvement in independent practice, ultimately achieving high-quality care. This study also validates the feasibility of creating a centralized national database that allowed for continuous monitoring and reporting of individualized learning curves on demand using a novel comprehensive data collection and reporting system. In addition, this system allowed for monitoring of performance in independent practice with provision of information on individual physicians’ key QIs. These results have important implications for medical educators, especially in procedure-based training programs.

The Next Accreditation System emphasizes the need for individualized, continuous feedback for trainees because this provides an opportunity for continuous self-improvement and learning. AEFPs are challenged with assessing competence across different technical and cognitive skills. Although mounting evidence suggests that global rating scales demonstrate reliability and validity comparable to checklist-based assessment tools, there are limited data comparing these two approaches in advanced endoscopy training.4,34 Results of this study demonstrate poor agreement between an objective checklist-based evaluation tool with strong validity evidence (TEESAT) and an overall global rating in assessing AET competence in EUS and ERCP. Because a checklist-based tool provides meaningful, targeted feedback on granular, educationally trustworthy activities and allows trainers and AETs to monitor performance with regard to key QIs in EUS and ERCP, our data suggest that competence assessment should be performed using a checklist-based evaluation tool. Our study questionnaires provide important data regarding practice patterns among AETs embarking on independent practice. Although most joined academic centers, a majority also expressed difficulty in finding jobs at the end of their training because of a saturated advanced endoscopy job market, consistent with the results of a recent survey study.35 Gastrointestinal trainees considering a career in therapeutic endoscopy need to be aware of these current trends. Interestingly, credentialing at most centers was determined by completion of advanced endoscopy training alone or by the number of procedures completed during training. Consistent with the results of a recent nationwide survey,36 this study showed wide variation in credentialing practices, and fewer than 50% of hospitals had any of the criteria recommended by the ASGE guidelines on credentialing to perform ERCP.6

Our study has several limitations. This was not a randomized controlled trial establishing the superiority of this training approach over the current paradigm, which uses the number of procedures performed during training as a surrogate for competence. These results are derived from self-reported outcomes; objective data from electronic medical records on patient outcomes, adverse events, and mortality were not available. Although such studies are not available in endoscopic training, objective data have been used successfully to rank the clinical outcomes achieved by graduates of general surgery (in-hospital death, postoperative complications, length of stay) and obstetrics and gynecology (maternal complication rates) training programs.37,38 The possibility of recall and reporting bias inherent to self-reported data cannot be excluded. In addition, there is a risk that physicians in independent practice might “game the numbers” through risk transfer, leading to risk-shifting behavior and resultant higher performance rates on established QIs. This study did not include all AEFPs in the United States, limiting the overall generalizability of the results. Although the potential for selection bias exists, there was no difference in basic attributes between participating and nonparticipating advanced endoscopy training programs. There also is the possibility of selection bias among AETs (inclusion of motivated AETs) and trainers (inclusion of selected cases for assessment of competence among AETs). This study also included trainers with different cumulative experience and training styles. These limitations were accounted for by the use of a standardized assessment tool with strong validity evidence that has descriptive anchors for specific end points. Differences in the distribution of cases based on ASGE grade of difficulty and the proportion of native papilla cannulation cases across AETs could have affected the proportion of AETs achieving competence at the end of training and their performance in independent practice. Missing data are a well-described limitation of studies evaluating learning curves in endoscopic procedures and have been shown not to influence overall outcomes. This study demonstrated that most AETs expressed comfort in performing basic EUS and ERCP at the end of their training; however, it did not assess comfort level among trainers regarding AETs independently performing EUS and ERCP examinations. Apart from prior exposure to EUS and ERCP during general gastrointestinal fellowship training, this study was not designed to assess other predictors of competence. These limitations need to be addressed in future studies.

Our study also has several strengths. The findings provide the first empirical support for widely held intuitions regarding improvement in endoscopist learning curves in independent practice and the ability to meet QI thresholds. This prospective multicenter study included the largest cohort of AETs and advanced endoscopy programs. The study also provided construct validity evidence for our assessment tool and for the data collection and reporting system, with learning curves modeled using robust CUSUM methodology.

Conclusions

Excellence in endoscopic training requires a paradigm shift from an apprenticeship model to a competence- and outcomes-based model of medical education. This study demonstrates the substantial variability in learning curves in advanced endoscopy training. Although competence could not be confirmed for all AETs at the end of training, most met QI thresholds for routine EUS and ERCP at the end of their first year of independent practice. Continuous monitoring and on-demand reporting of individualized learning curves with targeted feedback (core elements of competency-based medical education) proved feasible and can be exported to other procedure-based training programs, potentially raising the quality of medical education and improving patient outcomes.


WHAT YOU NEED TO KNOW.

BACKGROUND AND CONTEXT

There are limited data on the progression of learning curves in independent practice among procedure-based training programs focused on EUS and ERCP.

NEW FINDINGS

The majority of advanced endoscopy trainees participating in competency-based fellowship programs achieve competence in EUS and ERCP at the end of training and meet the quality indicator (QI) thresholds in EUS and ERCP at the end of their first year of independent practice.

LIMITATIONS

Results on QIs are derived from self-reported outcomes, with an inherent lack of a control group (no feedback). This study did not include all advanced endoscopy programs in the United States, thus limiting the generalizability of the results.

IMPACT

These results affirm the effectiveness of current training programs. The feasibility of reporting individualized learning curves on demand with targeted feedback can be exported to other procedure-based training programs.

Acknowledgments

Funding

This study was funded by the American Society for Gastrointestinal Endoscopy (ASGE) 2015 Endoscopy Research Award and the University of Colorado Department of Medicine Outstanding Early Scholars Program (to Sachin Wani). REDCap was supported by funding from the NIH/NCRR Colorado CTSI (grant UL1 TR001082). The ASGE had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.

Results of this study were presented in part as Presidential Plenary Oral Presentations at the Digestive Disease Week; Washington, DC; 2018.

Abbreviations used in this paper:

ASGE, American Society for Gastrointestinal Endoscopy; ACGME, Accreditation Council for Graduate Medical Education; AEFP, advanced endoscopy fellowship program; AET, advanced endoscopy trainee; CUSUM, cumulative sum; ERCP, endoscopic retrograde cholangiopancreatography; EUS, endoscopic ultrasound; FNA, fine-needle aspiration; IQR, interquartile range; QI, quality indicator; TEESAT, The EUS and ERCP Skills Assessment Tool.

Footnotes

Supplementary Material

Note: To access the supplementary material accompanying this article, visit the online version of Gastroenterology at www.gastrojournal.org, and at https://doi.org/10.1053/j.gastro.2018.07.024.

Conflicts of interest

The authors disclose the following: Jonathan M. Buscaglia has received compensation for speaking and consulting for Abbvie and Boston Scientific. Michael L. Kochman has received compensation for consulting for Boston Scientific, Dark Canyon Labs, Ferring, and Olympus. Tyler Stevens has received compensation for speaking and consulting for Abbvie and Boston Scientific. Andrew Y. Wang has received research funding from Cook Medical. Sachin Wani has received compensation for consulting for Boston Scientific and Medtronic. Other authors report no conflicts of interest.

References

1. Elta GH, Jorgensen J, Coyle WJ. Training in interventional endoscopy: current and future state. Gastroenterology 2015;148:488–490.
2. Wani S, Keswani R, Elta G, et al. Perceptions of training among program directors and trainees in complex endoscopic procedures (CEPs): a nationwide survey of US ACGME accredited gastroenterology training programs. Gastroenterology 2015;148:S-150.
3. Wani S, Keswani RN, Petersen B, et al. Training in EUS and ERCP: standardizing methods to assess competence. Gastrointest Endosc 2018;87:1371–1382.
4. Wani S, Keswani R, Hall M, et al. A prospective multicenter study evaluating learning curves and competence in endoscopic ultrasound and endoscopic retrograde cholangiopancreatography among advanced endoscopy trainees: the Rapid Assessment of Trainee Endoscopy Skills Study. Clin Gastroenterol Hepatol 2017;15:1758–1767.e11.
5. Patel SG, Keswani R, Elta G, et al. Status of competency-based medical education in endoscopy training: a nationwide survey of US ACGME-accredited gastroenterology training programs. Am J Gastroenterol 2015;110:956–962.
6. ASGE Standards of Practice Committee, Faulx AL, Lightdale JR, Acosta RD, et al. Guidelines for privileging, credentialing, and proctoring to perform GI endoscopy. Gastrointest Endosc 2017;85:273–281.
7. Polkowski M, Larghi A, Weynand B, et al. Learning, techniques, and complications of endoscopic ultrasound (EUS)-guided sampling in gastroenterology: European Society of Gastrointestinal Endoscopy (ESGE) Technical Guideline. Endoscopy 2012;44:190–206.
8. Springer J, Enns R, Romagnuolo J, et al. Canadian credentialing guidelines for endoscopic retrograde cholangiopancreatography. Can J Gastroenterol 2008;22:547–551.
9. Frank JR, Snell LS, Cate OT, et al. Competency-based medical education: theory to practice. Med Teach 2010;32:638–645.
10. Waschke KA, Coyle W. Advances and challenges in endoscopic training. Gastroenterology 2018;154:1985–1992.
11. Nasca TJ, Philibert I, Brigham T, et al. The next GME accreditation system—rationale and benefits. N Engl J Med 2012;366:1051–1056.
12. Wani S, Hall M, Wang AY, et al. Variation in learning curves and competence for ERCP among advanced endoscopy trainees by using cumulative sum analysis. Gastrointest Endosc 2016;83:711–719.e11.
13. Gaddam S, Ge PS, Keach JW, et al. Suboptimal accuracy of carcinoembryonic antigen in differentiation of mucinous and nonmucinous pancreatic cysts: results of a large multicenter study. Gastrointest Endosc 2015;82:1060–1069.
14. Wani S, Cote GA, Keswani R, et al. Learning curves for EUS by using cumulative sum analysis: implications for American Society for Gastrointestinal Endoscopy recommendations for training. Gastrointest Endosc 2013;77:558–565.
15. Wani S, Wallace MB, Cohen J, et al. Quality indicators for EUS. Gastrointest Endosc 2015;81:67–80.
16. Adler DG, Lieb JG II, Cohen J, et al. Quality indicators for ERCP. Gastrointest Endosc 2015;81:54–66.
17. Cotton PB, Eisen G, Romagnuolo J, et al. Grading the complexity of endoscopic procedures: results of an ASGE working party. Gastrointest Endosc 2011;73:868–874.
18. Giacchino M, Bansal A, Kim RE, et al. Clinical utility and interobserver agreement of autofluorescence imaging and magnification narrow-band imaging for the evaluation of Barrett's esophagus: a prospective tandem study. Gastrointest Endosc 2013;77:711–718.
19. Liu Z, Zhang X, Zhang W, et al. Comprehensive evaluation of the learning curve for peroral endoscopic myotomy. Clin Gastroenterol Hepatol 2018;16:1420–1426.e2.
20. Leong P, Deshpande S, Irving LB, et al. Endoscopic ultrasound fine-needle aspiration by experienced pulmonologists: a cusum analysis. Eur Respir J 2017;50(5).
21. Salowi MA, Choong YF, Goh PP, et al. CUSUM: a dynamic tool for monitoring competency in cataract surgery performance. Br J Ophthalmol 2010;94:445–449.
22. Lee YK, Ha YC, Hwang DS, et al. Learning curve of basic hip arthroscopy technique: CUSUM analysis. Knee Surg Sports Traumatol Arthrosc 2013;21:1940–1944.
23. Smith SE, Tallentire VR. The right tool for the right job: the importance of CUSUM in self-assessment. Anaesthesia 2011;66:747; author reply 747–748.
24. Patel SG, Rastogi A, Austin G, et al. Gastroenterology trainees can easily learn histologic characterization of diminutive colorectal polyps with narrow band imaging. Clin Gastroenterol Hepatol 2013;11:997–1003.e1.
25. Patel SG, Schoenfeld P, Kim HM, et al. Real-time characterization of diminutive colorectal polyp histology using narrow-band imaging: implications for the resect and discard strategy. Gastroenterology 2016;150:406–418.
26. Ward ST, Hancox A, Mohammed MA, et al. The learning curve to achieve satisfactory completion rates in upper GI endoscopy: an analysis of a national training database. Gut 2017;66:1022–1033.
27. Ward ST, Mohammed MA, Walt R, et al. An analysis of the learning curve to achieve competency at colonoscopy using the JETS database. Gut 2014;63:1746–1754.
28. Bolsin S, Colson M. The use of the Cusum technique in the assessment of trainee competence in new procedures. Int J Qual Health Care 2000;12:433–438.
29. Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics 1977;33:159–174.
30. Mellinger JD, Damewood R, Morris JB. Assessing the quality of graduate surgical training programs: perception vs reality. J Am Coll Surg 2015;220:785–789.
31. Ekkelenkamp VE, Koch AD, de Man RA, et al. Training and competence assessment in GI endoscopy: a systematic review. Gut 2016;65:607–615.
32. James PD, Antonova L, Martel M, et al. Measures of trainee performance in advanced endoscopy: a systematic review. Best Pract Res Clin Gastroenterol 2016;30:421–452.
33. George BC, Bohnen JD, Williams RG, et al. Readiness of US general surgery residents for independent practice. Ann Surg 2017;266:582–594.
34. Walzak A, Bacchus M, Schaefer JP, et al. Diagnosing technical competence in six bedside procedures: comparing checklists and a global rating scale in the assessment of resident performance. Acad Med 2015;90:1100–1108.
35. Granato CM, Kaul V, Kothari T, et al. Career prospects and professional landscape after advanced endoscopy fellowship training: a survey assessing graduates from 2009 to 2013. Gastrointest Endosc 2016;84:266–271.
36. Cotton PB, Feussner D, Dufault D, et al. A survey of credentialing for ERCP in the United States. Gastrointest Endosc 2017;86:866–869.
37. Bansal N, Simmons KD, Epstein AJ, et al. Using patient outcomes to evaluate general surgery residency program performance. JAMA Surg 2016;151:111–119.
38. Asch DA, Nicholson S, Srinivas S, et al. Evaluating obstetrical residency programs using patient outcomes. JAMA 2009;302:1277–1283.
