Abstract
BACKGROUND & AIMS
Studies have reported substantial variation in the competency of advanced endoscopy trainees, indicating a need for more supervised training in endoscopic ultrasound (EUS). We used a standardized, validated, data collection tool to evaluate learning curves and measure competency in EUS among trainees at multiple centers.
METHODS
In a prospective study performed at 15 centers, 17 trainees with no prior EUS experience were evaluated by experienced attending endosonographers at their 25th and then every 10th upper EUS examination over a 12-month training period. A standardized data collection form with a 5-point scoring system was used to grade the EUS examinations. Cumulative sum analysis was applied to produce a learning curve for each trainee, tracking overall performance based on median scores across stations as well as performance at each individual station. Competency was defined by a median score of 1, with acceptable and unacceptable failure rates of 10% and 20%, respectively.
RESULTS
Twelve trainees were included in the final analysis. Each of the trainees performed 265 to 540 EUS examinations (total, 4257 examinations). There was a large amount of variation in their learning curves: 2 trainees crossed the threshold for acceptable performance (at cases 225 and 245), 2 trainees had a trend toward acceptable performance (after 289 and 355 cases) but required continued observation, and 8 trainees needed additional training and observation. Similar results were observed at individual stations.
CONCLUSIONS
A specific case load does not ensure competency in EUS; 225 cases should be considered the minimum caseload for training because we found that no trainee achieved competency before this point. Ongoing training should be provided for trainees until competency is confirmed using objective measures.
Keywords: RATE US Study, Training, Competency, Cumulative Sum Analysis
Competency-based medical education (CBME) represents a shift in medical education in which competency is assessed by trainees achieving milestones rather than a prerequisite number of required procedures. Although all gastroenterology training programs will be required to move toward CBME, there is a paucity of data supporting its use in basic and complex endoscopy. Endoscopic ultrasound (EUS) has become integral to the diagnosis and staging of gastrointestinal (GI) malignancy and lesions adjacent to the GI lumen.1 This procedure is operator dependent and training in EUS requires the development of technical, cognitive, and integrative skills beyond that required for standard endoscopic procedures. Unfortunately, the intensity and length of training, the requisite curriculum and extent of theoretical learning, and the minimum number of procedures required to ensure competency are not well defined.2 With the expanding indications and applications of EUS and the growing number of third tier training programs, standardization of the performance of EUS and the definition of competency and training is of paramount importance.
The American Society for Gastrointestinal Endoscopy recommends a minimum of 150 total supervised procedures, 75 of which have a pancreatobiliary indication and 50 cases of fine-needle aspiration (FNA) (25 of which should be pancreatic FNA), before competency can be determined.3 The European Society of Gastrointestinal Endoscopy guidelines recommend a minimum of 20 and 30 supervised EUS-FNA procedures on nonpancreatic and pancreatic lesions, respectively.4 However, these guidelines are based on limited data and expert opinion. These numbers have not been validated with regard to competency, or the feasibility and outcomes of training. Guidelines do not account for the different rates at which people learn5 and, in fact, many experts believe that the majority of trainees will require double the number of proposed procedures to achieve competency in EUS.6 Thus, a specific case load or a set number of procedures performed during training does not ensure competence in EUS.2 In addition, a survey of GI fellowship directors suggested that most 3-year and many advanced endoscopy trainees (AETs) receive insufficient EUS training.7 In a recent prospective pilot study, using a novel comprehensive EUS competency tool and cumulative sum analysis (CUSUM), we showed that there was substantial variability in achieving competency and a consistent need for supervision among AETs beyond the current American Society for Gastrointestinal Endoscopy recommendation of 150 cases.2
Given the increasing emphasis on quality metrics and competency in health care, the Accreditation Council for Graduate Medical Education (ACGME) recently announced plans to replace their current reporting system in 2014 with the Next Accreditation System (NAS). Within the realm of advanced endoscopy training, GI societies need to respond to these needs by adopting CBME and an outcomes-based approach to evaluate AETs. Thus, using a standardized data collection tool, the aim of this multicenter study was to prospectively define learning curves and measure competency in EUS in a large cohort of AETs across multiple US training programs using CUSUM analysis.
Methods
Study Design
This was a prospective multicenter trial conducted at 15 tertiary referral centers. This study was approved by the Human Research Protection Office at each participating center. All authors had access to the study data and reviewed and approved the final manuscript.
Study Subjects and Data Collection
AETs from these centers participated in this study from July 2012 to June 2013. The baseline EUS training level of trainees was assessed at all participating centers. All trainees had completed a 3-year gastroenterology fellowship in the United States and none had any prior experience or training in EUS (<25 hands-on EUS examinations and no prior experience with EUS-FNA during the standard gastroenterology fellowship). All trainees consented to be evaluated for the study and were introduced to both the cognitive and technical aspects of EUS procedures at the onset of their training. Experienced attending endosonographers at each of these centers were responsible for EUS training.
The study methodology was similar to our previously described pilot study.2 Starting with the 25th hands-on EUS examination, each trainee was graded on every 10th upper EUS examination. Grading involved the ability to perform endoscopic intubation and clear identification of important landmarks at various EUS stations. These included the aortopulmonary window and subcarina, celiac axis, body of pancreas, tail of pancreas, portosplenic confluence, head and neck of pancreas, common bile and hepatic duct, gallbladder, uncinate process, and ampulla. When applicable, the trainee also was graded on the ability to identify the lesion of interest, assign an appropriate TNM stage in suspected malignancy, characterize the wall layer of subepithelial lesions, and technical success with FNA.
A 5-point scoring system was used to grade the end points described earlier: 1, no assistance needed; 2, minimal assistance (one verbal instruction needed); 3, moderate assistance (multiple verbal instructions); 4, significant assistance (hands-on assistance); and 5, unable to achieve. The process of systematically categorizing evaluations was explained, discussed, and clarified by the principal investigator with each participating center individually. This grading system was discussed and standardized among all attending endosonographers (Figure 1). All trainees had at least 1 minute per station before any instructions were provided. Feedback was provided to the trainee at the end of each examination. Procedural complications were documented. EUS examinations during which the trainee had no hands-on participation were excluded. If the 10th examination scheduled for grading was incomplete for reasons such as medical instability or an obstructing esophageal tumor, or was a lower EUS, then that examination was not graded; the next examination was graded instead, and grading then reverted to the every-10th-examination schedule.
Figure 1.
Data collection form.
Endoscopic Ultrasound Procedure
EUS examinations were performed using a curvilinear or radial array echoendoscope. The decision to use a curvilinear, radial array, or both echoendoscopes for each procedure was up to the discretion of the attending endosonographers. In cases in which both the radial and curvilinear echoendoscopes were used, the grading was performed using only a single echoendoscope at the discretion of the supervising attending.
Study Objectives
By using a standardized data collection tool, the primary objective of this study was to assess the overall learning curve using median scores for all stations and end points. The secondary objective was to assess learning curves for individual stations during the EUS examination when at least 10 evaluations for each station had been performed.
Statistical Analysis and Data Management
The research coordinator at Washington University in St. Louis collected the grading forms from all participating centers and performed the data entry. All patient identifiers were deleted in compliance with the Health Information Portability and Accountability Act regulations. AETs were de-identified and data were entered into the database using Microsoft Excel for Windows 2007 (Microsoft Corp, Redmond, WA). Data analysis was performed by a senior outcomes researcher (M.H.) using SAS version 9.3 (SAS Institute, Cary, NC).
CUSUM analysis was applied to assess the learning curve in EUS for each trainee. These control charts continuously assess the performance of an individual against a predetermined standard to detect adverse trends and to allow for early intervention (re-training or continued observation).2 In the assessment of EUS performance using the grading system described earlier, a rating of 1 was considered a success and a rating greater than 1 was considered a failure. The ability to obtain scores of 1 overall for the entire EUS procedure and for each station and other end points was tracked. The overall score for the entire EUS procedure was calculated as the median performance of the stations and other end points. As an example, if the trainee was rated on 10 items and achieved a score of 1 on 6 items and a score of 2 on 4 items, the median overall score would be 1.
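The overall scoring rule described above can be sketched in a few lines. This is a minimal illustration, not the study's actual analysis code (which was performed in SAS); the function name is ours.

```python
from statistics import median

# Sketch of the study's success definition: the overall score for a graded
# examination is the median of the per-station ratings, and a median of 1
# ("no assistance needed" on at least half the items) counts as a success.
def overall_success(ratings):
    """Return (median score, success flag) for a list of 1-5 station ratings."""
    m = median(ratings)
    return m, m == 1

# Worked example from the text: scores of 1 on 6 items and 2 on 4 items.
score, success = overall_success([1] * 6 + [2] * 4)
print(score, success)  # median 1 -> success
```

As in the text's example, six ratings of 1 and four ratings of 2 yield a median of 1, so the examination counts as a success.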
The following is a summary of the CUSUM analysis published by Bolsin and Colson8 and highlighted by other investigators. Successful procedures are given a score of s, and failed procedures are given a score of 1 − s. These values are based on prespecified acceptable failure rates (p0, the level of inherent error when the procedure is performed correctly) and unacceptable failure rates (p1, where p1 − p0 represents the maximum acceptable level of human error). For this study, we used p0 = 0.1 and p1 = 0.2. Then, P = ln(p1/p0); Q = ln[(1 − p0)/(1 − p1)]; and s = Q/(P + Q) = 0.15, and 1 − s = 0.85.
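As a quick check of these constants, the computation can be reproduced directly (a sketch, using the sign convention for Q that yields the stated values s = 0.15 and 1 − s = 0.85):

```python
import math

# CUSUM weights from the study's prespecified failure rates:
# p0 = 0.10 (acceptable) and p1 = 0.20 (unacceptable).
p0, p1 = 0.10, 0.20
P = math.log(p1 / p0)              # ln(p1/p0) ~ 0.693
Q = math.log((1 - p0) / (1 - p1))  # ln[(1-p0)/(1-p1)] ~ 0.118
s = Q / (P + Q)                    # amount subtracted after a success
print(round(s, 2), round(1 - s, 2))  # 0.15 0.85
```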
The CUSUM curve is created by plotting the cumulative sum after each case (subtracting s for successes and adding 1 − s for failures) against the index number of that case.
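The curve construction described above can be sketched as follows; the outcome sequence here is illustrative only.

```python
# Build a CUSUM curve from a sequence of graded examinations
# (True = success, i.e., median score of 1): subtract s after each
# success and add 1 - s after each failure, plotting the running total
# against the case index. s = 0.15 follows the study's parameters.
def cusum_curve(outcomes, s=0.15):
    total, curve = 0.0, []
    for success in outcomes:
        total += -s if success else 1 - s
        curve.append(round(total, 2))
    return curve

print(cusum_curve([True, True, False, True]))
# -> [-0.15, -0.3, 0.55, 0.4]
```

Note how a single failure (+0.85) undoes several successes (−0.15 each), which is why the penalty value had to be modified for the every-10th-case sampling described next.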
In this study, the penalty value (1 − s) was modified because evaluations were performed for every 10th EUS examination. If the penalty was not modified, to overcome a single failure (0.85), a fellow would have to have a minimum of 6 successes (0.15 * 6 = 0.90). Because evaluations were performed every 10th EUS examination, this would be a sequence of 60 examinations. Therefore, 2 plausible imputations of the missing 9 examinations between the evaluations were used: assume all were correct (CUSUM = −1.2), and assume all were correct but one (CUSUM = −0.2). These 2 values were subtracted from 1 − s and averaged to yield the revised penalty value of 0.15.
Decision limits (H0 and H1) then were calculated based on the type I (α) and type II (β) errors as follows: H1 = a/(P + Q) and H0 = −b/(P + Q), where a = ln[(1 − β)/α] and b = ln[(1 − α)/β]. Because α = β = 0.1 for this analysis, H0 = −2.71 and H1 = 2.71.
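The decision limits and the three-way reading of the curve can be sketched together; the `interpret` helper is our illustration of the rule stated in the text, not code from the study.

```python
import math

# Decision limits for alpha = beta = 0.1, p0 = 0.1, p1 = 0.2.
alpha = beta = 0.1
p0, p1 = 0.10, 0.20
P = math.log(p1 / p0)
Q = math.log((1 - p0) / (1 - p1))
H1 = math.log((1 - beta) / alpha) / (P + Q)   # upper limit, ~ +2.71
H0 = -math.log((1 - alpha) / beta) / (P + Q)  # lower limit, ~ -2.71

def interpret(cusum_value):
    """Map a running CUSUM value to the study's three decision categories."""
    if cusum_value <= H0:
        return "competent"           # crossed lower limit: acceptable failure rate
    if cusum_value >= H1:
        return "retrain and reset"   # crossed upper limit: unacceptable failure rate
    return "continue observation"    # between limits: keep observing

print(round(H1, 2), interpret(-3.0))  # 2.71 competent
```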
An upward projection of the CUSUM score suggested a success rate lower than expected, and a stable or downward projection indicated increasing competency. If the CUSUM curve crossed the upper decision limit from below, the failure rate had reached the preset unacceptable rate, requiring further training, and the CUSUM was reset to zero and begun again.9 If the CUSUM curve crossed the lower decision limit from above, then the failure rate was within the preset acceptable rate and competency was achieved; no further observation was necessary. If the CUSUM curve remained between the 2 decision limits, then continued observation was necessary. The gold standard for this analysis was the impression of the attending endosonographer. Sensitivity analyses assessing learning curves while varying the unacceptable failure rate (p1) between 0.15 and 0.35 were performed. In addition, competency was assessed using a less stringent definition of success: median scores of 1 (no assistance) or 2 (one instruction) reflecting success.
Results
A total of 17 AETs from 15 centers with advanced endoscopy training programs participated in this study. There were 5 trainees with fewer than 10 evaluations who were not included in the final analysis. There was a total of 4257 EUS examinations, and there was wide variation in the number of EUS procedures performed by the 12 included trainees at the different participating centers (range, 231–540 per trainee). The overall number of evaluations for this study was 296 (range, 12–59 per trainee). The distribution of evaluations based on the echoendoscope used was as follows: linear, 67%; radial, 18%; linear and radial, 7%; and not recorded, 8%. The evaluations reported 2 adverse events: 1 case of duodenal bulb perforation that required surgery and 1 case of mild post-EUS FNA pancreatitis requiring admission for 24 hours.
Overall Learning Curve Analysis
A graphic representation of learning curves among the trainees using median scores across all stations is presented in Figure 2. A positive deflection indicated a failure, whereas a negative deflection represented a success. Crossing the lower limit threshold indicated that performance was within the acceptable failure rate of 10%, and crossing the upper limit threshold indicated an unacceptable failure rate of 20%. These data show substantial variability among trainees, with only 2 trainees crossing the threshold for acceptable performance, at EUS case numbers 225 and 245. Two trainees showed a trend toward acceptable performance after 289 and 355 cases, but were categorized into the group that needed ongoing observation. Eight trainees showed a need for ongoing training and observation. Learning curves for individual stations are shown in Table 1 and in Supplementary Figures 1, 2, and 3.
Figure 2.
Overall graphic representation of the learning curve among all trainees by using CUSUM analysis.
Table 1.
Advanced Endoscopy Trainees Achieving Competency (Overall and Individual Stations) Based on Case Number
| | Number of fellows | Case number |
|---|---|---|
| Overall competency | 2 | 225, 245 |
| Head and neck of pancreas | 2 | 265, 385 |
| Body of pancreas | 3 | 265, 205, 280 |
| Tail of pancreas | 1 | 225 |
| Portosplenic confluence | 2 | 285, 385 |
| Common bile duct, common hepatic duct | 1 | 365 |
| Ampulla | 1 | 485 |
| Gallbladder | 2 | 295, 435 |
| Uncinate process | 1 | 405 |
| Celiac axis | 4 | 265, 225, 320, 255 |
| Aortopulmonary window/subcarina | 4 | 265, 225, 320, 255 |
| Endotracheal intubation | 7 | 285, 205, 235, 275, 225, 46, 215 |
Subgroup Analysis
The impact of the acceptable and unacceptable failure rates used in the primary analysis was explored. By using less stringent criteria of acceptable and unacceptable failure rates of 10% and 30%, similar results were noted. A total of 4 trainees crossed the threshold for competency. Based on these criteria, 3 trainees achieved competency in evaluating the head/neck of the pancreas, 5 trainees achieved competency in evaluating the body of the pancreas, 4 trainees achieved competency for the tail, and 1 trainee achieved competency for the uncinate process. Two trainees crossed the lower threshold in evaluating the common bile duct/common hepatic duct and the ampulla, and 3 trainees did so for the portosplenic confluence. Seven trainees achieved competency in evaluating the celiac axis and 4 trainees achieved competency in evaluating the mediastinum (Table 2).
Table 2.
Effect of Different Values of p1 on the Number of Fellows Showing Competency
| | p1 = 0.15 | 0.20 | 0.25 | 0.30 | 0.35 |
|---|---|---|---|---|---|
| Median (rating of 1 or 2 = success) | 1 | 3 | 6 | 8 | 8 |
| Median (rating of 1 = success) | 1 | 2 | 4 | 4 | 5 |
| Endotracheal intubation | 4 | 7 | 9 | 11 | 11 |
| Body | 1 | 3 | 3 | 5 | 5 |
| Tail | 1 | 2 | 3 | 4 | 4 |
| Head and neck | 0 | 2 | 3 | 3 | 3 |
| Portosplenic confluence | 0 | 3 | 4 | 4 | 4 |
| Common bile and hepatic duct | 0 | 1 | 2 | 2 | 2 |
| Ampulla | 0 | 1 | 2 | 2 | 2 |
| Gallbladder | 0 | 2 | 3 | 3 | 4 |
| Uncinate | 0 | 1 | 1 | 1 | 3 |
| Aortopulmonary window subcarina | 1 | 2 | 2 | 4 | 4 |
| Celiac axis | 1 | 4 | 5 | 7 | 7 |
Figure 3 shows the overall learning curves using a less stringent definition of success (median score, 1 or 2). Similar results were noted using this definition and a total of 3 trainees achieved competency. Different values of p1 yielded different results in terms of defining trainees as competent (Table 2). Adjustment of the p1 value from 0.15 to 0.35 increased the proportion of trainees who were deemed competent.
Figure 3.
Graphic representation of the overall learning curve using a less stringent definition for success (rating, 1 or 2).
Discussion
CBME, defined as “outcomes-based approach to the design, implementation, assessment and evaluation of a medical education program using an organizing framework of competencies,” is quickly moving from theory to reality for subspecialty fellowship training.10 The ACGME recently announced plans to replace their current reporting system in 2014 with the NAS, which focuses on the following: (1) ensuring that milestones are reached at various points in training, (2) ensuring that competence is achieved by all trainees, and (3) making certain that these assessments are documented by their programs. Advanced endoscopy training programs should adopt competency-based training and show that trainees have attained the technical, cognitive, and integrative skills that are required for safe and effective unsupervised practice in advanced endoscopy. The current practice of determining competency based on the number of procedures and global impression of trainees’ competence at the end of training without the use of predefined criteria needs to be changed.
Although competency-based medical education now is recommended for all endoscopic training, it may be more critical in complex endoscopic procedures requiring multiple cognitive and technical skills. Despite this, data supporting its use remain scarce. Results of this prospective multicenter study evaluating learning curves in AETs using a standardized data collection tool and CUSUM analysis show that a specific case load does not ensure competency in EUS. There was substantial variability in the number of EUS procedures performed during the training period and in the number of examinations required to cross the threshold for acceptable performance. Two trainees achieved competency at case numbers 225 and 245, 2 trainees showed a trend toward competency but required ongoing observation, and 8 trainees showed the need for ongoing training and observation. Similar results were noted in the assessment of learning curves for individual stations during the EUS examination. Because none of the trainees achieved competency at fewer than 225 cases, these results suggest that 225 cases should be the minimum case experience available in training programs. A sensitivity analysis using a less stringent definition of competency (median score, 1 or 2) yielded similar results.
Previous studies describing competency assessment tools in endoscopic procedures such as colonoscopy and endoscopic retrograde cholangiopancreatography have focused on a limited number of motor skills and frequently used single end points such as cecal intubation rates and bile duct cannulation rates with no procedure-related cognitive skill assessment. The competency assessment tool designed and used in this study accounted for all key relevant constituents of an EUS examination, grading technical and cognitive skills in a balanced manner, which is critical for determining competence. By showing marked variability in when trainees achieved competence, this study confirms the importance of competency-based medical education in therapeutic endoscopy.
Available data on EUS learning curves, especially in the identification of relevant EUS stations, are limited. In a pilot prospective study that used the same comprehensive EUS competency tool, we defined learning curves in EUS among 5 AETs using CUSUM analysis for each trainee’s overall performance and for each anatomic station. Two trainees crossed the threshold for acceptable performance, 2 trainees showed a trend toward acceptable performance, and 1 trainee showed the need for ongoing training.2 A survey based on expert opinion suggested that an average of 43, 44, and 120 EUS examinations are required to attain competency in the interpretation of esophageal, gastric, and pancreatic images, respectively.11 Learning curves for endosonographic staging of esophageal carcinoma and pancreatic cancer have been described (threshold for acceptable accuracy, 100).12,13
There were several limitations of this study. The main limitation was that the interpretive finding of the attending endosonographer served as the gold standard, an inherent limitation of any study assessing learning curves using this methodology. In addition, endosonographers with varying levels of experience were included in this study. Thus, although a portion of the variability in trainee performance may reflect variability in competence as defined by trainers, this was mitigated by the use of a standardized technique of grading trainees. The results of this study do not provide data confirming that a standardized approach to verbal instructions was followed at each site. In addition, interobserver and intra-observer variability among trainers using the evaluation tool from our pilot and current study was not assessed. We accounted for this limitation by performing a sensitivity analysis varying the definition of competency (median score, 1 or 2), and noted no significant change in the proportion of trainees achieving competency. The possibility of spectrum bias cannot be excluded because cases of various stages and grades of disease were included in the grading process. However, the purpose of this study was to assess overall competency in evaluating all stations during an EUS examination. The trainee was in control of which procedure was graded and by whom, introducing the potential for bias. This is unlikely to be relevant to this study because the majority of trainees showed the need for ongoing observation at the end of their training, according to our preset criteria.
The study protocol used a predefined cut-off of 1 minute per station before instructions were provided by trainers, and this may have influenced trainee scores and learning curves. It is possible that the proportion of trainees achieving competency would have increased had trainees been allowed more than a minute per station. This relates to the broader issue that trainees clearly continue to improve after completion of training; indeed, some trainees may not achieve our predefined measures of competency until they are in independent practice. Thus, even less stringent criteria for competency may be required in endoscopic training programs. These issues require continued study across all endoscopic procedures.
The overall sample size was limited, especially for assessing relevant quality indicators and metrics in EUS,1 such as the diagnostic yield of malignancy in patients with pancreatic masses undergoing EUS-FNA. Similarly, this study did not assess learning curves for every aspect of an EUS examination, such as appropriate TNM staging and achieving FNA. The learning curve for EUS-FNA has been studied previously, especially for solid pancreatic lesions. Investigators with prior experience in diagnostic EUS and access to rapid on-site cytopathologic evaluation have reported a learning curve with increasing sensitivity for the cytopathologic diagnosis of cancer (threshold of 80% after 20–30 EUS-FNA procedures) and a decreasing number of passes needed to obtain adequate results (median of 3 passes after 150 EUS-FNA procedures).5 The investigators acknowledge that linear EUS examinations may be more challenging than radial EUS for the early learner. Because the vast majority of graded examinations were performed using linear EUS (67%), we were unable to draw any conclusion on whether the type of echoendoscope used affected the attainment of competency. It is possible that the learning curves would differ if training were performed predominantly using radial EUS. The impact of EUS training being curtailed during the entire training period on competency attainment needs to be assessed in future studies.
Strengths of this study included defining the learning curve during EUS training in a large, multicenter, prospective fashion using a structured tool that is easy to use yet granular enough to develop specific assessments for the breadth of procedural skills, together with strong statistical methodology. These results have led to the initiation of a large multicenter prospective study that will assess competency among AETs in a continuous fashion in EUS and endoscopic retrograde cholangiopancreatography using a novel tool with a comprehensive data collection and reporting system (clinicaltrials.gov NCT02247115). The goals of this study are multifold: (1) to facilitate the ability of training programs to evolve with the new ACGME/NAS reporting requirements; (2) to help program directors/trainers and trainees identify specific skill deficiencies in training and allow for tailored, individualized remediation; (3) to create on-demand detailed reports on how individual trainees are progressing compared with their peers nationwide; and (4) to establish reliable and generalizable standardized learning curves (milestones) and competency benchmarks that national GI societies and training programs can use to develop credentialing guidelines.
In summary, we can draw 2 main conclusions from the results of this study. First, the current guideline of 150 EUS examinations is inadequate to achieve competence in EUS; because none of the trainees achieved competency at fewer than 225 cases in this study, 225 cases should be considered the minimum case load in training programs. Second, individuals training in EUS acquire skills at different rates, and emphasis needs to shift away from the number of procedures performed toward performance metrics with defined and validated competency thresholds. This will ensure that trainees have attained the skills required for safe and effective unsupervised practice, which in turn will positively impact the quality of patient care and outcomes.
Supplementary Material
Acknowledgments
The authors acknowledge the efforts of the advanced endoscopy trainees who participated in this study.
Abbreviations used in this paper
- ACGME
Accreditation Council for Graduate Medical Education
- AET
advanced endoscopy trainee
- CBME
competency-based medical education
- CUSUM
cumulative sum analysis
- EUS
endoscopic ultrasound
- FNA
fine-needle aspiration
- GI
gastrointestinal
- NAS
Next Accreditation System
Footnotes
Note: To access the supplementary material accompanying this article, visit the online version of Clinical Gastroenterology and Hepatology at www.cghjournal.org, and at http://dx.doi.org/10.1016/j.cgh.2014.11.008.
Results of this study were presented in part as an oral presentation at Digestive Disease Week May 3–6, 2014, Chicago, IL.
Conflicts of interest
The authors disclose no conflicts.
References
- 1. Wani S, Wallace MB, Cohen J, et al. Quality indicators for endoscopic ultrasonography. Gastrointest Endosc. 2014. In press.
- 2. Wani S, Cote GA, Keswani R, et al. Learning curves for EUS by using cumulative sum analysis: implications for American Society for Gastrointestinal Endoscopy recommendations for training. Gastrointest Endosc. 2013;77:558–565. doi: 10.1016/j.gie.2012.10.012.
- 3. Eisen GM, Dominitz JA, Faigel DO, et al. Guidelines for credentialing and granting privileges for endoscopic ultrasound. Gastrointest Endosc. 2001;54:811–814. doi: 10.1016/s0016-5107(01)70082-x.
- 4. Polkowski M, Larghi A, Weynand B, et al. Learning, techniques, and complications of endoscopic ultrasound (EUS)-guided sampling in gastroenterology: European Society of Gastrointestinal Endoscopy (ESGE) Technical Guideline. Endoscopy. 2012;44:190–206. doi: 10.1055/s-0031-1291543.
- 5. Sarker SK, Albrani T, Zaman A, et al. Procedural performance in gastrointestinal endoscopy: live and simulated. World J Surg. 2010;34:1764–1770. doi: 10.1007/s00268-010-0579-0.
- 6. Faigel DO. Economic realities of EUS in an academic practice. Gastrointest Endosc. 2007;65:287–289. doi: 10.1016/j.gie.2006.06.045.
- 7. Azad JS, Verma D, Kapadia AS, et al. Can U.S. GI fellowship programs meet American Society for Gastrointestinal Endoscopy recommendations for training in EUS? A survey of U.S. GI fellowship program directors. Gastrointest Endosc. 2006;64:235–241. doi: 10.1016/j.gie.2006.04.041.
- 8. Bolsin S, Colson M. The use of the Cusum technique in the assessment of trainee competence in new procedures. Int J Qual Health Care. 2000;12:433–438. doi: 10.1093/intqhc/12.5.433.
- 9. Salowi MA, Choong YF, Goh PP, et al. CUSUM: a dynamic tool for monitoring competency in cataract surgery performance. Br J Ophthalmol. 2010;94:445–449. doi: 10.1136/bjo.2009.163063.
- 10. Iobst WF, Caverzagie KJ. Milestones and competency-based medical education. Gastroenterology. 2013;145:921–924. doi: 10.1053/j.gastro.2013.09.029.
- 11. Hoffman BJ, Hawes RH. Endoscopic ultrasound and clinical competence. Gastrointest Endosc Clin N Am. 1995;5:879–884.
- 12. Fockens P, Van den Brande JH, van Dullemen HM, et al. Endosonographic T-staging of esophageal carcinoma: a learning curve. Gastrointest Endosc. 1996;44:58–62. doi: 10.1016/s0016-5107(96)70230-4.
- 13. Gress FG, Hawes RH, Savides TJ, et al. Role of EUS in the preoperative staging of pancreatic cancer: a large single-center experience. Gastrointest Endosc. 1999;50:786–791. doi: 10.1016/s0016-5107(99)70159-8.