Abstract
Introduction:
The taxonomy code(s) associated with each National Provider Identifier (NPI) entry should characterize the provider’s role (e.g., physician) and any specialization (e.g., orthopedic surgery). While the intent of the taxonomy system was to monitor medical appropriateness and the expertise of care provided, this system is now being used by researchers to identify providers and their practices. It is unknown how accurate the taxonomy codes are in describing a provider’s true specialization.
Methods:
Orthopedic surgery and general surgery department websites from three large academic institutions were queried for practicing surgeons. Each surgeon’s listed specialty and subspecialty information was compared with the provider’s taxonomy code(s) listed on the National Plan and Provider Enumeration System (NPPES). The match rate between these data sources was evaluated by specialty, subspecialty, and institution.
Results:
There were 295 surgeons (205 general surgery and 90 orthopedic surgery) and 24 relevant taxonomies (8 orthopedic and 16 general or plastic) for analysis. Of these, 294 surgeons (99%) selected their general specialty taxonomy correctly, while only 189 (64%) correctly chose an appropriate subspecialty. General surgeons correctly chose a subspecialty more often than orthopedic surgeons (70% versus 51%, P = 0.002). The institution did not affect either match rate; however, there were some differences in subspecialty match rates within individual departments.
Conclusions:
In these institutions, the NPI taxonomy is not accurate for describing a surgeon’s subspecialty or actual practice. Caution should be taken when utilizing this variable to describe a surgeon’s subspecialization, as our findings might apply to other groups.
Keywords: Accuracy, National provider identifier, Surgeon, Taxonomy
Introduction
In 2004 the Centers for Medicare and Medicaid Services (CMS) created the National Provider Identifier (NPI), a unique 10-digit number assigned to each health care provider to identify them in Health Insurance Portability and Accountability Act (HIPAA) standard transactions (i.e., exchanges of information between entities: claims, benefits, payments, etc.).1 At least one taxonomy code was also attached to the NPI to further characterize what the provider does (e.g., physician, physical therapist, nurse, etc.) as well as any specialization (e.g., general surgery, orthopedic surgery, internal medicine, etc.). The original intent of the taxonomy was to allow payers and agencies to evaluate medical appropriateness and to monitor the expertise or level of care rendered by providers.2 Some investigators have utilized the NPI taxonomy to delineate specialization or practice focus when evaluating the composition or distribution of providers,3–6 the gender of providers,7 and for risk-adjusting outcomes,8,9 though this was not the intent of the system. When assessing patient outcomes in large datasets, provider specialty is one of the key variables that can be used along with patient demographics and medical comorbidities for appropriate risk adjustment. However, very little work has been done to evaluate the accuracy of the taxonomy in describing a provider’s scope of practice10 or means to improve these data.11
It is unclear whether the NPI taxonomy is an accurate assessment of the provider’s true clinical practice or specialization and whether this variable can be adequately used to discern a provider’s skillset for the risk adjustment of outcomes. Previous work has shown that surgical outcomes were better when surgeons had advanced training and/or had practices specializing in specific procedures.12–19 However, many of these studies were smaller in scope and were able to utilize means other than the NPI taxonomy to accurately discern a surgeon’s skillset and practice. In contrast, when using large-scale datasets (NTDB, Medicare, Optum) to examine questions at a population level, the NPI taxonomy is one of the few ways to assess a surgeon’s specialty or subspecialty. These large data sources do not contain the provider-specific details that are more easily obtained from smaller data sources. Consequently, the accuracy of this provider information is key to being able to appropriately risk adjust outcomes, as in the smaller-scope studies.
For these reasons, this study aimed to assess whether the NPI taxonomy found on the National Plan and Provider Enumeration System (NPPES) accurately described a surgeon’s specialization or subspecialization. We utilized publicly available information on department websites to ascertain a surgeon’s practice and previous training and then compared that with what was contained in the NPPES registry. Our primary outcome was the percentage of concordance between these two data sources for the provider’s specialty and subspecialty. Our secondary outcome was to examine any factors that might contribute to a lower agreement rate. Our hypotheses were that the accuracy of the NPI taxonomy is low and that there would be no identifiable factors affecting this finding.
Methods
Data were obtained by querying the publicly accessible websites of three orthopedic surgery departments and three general surgery departments (Detroit Medical Center/Wayne State University, Henry Ford Health System, and Michigan Medicine | University of Michigan Health). These three institutions were chosen because they were known to the authors, and thus there was some understanding of the nuances of these practice environments to facilitate the data collection. For the Henry Ford Health System, the “Find A Doctor” search feature was utilized to identify surgeons using the pull-down option of either “orthopedic surgery” or “surgery.” At the other institutions, the department webpages were accessed directly to identify the relevant surgeons. The data were extracted by the primary author over a 2-wk period (8/1/21 – 8/15/21). Data elements included: surgeon’s first and last name, affiliated institution, primary specialty (orthopedic surgery or general surgery), and any applicable subspecialty (e.g., trauma, adult reconstruction, vascular, transplant, etc.). As there were some variations in how certain subspecialties were described on each website, similar ones were combined into a larger group based on an implied area of practice (e.g., thoracic, cardiothoracic, and cardiovascular = “cardiothoracic”). This combination was also felt to be appropriate because the NPPES taxonomy description for this group is “Cardiothoracic Vascular Surgery” (Table 1). Information was obtained from a simple search of the websites to populate these data elements. Only information available on these websites was utilized, and any personal knowledge of these surgeons’ actual practices was ignored so as not to bias the results.
Table 1 –
Department Subspecialties from Websites (including associated combinations).
Department - Subspecialty | Surgeons |
---|---|
Orthopedic surgery | |
Sports | 19 |
Adult reconstruction | 18 |
Pediatric orthopedic | 12 |
Trauma | 11 |
Hand | 9 |
Spine | 8 |
Foot & ankle | 5 |
General orthopedic | 4 |
Oncology | 3 |
Shoulder/Elbow | 1 |
Total | 90 |
General surgery | |
Acute care surgery (trauma, critical care) | 37 |
Plastic surgery | 33 |
Oncology (breast) | 20 |
Vascular | 19 |
Transplant | 18 |
Cardiothoracic (thoracic, cardiovascular) | 17 |
Pediatric surgery | 16 |
Colorectal | 14 |
General surgery | 12 |
Bariatric/Minimally Invasive | 12 |
Endocrine | 4 |
Hepatobiliary | 2 |
Endoscopic | 1 |
Total | 205 |
The inclusion criterion was being a provider listed on the website of either the department of orthopedic surgery or general surgery. Providers were excluded if they had a practice not consistent with either specialty (e.g., oral surgeons) or did not have a clinically operative practice (e.g., research appointment, nonoperative, legal counsel, emeritus status, or remotely located adjunct). Any providers appearing on the websites of two institutions due to an overlap of academic appointments were assigned to a department based on their group practice affiliation (e.g., a Henry Ford Health System employed surgeon holding a Wayne State University academic appointment). The surgeons’ NPI taxonomies were obtained by querying the records search feature on the NPPES, utilizing the first and last names to correctly identify the surgeon. In the event that a surgeon shared a first and last name with another provider, a closer examination of specialty and location of practice was performed to identify the correct surgeon. All taxonomy codes (primary, secondary, and tertiary) were recorded if available, as well as the chosen state of practice. Relevant NPI taxonomy codes are listed by associated specialty (Table 2).
Table 2 –
National plan and provider enumeration system (NPPES) taxonomies.
Provider taxonomy code | Provider taxonomy descriptions – Orthopedic surgery |
---|---|
207X00000X | Allopathic & osteopathic physicians/Orthopedic surgery |
207XS0114X | Allopathic & osteopathic physicians/Orthopedic surgery, adult reconstructive orthopedic surgery |
207XX0004X | Allopathic & osteopathic physicians/Orthopedic surgery, foot and ankle surgery |
207XS0106X | Allopathic & osteopathic physicians/Orthopedic surgery, hand surgery |
207XS0117X | Allopathic & osteopathic physicians/Orthopedic surgery, orthopedic surgery of the spine |
207XX0801X | Allopathic & osteopathic physicians/Orthopedic surgery, orthopedic trauma |
207XP3100X | Allopathic & osteopathic physicians/Orthopedic surgery, pediatric orthopedic surgery |
207XX0005X | Allopathic & osteopathic physicians/Orthopedic surgery, sports medicine |
Provider taxonomy descriptions – General and plastic surgery | |
208600000X | Allopathic & osteopathic physicians/surgery |
2086H0002X | Allopathic & osteopathic physicians/surgery/hospice and palliative medicine |
2086S0120X | Allopathic & osteopathic physicians/surgery/pediatric surgery |
2086S0122X | Allopathic & osteopathic physicians/surgery/plastic and reconstructive surgery |
2086S0105X | Allopathic & osteopathic physicians/surgery/surgery of the hand |
2086S0102X | Allopathic & osteopathic physicians/surgery/surgical critical care |
2086X0206X | Allopathic & osteopathic physicians/surgery/surgical oncology |
2086S0127X | Allopathic & osteopathic physicians/surgery/trauma surgery |
2086S0129X | Allopathic & osteopathic physicians/surgery/vascular surgery |
208G00000X | Allopathic & osteopathic physicians/thoracic surgery (cardiothoracic vascular surgery) |
204F00000X | Allopathic & osteopathic physicians/transplant surgery |
208C00000X | Allopathic & osteopathic physicians/colon & rectal surgery |
208200000X | Allopathic & osteopathic physicians/plastic surgery |
2082S0099X | Allopathic & osteopathic physicians/plastic surgery, plastic surgery within the head and neck |
2082S0105X | Allopathic & osteopathic physicians/plastic surgery, surgery of the hand |
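For readers wishing to replicate the name-based NPPES lookup described above at scale, the registry also exposes a public search API in addition to the web-based records search used in this study. The sketch below is illustrative only and is not part of the study’s methods; the endpoint, query parameters, and response fields reflect our understanding of the publicly documented NPI Registry API and should be verified against current CMS documentation.

```python
# Minimal sketch: look up a surgeon's NPPES taxonomies by name.
# Assumes the public NPI Registry API (https://npiregistry.cms.hhs.gov/api/);
# verify the endpoint, parameters, and response fields before relying on them.
import requests

NPPES_API = "https://npiregistry.cms.hhs.gov/api/"

def fetch_taxonomies(first_name, last_name, state="MI"):
    """Return (code, description, primary) tuples for matching individual providers."""
    params = {
        "version": "2.1",             # API version string expected by the registry
        "enumeration_type": "NPI-1",  # individual providers only
        "first_name": first_name,
        "last_name": last_name,
        "state": state,               # narrows duplicate names by practice location,
                                      # as was done manually in this study
        "limit": 10,
    }
    resp = requests.get(NPPES_API, params=params, timeout=30)
    resp.raise_for_status()
    taxonomies = []
    for record in resp.json().get("results", []):
        for tax in record.get("taxonomies", []):
            taxonomies.append((tax.get("code"), tax.get("desc"), tax.get("primary")))
    return taxonomies

# Example call with a hypothetical name:
# print(fetch_taxonomies("Jane", "Doe"))
```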
The surgeon’s primary specialty (orthopedic surgery or general surgery) (Table 1), as well as any subspecialty designated on the department website, was then compared for agreement with the taxonomies listed on the surgeon’s NPI record. The match was deemed successful if the specialty or subspecialty listed on the department website matched any of the taxonomies listed on the NPI record. A specialty match was also deemed successful if the surgeon selected only a subspecialty, but that subspecialty fell under the broader umbrella of the primary specialty (e.g., orthopedic trauma = orthopedic surgery). Plastic surgeons were deemed to have a successful specialty match if they selected either surgery or plastic surgery as their specialty, as some of these providers initially completed a general surgery residency before their plastic surgery specialization. However, they would only have a successful subspecialty match if they chose a relevant plastic surgery taxonomy. Statistical analyses were performed using Stata MP, version 16 (StataCorp, College Station, TX). Statistical significance was defined as a P value < 0.05. A chi-square analysis was performed to evaluate whether specialty or institution affected the agreement with the NPI taxonomy, and an intragroup comparison was also performed to evaluate for differing match rates across subspecialties. This study was submitted to the University of Michigan Medical School Institutional Review Board and given a determination of “not regulated” status, as this research only used publicly available data sets.
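To make these matching rules concrete, the following sketch encodes them in Python. The code-to-subspecialty mappings are an illustrative subset of Table 2, and the subspecialty labels are those used in Table 1; this is a simplified rendering of the logic described above, not the authors’ actual analysis script (which was performed in Stata).

```python
# Illustrative sketch of the specialty/subspecialty matching rules described above.
# Mappings are a partial subset of Tables 1 and 2; the real analysis was done in Stata.

# Taxonomy code prefixes that imply a specialty "umbrella".
ORTHO_PREFIX = "207X"
GENERAL_PREFIXES = ("2086", "208G", "204F", "208C")  # surgery, thoracic, transplant, colorectal
PLASTIC_PREFIX = "2082"

# Website subspecialty label -> acceptable taxonomy codes (illustrative subset).
SUBSPECIALTY_CODES = {
    "Adult reconstruction": {"207XS0114X"},
    "Pediatric surgery": {"2086S0120X"},
    "Transplant": {"204F00000X"},
    "Plastic surgery": {"208200000X", "2082S0099X", "2082S0105X", "2086S0122X"},
}

def specialty_match(website_specialty, npi_codes):
    """A specialty matches if any listed taxonomy falls under the specialty's umbrella;
    plastic surgery taxonomies count under the general surgery umbrella."""
    for code in npi_codes:
        if website_specialty == "Orthopedic surgery" and code.startswith(ORTHO_PREFIX):
            return True
        if website_specialty == "General surgery" and code.startswith(GENERAL_PREFIXES + (PLASTIC_PREFIX,)):
            return True
    return False

def subspecialty_match(website_subspecialty, npi_codes):
    """A subspecialty matches only if an NPI taxonomy maps to the website-listed subspecialty."""
    acceptable = SUBSPECIALTY_CODES.get(website_subspecialty, set())
    return any(code in acceptable for code in npi_codes)

# Example: a transplant surgeon who listed only the generic surgery taxonomy.
codes = ["208600000X"]
print(specialty_match("General surgery", codes))  # True  (umbrella match)
print(subspecialty_match("Transplant", codes))    # False (no transplant taxonomy listed)
```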
Results
From the department websites we recorded 13 general surgery subspecialties (including plastic surgery) and 10 orthopedic subspecialties after consolidating similar ones. We initially identified 365 providers across the six departments at the three institutions. After applying exclusion criteria, there were 295 surgeons for analysis (205 general surgery and 90 orthopedic surgery) (Fig.). There were 24 relevant taxonomies (8 orthopedic and 16 general/plastic) that would apply to these two groups. Of all surgeons, 294 (99%) selected a taxonomy consistent with their general specialty. Only one orthopedic surgeon listed a taxonomy of “Specialist - 174400000X.” When selecting an appropriate subspecialty taxonomy, 64% of all providers did this correctly, with significantly more general surgeons doing so compared to orthopedic surgeons (70% versus 51%, P = 0.002) (Table 3).
Fig. – Surgeon inclusion and exclusion flow diagram.
Table 3 –
Specialty and subspecialty match rates.
Specialty | Specialty match, n (%) | Subspecialty match, n (%) |
---|---|---|
Orthopedic surgery (n = 90) | 89 (99%) | 46 (51%)* |
General surgery (n = 205) | 205 (100%) | 143 (70%)* |
Total | 294 (99%) | 189 (64%) |
* Statistical significance with a P value = 0.002.
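As a point of clarification for the comparison in Table 3, the P value of 0.002 follows directly from a Pearson chi-square test on the 2 × 2 table of matched and unmatched surgeons by specialty. The snippet below is a minimal check using Python’s scipy, assuming (as the reported result suggests) that no continuity correction was applied; the paper’s own analysis was performed in Stata.

```python
# Reproduce the subspecialty match comparison in Table 3 (assumes no continuity
# correction, consistent with the reported P = 0.002; original analysis was in Stata).
from scipy.stats import chi2_contingency

#        matched  unmatched
table = [[46, 44],    # orthopedic surgery (n = 90)
         [143, 62]]   # general surgery    (n = 205)

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, dof = {dof}, P = {p:.3f}")  # P ≈ 0.002
```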
When specifically examining the specialties, there was a significantly different subspecialty match rate amongst general surgeons based on their subspecialization (P < 0.001). For example, pediatric surgeons selected a correct taxonomy 100% of the time, whereas transplant surgeons did so only 44% of the time. There was no significant difference in subspecialty match rates amongst the orthopedic surgeon group (P = 0.189): general orthopedic surgeons selected a correct taxonomy 100% of the time whereas adult reconstruction surgeons did so only 39% of the time, but this difference was not statistically significant (Table 4). Importantly, there were four general surgery subspecialties (Bariatric/Minimally Invasive Surgery, Endocrine, Endoscopic, and Hepatobiliary) and two orthopedic surgery subspecialties (Shoulder/Elbow and Oncology) that did not align with any of the available taxonomies. Even after removing these subspecialties from the previous analysis, the results did not change.
Table 4 –
Subspecialty match rate by department.
Department - Subspecialty | Unmatched n, (%) | Matched n, (%) | P Value |
---|---|---|---|
Orthopedic surgery (n = 90) | 0.189 | ||
Sports (n = 19) | 11 (58%) | 8 (42%) | |
Adult reconstruction (n = 18) | 11 (61%) | 7 (39%) | |
Pediatric orthopedic (n = 12) | 4 (33%) | 8 (67%) | |
Trauma (n = 11) | 6 (55%) | 5 (45%) | |
Hand (n = 9) | 3 (33%) | 6 (67%) | |
Spine (n = 8) | 3 (37%) | 5 (63%) | |
Foot & Ankle (n = 5) | 2 (40%) | 3 (60%) | |
General orthopedic (n = 4) | 0 (0%) | 4 (100%) | |
Oncology (n = 3)* | 3 (100%) | 0 (0%) | |
Shoulder/Elbow (n = 1)* | 1 (100%) | 0 (0%) | |
Total | 44 (49%) | 46 (51%) | |
General surgery (n = 205) | <0.001 | ||
Acute care surgery (trauma, critical care) (n = 37) | 13 (35%) | 24 (65%) | |
Plastic surgery (n = 33) | 2 (6%) | 31 (94%) | |
Oncology (breast) (n = 20) | 11 (55%) | 9 (45%) | |
Vascular (n = 19) | 2 (11%) | 17 (89%) | |
Transplant (n = 18) | 10 (56%) | 8 (44%) |
Cardiothoracic (thoracic, cardiovascular) (n = 17) | 2 (12%) | 15 (88%) | |
Pediatric surgery (n = 16) | 0 (0%) | 16 (100%) | |
Colorectal (n = 14) | 2 (14%) | 12 (86%) | |
General surgery (n = 12) | 1 (8%) | 11 (92%) | |
Bariatric/Minimally invasive (n = 12)* | 12 (100%) | 0 (0%) | |
Endocrine (n = 4)* | 4 (100%) | 0 (0%) | |
Hepatobiliary (n = 2)* | 2 (100%) | 0 (0%) |
Endoscopic (n = 1)* | 1 (100%) | 0 (0%) | |
Total | 62 (30%) | 143 (70%) |
* No relevant taxonomy option.
The institution did not affect the specialty match rate as only one provider did not select orthopedic surgery as a taxonomy. The rates of subspecialty match were also not significantly different across institutions as a whole or when comparing similar specialties across institutions. However, inside some individual departments at certain institutions there were some significant differences in subspecialty match rates between the different sections (Table 5).
Table 5 –
Specialty and subspecialty match rates by institution.
Health system | Specialty match, n (%)* | Subspecialty match, n (%)* |
---|---|---|
Henry Ford Health System (n = 98) | 97 (99%) | 58 (59%) |
Michigan Medicine/University of Michigan (n = 127) | 127 (100%) | 90 (71%) |
Wayne State University/Detroit Medical Center (n = 70) | 70 (100%) | 41 (59%) |
* NS, differences not statistically significant.
Discussion
In this study, we found that the NPI taxonomy is not a reliable indicator of an orthopedic or general surgeon’s subspecialty at these three institutions. While this data marker did delineate the overall general specialty of the surgeon, it did not accurately describe any further subspecialization. More than two-thirds of general surgeons selected a correct subspecialty, while just over half of orthopedic surgeons did the same. Among general surgeons, some subspecialties chose their correct taxonomy significantly more often than others, while in orthopedic surgery there was no difference across the subspecialties. The institution where the surgeons practiced did not affect this subspecialty match rate; however, among some individual departments at specific institutions there was internal variation in match rates across the sections.
It has been shown that provider expertise in the form of specialized training improves surgical outcomes in urological,19 gynecological,15 orthopedic,13 and general surgery.14,16,18 Additionally, surgeons with focused or specialized practices, such as pediatric12 or neurosurgical,17 also positively impact outcomes. The NPPES taxonomy should describe this specialization, in the form of both the surgeon’s practice focus and any additional previous training. However, we have shown that only roughly half of orthopedic surgeons and slightly more than two-thirds of general surgeons in these institutions have a taxonomy consistent with their practice. It is not known whether the general surgeons are more aware of this taxonomy, or whether this was a random difference based on the limited population sampled.
There was no difference in subspecialty match rates between the institutions as a whole. This could indicate that factors impacting this rate are equally distributed across larger organizations. However, this study identified significantly different subspecialty match rates inside some individual departments at certain institutions, possibly meaning that some sections or staff are more versed in or cognizant of the NPPES taxonomy than others. For example, in one institution all pediatric surgeons correctly identified as such on their subspecialty taxonomy, while only one-quarter of the transplant surgeons did so. It is unclear who might be responsible for entering this data element, and further investigation should explore the reasons for these intradepartmental differences.
The NPPES taxonomy was developed to monitor and verify the appropriateness of care, namely the prescribing of certain medications or performance of other treatments, not for conducting research. However, our findings should cause some concern in interpreting prior studies that utilized the NPI taxonomy, such as that by Acuña et al., who examined gender parity in orthopedic surgery, specifically subspecialization.7 Also, Lim et al. examined Medicare utilization in gynecologic oncology; while we did not study this specialty, it is unknown whether taxonomy accuracy would affect their findings as well.20 More importantly, as quality initiatives (TQIP, NSQIP, etc.) and health services research in general continue to grow and expand, we will need to be able to utilize data elements like these to adequately risk adjust outcomes so that we can inform subsequent recommendations. For example, TQIP records the NPI of the initial treating surgeon, and thus it would be prudent to know whether that provider has any relevant subspecialty training. Knowing the skillsets of other providers involved at the different stages of the patient’s care continuum would also be important so that we could better understand who is treating these patients and the subsequent outcomes.
There are limitations to this study. We only queried departments at three institutions, and it is entirely possible that this sample is not representative of organizations or providers in other regions or in other types of practice environments. We also chose to examine only general surgery and orthopedic surgery departments due to the authors’ familiarity with these specialties; these results might not translate to other specialties. These were all academic medical center departments, and this might not be representative of other types of institutions or departments (e.g., community, federal, rural, etc.). The data obtained from the departmental webpages might not reflect the surgeon’s true training, subspecialty, or practice, and this information might not be current. There are also a limited number of available taxonomies, and it is possible that some surgeons would have selected an appropriate one had it been available. While the initial selection of institutions was due to the authors’ familiarity with them, future studies to quantify the generalizability of these findings could examine the taxonomy match rate across a wider swath of the country and more types of practice setups. We also chose a 2-wk window to capture these data, as this would represent a snapshot of these providers at these institutions at that moment. A surgeon’s board certification status was not examined in this study, as there are substantially fewer options available in both specialties than are available in the taxonomies and we did not wish to further limit the potential matches.
The strengths of this paper include the use of data from academic departments at large institutions. Because of this, it is likely that most of the surgeons in these groups have completed fellowship training, and so they should have some aspect of practice subspecialization that could be identified. Also, choosing orthopedic, general, and plastic surgery is a useful exemplar for our question, as there are 23 potential taxonomy choices, while some other specialties do not offer a similar variety of taxonomy options (neurosurgery = 3; urology = 1). We were able to correctly find every practicing surgeon’s NPPES taxonomy record, and there were no instances of two surgeons having the same name to confuse the results. Additionally, we identified six subspecialties or practice areas that had no relevant taxonomy to choose. It should be noted that these subspecialties are accepted areas of practice and training inside the general specialties (e.g., Orthopedic Oncology, Bariatric Surgery, etc.), and yet these surgeons cannot delineate their true practice. We chose to include them in our analysis as they are bona fide subspecialties and are listed on the department websites. The reasoning behind this taxonomy gap should be explored, but more importantly it should be highlighted so that the intent of the taxonomy system stays relevant and accurate for use by the appropriate parties. The findings in this paper should also be used to inform physicians and administrative staff as a whole to help improve the accuracy of this data element.
Conclusions
In conclusion, the NPI taxonomy is not accurate for describing a surgeon’s subspecialty or actual practice at these three institutions. While no specific reason or trend for this finding was identified, the reasons behind it should be explored. Further, some subspecialties completely lacked an appropriate taxonomy, and thus there is no current method to accurately identify these providers. Caution should be taken when utilizing this variable to risk adjust for a surgeon’s specialization in the analysis of claims data, as our findings might apply to other groups of providers in the country. Further work is needed to discern the reasons behind these inaccuracies, and efforts should be considered to educate physicians, payers, and researchers on the importance of this issue.
Disclosure
Bryant W. Oliphant and this research are supported by the National Institute of Arthritis and Musculoskeletal and Skin Diseases of the National Institutes of Health under award number K23AR079565. He also receives salary support from Blue Cross Blue Shield of Michigan and Blue Care Network (a nonprofit mutual company) through their funding of the Michigan Trauma Quality Improvement Program. The company had no role in the study. There are no other conflicts to disclose. Naveen F. Sangji has no relevant financial interests or other conflicts of interest to disclose. Heather S. Dolman has no relevant financial interests or other conflicts of interest to disclose. John W. Scott has no relevant financial interests or other conflicts of interest to disclose. Mark R. Hemmila receives salary support from Blue Cross Blue Shield of Michigan and Blue Care Network (a nonprofit mutual company) through their funding of the Michigan Trauma Quality Improvement Program. The company had no role in the study. He also receives grant support from the Michigan Department of Health and Human Services. There are no other conflicts to disclose.
Footnotes
Meeting Presentation
This study was presented at the 17th annual Academic Surgical Congress, February 1–3, 2022, in Orlando, FL.
REFERENCES
- 1. NPI: What You Need to Know. Medicare Learning Network; 2021. MLN909434.
- 2. Find Your Taxonomy Code. https://www.cms.gov/Medicare/Provider-Enrollment-and-Certification/Find-Your-Taxonomy-Code. Accessed February 15, 2022.
- 3. Lynch G, Nieto K, Puthenveettil S, et al. Attrition rates in neurosurgery residency: analysis of 1361 consecutive residents matched from 1990 to 1999. J Neurosurg. 2015;122:240–249.
- 4. Coombs LA, Max W, Kolevska T, Tonner C, Stephens C. Nurse practitioners and physician assistants: an underestimated workforce for older adults with cancer. J Am Geriatr Soc. 2019;67:1489–1494.
- 5. Martin A, Chen ML, Chatterjee A, et al. Specialty classifications of physicians who provide neurocritical care in the United States. Neurocrit Care. 2019;30:177–184.
- 6. Shih Y-CT, Kim B, Halpern MT. Medicare Learning Network. JCO Oncol Pract. 2021;17:e1–e10.
- 7. Acuña AJ, Sato EH, Jella TK, et al. How long will it take to reach gender parity in orthopaedic surgery in the United States? An analysis of the national provider identifier registry. Clin Orthop Relat Res. 2021;479:1179–1189.
- 8. Schwaitzberg SD, Scott DJ, Jones DB, et al. Threefold increased bile duct injury rate is associated with less surgeon experience in an insurance claims database. Surg Endosc. 2014;28:3068–3073.
- 9. Carrubba AR, Osagiede O, Spaulding AC, et al. Variability between individual surgeons in route of hysterectomy for patients with endometrial cancer in Florida. Surg Oncol. 2019;31:55–60.
- 10. Dubuque EM, Yingling ME, Allday RA. The misclassification of behavior analysts: how national provider identifiers (NPIs) fail to adequately capture the scope of the field. Behav Anal Pract. 2021;14:214–229.
- 11. Newgard C, Malveau S, Staudenmayer K, et al. Evaluating the use of existing data sources, probabilistic linkage, and multiple imputation to build population-based injury databases across phases of trauma care. Acad Emerg Med. 2012;19:469–480.
- 12. Rhee D, Papandria D, Yang J, et al. Comparison of pediatric surgical outcomes by the surgeon’s degree of specialization in children. J Pediatr Surg. 2013;48:1657–1663.
- 13. Mabry SE, Cichos KH, McMurtrie JT, Pearson JM, McGwin G, Ghanem ES. Does surgeon fellowship training influence outcomes in hemiarthroplasty for femoral neck fracture? J Arthroplast. 2019;34:1980–1986.
- 14. Kulaylat AS, Pappou E, Philp MM, et al. Emergent colon resections. Dis Colon Rectum. 2019;62:79–87.
- 15. Clark NV, Gujral HS, Wright KN. Impact of a fellowship-trained minimally invasive gynecologic surgeon on patient outcomes. JSLS. 2017;21:e2017.00037.
- 16. Cataneo JL, Veilleux E, Lutfi R. Impact of fellowship training on surgical outcomes after appendectomies: a retrospective cohort study. Surg Endosc. 2021;35:4581–4584.
- 17. McCutcheon BA, Hirshman BR, Gabel BC, et al. Impact of neurosurgeon specialization on patient outcomes for intracranial and spinal surgery: a retrospective analysis of the Nationwide Inpatient Sample 1998–2009. J Neurosurg. 2018;128:1578–1588.
- 18. Johnston MJ, Singh P, Pucher PH, et al. Systematic review with meta-analysis of the impact of surgical fellowship training on patient outcomes. Br J Surg. 2015;102:1156–1166.
- 19. Nayak JG, Drachenberg DE, Mau E, et al. The impact of fellowship training on pathological outcomes following radical prostatectomy: a population based analysis. BMC Urol. 2014;14:1–6.
- 20. Lim SL, Puechl AM, Broadwater G, et al. Gender variation in Medicare utilization and payments in gynecologic oncology. Gynecol Oncol. 2019;154:602–607.