Abstract
Introduction
There is a lack of validated quality metrics to evaluate the care of patients receiving surgery for renal cell carcinoma (RCC). To address this, the Kidney Cancer Research Network of Canada defined a list of quality indicators (QI) to assess hospital-level performance. We have case-mix adjusted these QIs to benchmark RCC surgical care at Canadian academic centres.
Methods
The Canadian Kidney Cancer information system (CKCis) was used to measure six QIs: laparoscopic approach proportion (LA), partial nephrectomy proportion (PN), partial nephrectomy in patients with chronic kidney disease (CKDPN), positive margin rate (PMR), partial nephrectomy complication rate (PNCx), and warm ischemia time (WIT). To benchmark performance, indirect standardization (observed-to-expected ratio) methodology was employed using multivariate regression models.
Results
Multivariate models for LA, PN, and CKDPN demonstrated good discrimination and were used for benchmarking. National averages of 74% (70–78%), 73% (70–75%), and 70% (67–74%) for the LA, PN, and CKDPN QIs, respectively, were determined and used to benchmark individual hospital performance. Overall, three (23%), two (15%), and two (15%) hospitals performed below expected for LA, PN, and CKDPN, respectively. Hospital identity was an independent predictor of LA, PN, and CKDPN (p<0.001).
Conclusions
Significant variability exists between CKCis hospitals for three RCC surgical QIs. Using the CKCis infrastructure may provide a framework for institution-level audit and feedback for quality improvement. Greater CKCis capture rates and further data supporting the construct validity of these QIs are required to extend the use of this dataset to real-world quality initiatives.
Introduction
Evaluating quality of care is increasingly important, as the Canadian healthcare system evolves towards a more patient-centred model with emphasis on healthcare provider transparency and accountability. The ability to determine the quality of healthcare being delivered is central to this evolution and has implications for educational initiatives, distribution of funds, and the regionalization of care. For assessments of healthcare quality to assist policymakers in making informed decisions, strict definitions and validated metrics must be developed. According to the Donabedian model of quality assessment, these metrics should encompass various structural, process, and outcome performance measures of patient care.1 Such metrics, or quality indicators (QI), have been successfully developed and employed to benchmark hospital-level performance for surgical care.2,3
The development of validated QIs for urological oncology has lagged behind other tumour sites, particularly breast and colorectal, where the majority of this work has been conducted.4–6 To address this knowledge gap in renal cell carcinoma (RCC), the Kidney Cancer Research Network of Canada (KCRNC) developed a comprehensive list of QIs through a modified Delphi method, spanning the spectrum of RCC care from localized to metastatic disease.7 Herein, we use the KCRNC QIs to benchmark hospital-level quality of care for localized RCC surgery at Canadian hospitals participating in the Canadian Kidney Cancer information system (CKCis), a national access-restricted database of RCC patients.
Methods
Data and population
We performed a cohort study using patient data entered in the CKCis. CKCis contains prospective data collected from January 2011 on patients with RCC from 16 tertiary referral Canadian hospitals in six provinces. Patients included in the database received treatment from 1988 onwards, with all data prior to 2011 being collected retrospectively. Data from patients with any stage of tumour and any form of treatment are entered, with vital status for all patients with localized disease being updated on an annual basis. Consent was obtained prior to data entry into CKCis for all patients. This study was approved by the University Health Network Research Ethics Board. All participating hospitals received review board approval prior to contributing to the CKCis database.
QIs
We identified QIs for the surgical management of localized RCC using a modified Delphi method approach.7 These included the proportion of patients: 1) undergoing laparoscopic radical nephrectomy for T1–2 tumours (LA); 2) undergoing partial nephrectomy for T1 tumours (partial nephrectomy [PN]); 3) with risk factors for chronic kidney disease (CKD) undergoing partial nephrectomy (CKDPN) for T1 tumours, including those with hypertension, diabetes, or pre-existing CKD; 4) with a positive surgical margin (PMR) after partial nephrectomy for T1 tumours; 5) with a surgical complication following partial nephrectomy for T1 tumours (PNCx); and 6) the mean warm ischemia time (WIT) for patients undergoing partial nephrectomy for T1 tumours. For all QIs, we considered only patients who were metastasis-free at the time of surgery.
Statistical analysis
For QI benchmarking, our analysis was restricted to data from 2010 onward to measure contemporary trends. For fitting the case-mix adjustment models, data from 2008–2015 were included to increase the overall number of patients and allow for more stable estimates. The univariate association between each QI and each case-mix variable (gender, age at nephrectomy, calendar year, surgical approach where relevant, pathological T-stage, lymph node involvement, tumour grade, tumour histology, size of the largest tumour, number of tumours found, multifocality, previous kidney cancer, family history of kidney cancer, smoking status, number of comorbid conditions, hypertension, diabetes, end-stage renal disease [ESRD], and body mass index [BMI]) was evaluated through logistic regression (LA, PN, CKDPN, PMR, PNCx) or linear regression (WIT) models. Case-mix variables with a likelihood ratio test p value significant at the 5% level were selected for the multivariate risk adjustment logistic regression model (linear model for WIT), with the exception of gender, age, and calendar year, which were included in all models. The discrimination and calibration of the logistic models were examined through receiver operating characteristic (ROC) curves, areas under the curve (AUC), and calibration plots.
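To illustrate the screening step: for a single binary case-mix variable, the likelihood ratio test comparing an intercept-only logistic model against one including that variable reduces to a G-test on the 2×2 table of variable by outcome. The following is a minimal pure-Python sketch on hypothetical data, not the authors' analysis code:

```python
from math import log

def loglik(successes, total):
    """Binomial log-likelihood at the MLE p = successes/total."""
    p = successes / total
    ll = 0.0
    if successes:
        ll += successes * log(p)
    if total - successes:
        ll += (total - successes) * log(1 - p)
    return ll

def lr_test_binary(y, x, threshold=3.841):
    """Likelihood ratio statistic for adding binary covariate x to an
    intercept-only logistic model of binary outcome y. The default
    threshold is the 5% chi-square critical value with 1 df."""
    n, s = len(y), sum(y)
    ll_null = loglik(s, n)                  # intercept-only model
    ll_full = 0.0
    for level in (0, 1):                    # group-specific MLEs
        ys = [yi for yi, xi in zip(y, x) if xi == level]
        ll_full += loglik(sum(ys), len(ys))
    lr = 2 * (ll_full - ll_null)
    return lr, lr > threshold               # (statistic, selected?)

# Toy example: the covariate is strongly associated with the outcome,
# so it would be selected for the multivariate model
y = [1] * 30 + [0] * 10 + [1] * 10 + [0] * 30
x = [1] * 40 + [0] * 40
stat, selected = lr_test_binary(y, x)       # stat ≈ 20.93, selected True
```

In the actual analysis a full maximum-likelihood fit (handling continuous and multi-level covariates) would be used; this sketch only shows the selection logic.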
Based on the case-mix adjusted models, for each QI at each hospital we calculated an observed-to-expected (O/E) ratio, where E refers to the expected outcome at the national average level of care, adjusting for case mix. We included 13 hospitals that had at least one patient fulfilling the inclusion criteria for all six QIs. To incorporate model uncertainty, we calculated 95% confidence intervals (CIs) for the O/E ratios using the bootstrap method.8 Based on the CIs, each facility was classified as a lower outlier (CI entirely below one), upper outlier (CI entirely above one), or non-outlier (CI overlapping one). In addition to this relative performance measure, to represent the absolute level of performance we multiplied the O/E ratio for each hospital by the national average performance to obtain the case-mix adjusted QI proportion.
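The O/E calculation and outlier classification can be sketched as follows. This is an illustration under stated assumptions (a percentile bootstrap over patients, hypothetical data), not the authors' code:

```python
import random

def oe_ratio(observed, expected):
    """Observed events divided by model-expected events for one hospital."""
    return sum(observed) / sum(expected)

def bootstrap_oe_ci(observed, expected, n_boot=2000, alpha=0.05, seed=42):
    """Percentile bootstrap CI for the O/E ratio, resampling patients."""
    rng = random.Random(seed)
    n = len(observed)
    ratios = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        ratios.append(sum(observed[i] for i in idx) /
                      sum(expected[i] for i in idx))
    ratios.sort()
    return (ratios[int(n_boot * alpha / 2)],
            ratios[int(n_boot * (1 - alpha / 2)) - 1])

def classify(ci_low, ci_high):
    """Outlier status relative to the national average (O/E = 1)."""
    if ci_high < 1:
        return "lower outlier"
    if ci_low > 1:
        return "upper outlier"
    return "non-outlier"

# Hypothetical hospital: 40 of 50 patients received a PN where the
# national model expected a probability of 0.5 each -> O/E = 1.6
obs = [1] * 40 + [0] * 10
exp = [0.5] * 50
lo, hi = bootstrap_oe_ci(obs, exp)
status = classify(lo, hi)                   # "upper outlier"
```

In practice each patient's expected value would come from the fitted case-mix model rather than a constant probability.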
Results
Case-mix adjusted QI models
The number of patients included in the analysis of each QI and their corresponding baseline comorbidity, tumour and treatment characteristics are summarized in Table 1. Thirteen hospitals were included in each QI analysis. The developed case-mix adjusted models performed with AUC values of 0.734, 0.846, 0.848, 0.638, and 0.609 for LA, PN, CKDPN, PNCx, and PMR, respectively (Fig. 1). The WIT model had an R2 value of 0.11. The predictive models for PNCx and PMR showed poor discrimination, which we suspect is in part due to reporting issues in the database; two sites had reported no complications and three sites had reported no positive margins. Thus, we did not include these two QIs in further analyses. Furthermore, as the site-specific numbers for WIT were small, we also omitted this QI from subsequent analyses. Final patient, tumour, and treatment characteristics included in each QI case-mix adjusted model are summarized in Fig. 2. In the multivariate model, tumour size, number of comorbidities, and the presence of ESRD were significant predictors of LA, with tumour stage nearing significance. Tumour size, multifocality, and the presence of ESRD were significant predictors of PN and CKDPN. Year of treatment (2013 vs. 2008), as well as histology (papillary vs. clear-cell) were also found to predict PN. We also fitted the same models adding hospital identity, which was an independent predictor of LA, PN, and CKDPN (AUC values including hospital identity vs. without; PN: 0.869 vs. 0.846; LA: 0.859 vs. 0.734; CKDPN: 0.87 vs. 0.848; p≤0.001 in each case).
Table 1.
Study cohort characteristics
| | PN | LA | CKDPN | PMR | PNCx | WIT |
|---|---|---|---|---|---|---|
| | Total (%) | Total (%) | Total (%) | Total (%) | Total (%) | Total (%) |
| n* | 1323 | 513 | 683 | 942 | 962 | 301 |
| Age | ||||||
| <50 | 290 (22) | 118 (23) | 73 (11) | 212 (23) | 217 (23) | 71 (24) |
| 50–60 | 377 (28) | 132 (26) | 181 (27) | 283 (30) | 289 (30) | 93 (31) |
| 60–70 | 393 (30) | 128 (25) | 254 (37) | 294 (31) | 298 (31) | 82 (27) |
| 70–80 | 223 (17) | 106 (20) | 148 (22) | 135 (14) | 140 (15) | 50 (17) |
| >80 | 40 (3) | 29 (6) | 27 (4) | 18 (2) | 18 (2) | 5 (2) |
| Gender | ||||||
| Male | 824 (62) | 318 (62) | 436 (64) | 587 (62) | 603 (63) | 180 (60) |
| Comorbidities | ||||||
| Mean no. (range) | 2.76 (0–17) | 2.9 (0–17) | 4.1 (1–17) | 2.6 (0–13) | 2.6 (0–13) | 2.9 (0–12) |
| ESRD | 20 (2) | 18 (4) | 18 (3) | 3 (<1) | 3 (<1) | 0 (0) |
| DM | 237 (18) | 92 (18) | 237 (35) | 164 (17) | 165 (17) | 44 (15) |
| HTN | 626 (47) | 262 (51) | 626 (92) | 423 (45) | 432 (45) | 136 (45) |
| Smoker | 193 (15) | 66 (13) | 96 (14) | 139 (15) | 144 (15) | 53 (18) |
| Prior kidney cancer | 23 (2) | 5 (1) | 14 (2) | 18 (2) | 19 (2) | 6 (2) |
| Tumour stage | ||||||
| T1 (not specified) | 48 (4) | 12 (2) | 14 (2) | 35 (4) | 36 (4) | 8 (3) |
| 1A | 846 (64) | 148 (29) | 424 (62) | 681 (72) | 698 (73) | 242 (80) |
| 1B | 429 (32) | 201 (39) | 245 (36) | 226 (24) | 228 (24) | 51 (17) |
| T2 (not specified) | N/A | 46 (9) | N/A | N/A | N/A | N/A |
| 2A | N/A | 66 (13) | N/A | N/A | N/A | N/A |
| 2B | N/A | 40 (8) | N/A | N/A | N/A | N/A |
| Lymph node | ||||||
| Positive | 3 (<1) | 8 (2) | 1 (<1) | 1 (<1) | 1 (<1) | 0 (0) |
| Histology | ||||||
| Clear-cell | 899 (68) | 349 (68) | 480 (70) | 624 (66) | 637 (66) | 197 (65) |
| Papillary | 231 (17) | 75 (15) | 111 (16) | 177 (19) | 182 (19) | 64 (21) |
| Chromophobe | 86 (7) | 55 (11) | 37 (5) | 54 (6) | 55 (6) | 19 (6) |
| Other | 107 (8) | 34 (7) | 55 (8) | 87 (9) | 88 (9) | 21 (7) |
| Mean tumour size, cm (range) | 3.5 (0.6–7.0) | 6.0 (0.8–24.0) | 3.7 (0.6–7.0) | 3.2 (0.6–7.0) | 3.2 (0.6–7.0) | 3.0 (0.8–7.0) |
| Tumour grade | ||||||
| G1/2 | 821 (62) | 266 (52) | 431 (63) | 603 (64) | 612 (64) | 196 (65) |
| G3/4 | 369 (28) | 189 (37) | 196 (29) | 245 (26) | 253 (26) | 70 (23) |
| GX | 133 (10) | 58 (11) | 56 (8) | 94 (10) | 97 (10) | 35 (12) |
| No. of tumours removed | ||||||
| 1 | 1240 (94) | 473 (92) | 633 (93) | 897 (95) | 911 (95) | 287 (95) |
| 2 | 57 (4) | 32 (6) | 33 (5) | 29 (3) | 31 (3) | 11 (4) |
| ≥3 | 26 (2) | 8 (2) | 17 (3) | 16 (2) | 20 (2) | 3 (1) |
| Era | ||||||
| 2010 | 164 (12) | 74 (14) | 85 (12) | 114 (12) | 116 (12) | 44 (15) |
| 2011 | 248 (19) | 106 (21) | 134 (20) | 170 (18) | 173 (18) | 60 (20) |
| 2012 | 311 (24) | 123 (24) | 172 (25) | 216 (23) | 221 (23) | 83 (28) |
| 2013 | 294 (22) | 114 (22) | 146 (21) | 214 (23) | 218 (23) | 66 (22) |
| 2014–2015# | 306 (23) | 96 (19) | 146 (21) | 228 (24) | 234 (25) | 48 (16) |
| No. hospitals | 13 | 13 | 13 | 13 | 13 | 13 |
*Denotes the number of patients included in the analysis for the indicated quality indicator;
#Data for 2015 are incomplete and shown together with 2014.
CKDPN: partial nephrectomy in patients with chronic kidney disease; DM: diabetes mellitus; ESRD: end-stage renal disease; HTN: hypertension; LA: laparoscopic approach; PMR: positive margin rate; PN: partial nephrectomy; PNCx: partial nephrectomy complication rate; WIT: warm ischemia time.
Fig. 1.

Case-mix adjusted quality indicator (QI) model discrimination. Receiver operating characteristic curves (ROC) and associated area under the curve (AUC) values for laparoscopic approach (LA), partial nephrectomy (PN), partial nephrectomy in patients with chronic kidney disease (CKDPN), partial nephrectomy complication rate (PNCx), and positive margin rate (PMR) QIs. Note, the warm ischemia time (WIT) QI is not included, as this is a continuous variable.
Fig. 2.

Multivariable quality indicator (QI) model case-mix variables. Patient, tumour, and treatment-related variables included in multivariable case-mix models for the laparoscopic approach (LA), partial nephrectomy (PN), partial nephrectomy in patients with chronic kidney disease (CKDPN) QIs with calculated odds ratios (OR) and 95% confidence intervals (CI) reported.
Benchmarking hospital quality
CKCis hospitals were benchmarked for quality of RCC surgical care using the LA, PN, and CKDPN case-mix adjusted QI models. For each hospital, the observed value of a given QI was divided by the model-predicted expected value to generate an O/E ratio. Hospitals with an O/E 95% CI entirely above 1 were considered high outliers (better than expected), whereas those with an O/E 95% CI entirely below 1 were considered low outliers (worse than expected) (Fig. 3). For LA, PN, and CKDPN, a total of four, four, and two hospitals were identified as high outliers, whereas three, two, and two were low outliers, respectively.
Fig. 3.

Identification of outlier hospitals through observed-to-expected methodology. Caterpillar plots displaying O/E ratios with 95% confidence intervals (CI) for the laparoscopic approach (LA), partial nephrectomy (PN), partial nephrectomy in patients with chronic kidney disease (CKDPN) quality indicators (QIs) across Canadian Kidney Cancer information system (CKCis) hospitals. Hospitals are classified as a lower outlier (CI entirely below one), upper outlier (CI entirely above one), or non-outlier (CI overlapping one).
For more clinically intuitive comparisons of hospital performance, we multiplied the O/E ratio derived above by the national average performance to obtain case-mix adjusted proportions for each hospital for a given QI. These results are displayed in Fig. 4 and highlight individual hospital performance in relation to our data-established benchmark; that is, the national average performance. Overall, across CKCis hospitals the average proportions (with 95% CI) for LA, PN, and CKDPN observed were 74% (70–78%), 73% (70–75%), and 70% (67–74%), respectively.
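The transformation is a single multiplication. As a sketch, with a hypothetical hospital whose PN O/E ratio is 1.10 and the 73% national average reported above:

```python
def adjusted_proportion(oe_ratio, national_average):
    """Case-mix adjusted QI proportion for one hospital: the hospital's
    O/E ratio scaled by the national average performance."""
    return oe_ratio * national_average

# Hypothetical hospital: O/E = 1.10 for PN against a 0.73 national
# average -> case-mix adjusted PN proportion of about 80%
pn_adjusted = adjusted_proportion(1.10, 0.73)
```

A hospital performing exactly at the national average (O/E = 1) simply recovers the national average proportion.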
Fig. 4.

Benchmarking Canadian Kidney Cancer information system (CKCis) hospital quality of care performance. Hospital O/E ratios (depicted in Fig. 3) for the laparoscopic approach (LA), partial nephrectomy (PN), partial nephrectomy in patients with chronic kidney disease (CKDPN) were transformed into case-mix adjusted proportions through multiplication by the national average performance and are displayed as a caterpillar plot. The overall national average performance (with 95% confidence interval [CI]) is depicted by the vertical grey line for each quality indicator (QI).
Discussion
Significant effort is being focused on defining strict measures by which hospitals and healthcare practitioners can be evaluated to assess the quality of care they provide. Importantly, such QIs must account for provider differences in case-mix in order to benchmark performance in an objective and accurate manner.9 While hospital enrolment in quality initiatives, such as the American College of Surgeons National Surgical Quality Improvement Program (ACS-NSQIP), allows for this, these programs fail to capture the specific care processes involved in the management of RCC, given their strict focus on outcome-based QIs, such as in-hospital mortality.10 To address this, the KCRNC developed a panel of process-focused QIs that span the spectrum of RCC care.7 These process QIs define metrics that directly capture the real-world care a patient receives and may be more actionable than outcome- or structure-based QIs. We have developed case-mix adjusted statistical models to benchmark hospital performance using the six KCRNC-developed QIs for localized RCC.
Our analysis used the indirect standardization (O/E ratio) methodology to benchmark hospital performance against the nationwide average performance.3 This differs from other published national QI benchmarking strategies for localized RCC in which pre-set target values were established by expert opinion and unadjusted QI values are compared against this reference.11 Indeed, data from the National Swedish Kidney Cancer Registry demonstrated hospitals significantly underperformed for PN, with values of 22% and 56% in 2005 and 2013, respectively, against a target of >80%.11 While such an approach affords benchmarking, variability in case-mix between hospitals, including tumour-, patient-, and treatment-related factors, was not accounted for, resulting in biased comparisons of hospital performance. To circumvent these issues, we developed case-mix adjusted models for three QIs (PN, LA, CKDPN), which all displayed good discrimination and allowed us to benchmark hospital performance with less bias. Moreover, our approach further identified those QIs that could not be modeled with good discrimination and are not feasible for use as benchmarking tools using CKCis data.
We observed that hospital identity was an independent predictor of PN, LA, and CKDPN, highlighting the presence of interhospital variation across these QIs and the ability to capture differences in care delivery. This is particularly important for low-stage localized RCC, where event rates for many proposed outcome-based QIs are low, preventing them from capturing interhospital variability, as evidenced by previous reports investigating thromboembolic events, readmission rates, and in-hospital mortality following radical nephrectomy.12
In addition to benchmarking hospital performance, ideal QIs must also associate with other known structural, process, or outcome measures of quality in order to demonstrate construct validity.1,4 In the CKCis database, robust statistical analysis of these associations is challenging due to the small number of hospitals included. As such, future studies employing large population databases will be required to determine whether poor performance on these case-mix adjusted QIs is associated with inferior patient outcomes, including postoperative complications and mortality.
This work has important limitations. First, as our expected rates of a quality outcome are based on the individual patient-level data for a given hospital, incomplete capture may limit the generalizability of the results. Not all patients are included for each institution, and there may be differences in the care provided to patients included in the database compared with those who were not. Improving capture rates within CKCis will allow a more accurate assessment of institutional variability, and as the database is updated and improved, assessments of quality performance should be updated and reported. Second, as CKCis includes mostly academic-affiliated hospitals, our results may not be generalizable to the community setting, where a large number of RCC surgical cases are performed. Third, our data are in part retrospective. Fourth, while we were able to benchmark hospital performance, quality-outcome associations could not be reliably assessed due to the limited number of outcome events (i.e., disease progression and death). Lastly, our case-mix adjusted models did not include certain tumour variables associated with surgical complexity that are not captured in CKCis, such as tumour depth, endophytic growth, or collecting system involvement; these may represent unmeasured confounders.13
Conclusion
We have developed case-mix adjusted models of LA, PN, and CKDPN that can be used to benchmark localized RCC quality of care delivery on a hospital level. CKCis hospitals display significant variability in care, as determined by the LA, PN, and CKDPN QIs, with a minority of hospitals performing worse than expected. Greater CKCis capture rates and further data to support the construct validity of PN, LA, and CKDPN are required to extend the use of this dataset to real-world quality initiatives.
Footnotes
Competing interests: Dr. Lavallée has been an advisor for Ferring and Sanofi; and received a grant from Sanofi. Dr. Wood has received financial compensation from Astellas, BMS, Pfizer and Novartis. Dr. Jewett has been an advisor for and received honoraria from Pfizer; and holds shares in Theralase Therapeutics. Dr. Kapoor has been an advisor and speaker for, and has participated in clinical trials supported by Amgen, Astellas, GSK, Janssen, Novartis, Pfizer, and Sanofi. Dr. Tanguay has been an advisor for Pfizer and has received a travel grant from Sanofi. Dr. Moore has been an advisor for Janssen and a speaker for GSK. Dr. Rendon has been an advisor and speaker for Amgen, Astellas, Ferring, and Janssen. Dr. Pouliot has been an advisor for Amgen, Astellas, and Pfizer; a speaker for Sanofi; and has received financial compensation/grants/honoraria from Amgen, Astellas, Astra Zeneca, Janssen, Pfizer, and Sanofi. Dr. Black has been an advisor for Abbvie, Amgen, Astellas, Biocancell, Cubist, Janssen, Novartis, and Sitka; a speaker for Abbvie, Janssen, Ferring, Novartis, and Red Leaf Medical; has received grants/honoraria from Pendopharm; has participated in clinical trials supported by Amgen, Astellas, Ferring, Janssen, and Roche; and has received research funding from GenomeDx, iProgen, Lilly, and New B Innovation. Dr. Kawakami has received travel grants from Baxter and Pentopharm. Dr. Drachenberg has been an advisor for Astellas and Janssen; a speaker for Actavis (formerly Watson) and Amgen; and has participated in clinical trials run by Cancer Care Manitoba (CCMB). Dr. Finelli has been a consultant for Abbvie, Amgen, Astellas, Bayer, Janssen, Roche, and Sanofi. The remaining authors report no competing personal or financial interests.
This paper has been peer-reviewed.
References
- 1. Donabedian A. The quality of care. How can it be assessed? JAMA. 1988;260:1743–8. https://doi.org/10.1001/jama.1988.03410120089033.
- 2. Birkmeyer JD, Dimick JB, Birkmeyer N. Measuring the quality of surgical care — structure, process, or outcomes? J Am Coll Surg. 2004;198:626–32. https://doi.org/10.1016/j.jamcollsurg.2003.11.017.
- 3. Russell MC, You YN, Hu CY, et al. A novel risk-adjusted nomogram for rectal cancer surgery outcomes. JAMA Surg. 2013;148:769–78. https://doi.org/10.1001/jamasurg.2013.2136.
- 4. Massarweh NN, Hu CY, You YN, et al. Risk-adjusted pathological margin positivity rate as a quality indicator in rectal cancer surgery. J Clin Oncol. 2014;32:2967–74. https://doi.org/10.1200/JCO.2014.55.5334.
- 5. Gooiker GA, Kolfschoten NE, Bastiaannet E, et al. Evaluating the validity of quality indicators for colorectal cancer care. J Surg Oncol. 2013;108:465–71. https://doi.org/10.1002/jso.23420.
- 6. Hand R, Sener S, Imperato J, et al. Hospital variables associated with quality of care for breast cancer patients. JAMA. 1991;266:3429–32. https://doi.org/10.1001/jama.1991.03470240051031.
- 7. Wood L, Bjarnason GA, Black PC, et al. Using the Delphi technique to improve clinical outcomes through the development of quality indicators in renal cell carcinoma. J Oncol Pract. 2013;9:e262–7. https://doi.org/10.1200/JOP.2012.000870.
- 8. Faris PD, Ghali WA, Brant R. Bias in estimates of confidence intervals for health outcome report cards. J Clin Epidemiol. 2003;56:553–8. https://doi.org/10.1016/S0895-4356(03)00048-9.
- 9. Henneman D, van Bommel AC, Snijders A, et al. Ranking and rankability of hospital postoperative mortality rates in colorectal cancer surgery. Ann Surg. 2014;259:844–9. https://doi.org/10.1097/SLA.0000000000000561.
- 10. Etzioni DA, Wasif N, Dueck AC, et al. Association of hospital participation in a surgical outcomes monitoring program with inpatient complications and mortality. JAMA. 2015;313:505–11. https://doi.org/10.1001/jama.2015.90.
- 11. Thorstenson A, Harmenberg U, Lindblad P, et al. Impact of quality indicators on adherence to national and European guidelines for renal cell carcinoma. Scand J Urol. 2016;50:2–8. https://doi.org/10.3109/21681805.2015.1059882.
- 12. Gore JL, Wright JL, Daratha KB, et al. Hospital-level variation in the quality of urological cancer surgery. Cancer. 2012;118:987–96. https://doi.org/10.1002/cncr.26373.
- 13. Kutikov A, Uzzo RG. The R.E.N.A.L. nephrometry score: A comprehensive standardized system for quantitating renal tumour size, location, and depth. J Urol. 2009;182:844–53. https://doi.org/10.1016/j.juro.2009.05.035.
