Canadian Urological Association Journal
Editorial. 2017 Aug;11(8):223–224. doi: 10.5489/cuaj.4803

Benchmarking our urological care: It’s just the beginning

D. Robert Siemens,1 Christopher M. Booth2
PMCID: PMC5542823  PMID: 28798816

In this issue of CUAJ, readers will come across an interesting exercise in performance measurement focused on the surgical management of kidney cancer. Lawson et al1 report on a benchmarking project within the Canadian Kidney Cancer information system (CKCis), a mostly prospective database of patients managed at 16 different tertiary care hospitals across Canada. The clinicopathological variables collected within the dataset allow significant depth and breadth in this quality review of surgical care. The authors assessed each hospital’s performance on a number of previously defined quality indicators: processes of care, such as use of partial nephrectomy for localized tumours, as well as specific outcomes, including warm ischemia time and complications. To create benchmarks against which “performance” could be compared, the authors used the rich set of variables in the dataset to control for case mix and derive indirectly standardized indicator rates (conceptually, each centre’s observed performance compared with the performance expected given the average across all centres). Using this methodology, they demonstrate some variability in observed-to-expected performance among participating hospitals, including in the use of nephron-sparing techniques in patients at higher risk of future renal impairment.
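To make the idea of indirect standardization concrete, the short sketch below works through a toy calculation of observed-to-expected (O/E) ratios. The hospitals, risk strata, and counts are invented for illustration and are not drawn from CKCis; the actual analysis adjusts for case mix using far richer clinicopathological variables than a single risk stratum.

```python
# Toy indirect standardization: observed-to-expected (O/E) ratios per hospital.
# Hospitals, risk strata, and counts below are invented for illustration only.
from collections import defaultdict

# (hospital, risk_stratum) -> (eligible patients, patients receiving partial nephrectomy)
data = {
    ("A", "low"): (40, 34), ("A", "high"): (10, 3),
    ("B", "low"): (20, 15), ("B", "high"): (30, 12),
    ("C", "low"): (30, 22), ("C", "high"): (20, 5),
}

# Step 1: pooled ("expected") rate within each case-mix stratum, across all centres.
stratum_totals = defaultdict(lambda: [0, 0])  # stratum -> [patients, events]
for (_, stratum), (n, events) in data.items():
    stratum_totals[stratum][0] += n
    stratum_totals[stratum][1] += events
stratum_rate = {s: ev / n for s, (n, ev) in stratum_totals.items()}

# Step 2: for each hospital, compare observed events with the events expected
# if its own case mix had been treated at the pooled stratum rates.
observed = defaultdict(int)
expected = defaultdict(float)
for (hosp, stratum), (n, events) in data.items():
    observed[hosp] += events
    expected[hosp] += n * stratum_rate[stratum]

for hosp in sorted(observed):
    oe = observed[hosp] / expected[hosp]
    print(f"Hospital {hosp}: O/E = {oe:.2f}")  # >1 above, <1 below the case-mix-adjusted average
```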

As evidenced by the recurring CUAJ series, “Business of Urology,” there is much we in clinical medicine have borrowed (and even more we still need to learn) from the corporate world. The concept of benchmarking — a now well-entrenched management approach for implementing best practices at the best cost — is a perfect example. Although conceptualized in the early 1930s, benchmarking was first implemented by Xerox in the competitive photocopier industry at the end of the 1970s.2 At the time, the company found itself increasingly vulnerable to U.S. and Japanese competitors, resulting in a significant drop in market share. The company implemented a program entitled “Leadership through Quality” and specifically analyzed (or benchmarked) the features, quality, and costs of its products against those of its competitors. It was immediately apparent that its manufacturing costs were considerably higher: its competitors’ production costs were nearly 50% of those at Xerox. Over the years, this sort of competitive benchmarking has propagated throughout the industrial sector and, importantly, its methodologies have undergone multiple changes that are informative for adoption in healthcare.

Although first envisioned as a simple process of performance comparison between competing organizations, benchmarking has morphed into a method for continuous quality improvement.2 Benchmarking is a dynamic process: delineating today’s best practices and then proposing optimal performance for the future. Benchmarking in healthcare is a comparatively recent innovation, translated from industry in the late 1990s to aid in the measurement of health system quality, generally as a management tool. Despite the compelling evidence referenced by most of our practice guidelines, it has been well-described that we deliver variable and often guideline-discordant care. In response, clinical practice benchmarking represents a process by which individual groups of practitioners, hospitals, or regional authorities can compare and share best practice and facilitate continuous quality improvement. However, as one could imagine, the creation and implementation of benchmarks can be fraught with challenges, given the intrinsic complexity of defining quality care, let alone accounting for patient, provider, and societal preferences.

The two fundamental challenges to this process are: 1) selecting quality indicators; and 2) deriving a benchmark rate. Although most indicators of surgical care have some degree of face validity, whether evidence-based or defined by expert opinion, their reliability and validity for comparisons within and between providers are generally not well-studied. Furthermore, there is uncertainty and controversy regarding the optimal methodology to determine specific benchmarks. Criterion-based benchmarking (CBB) is one method for estimating the appropriate rate of use of a specific therapy. CBB does not require detailed information about case mix; instead, the benchmark is informed by identifying barriers to access and applying those criteria to a subpopulation in order to estimate an appropriate rate of that therapy. This method is currently used by Cancer Care Ontario for planning and monitoring radiotherapy delivery in the province. The University of Alabama at Birmingham’s Achievable Benchmarks of Care (ABC) is another peer-group-based method, in which benchmark performance is identified by calculating the mean performance of the best-performing providers who together care for at least 10% of the population (the present paper from CKCis uses a conceptually similar, albeit more simplistic, process). All of these methods suffer from various drawbacks, including limitations of sample size for less common interventions. A fundamental limitation of the ABC method, and of the approach used by Lawson et al in this issue of CUAJ, is the uncertainty about whether the top 10% or “above average” rates are in fact optimal. Moreover, in setting benchmarks we need to take into account the effects of random variation in the metrics we consider. Fundamentally, we would only want to focus on care that has been demonstrated to be associated with non-random variation across different institutions or regions, as only then would we be able to identify (and improve upon) the structures and processes that are responsible for the differences observed.
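For readers curious how a peer-group benchmark of the ABC type can be computed, the sketch below ranks providers by crude performance and pools the best performers until they account for at least 10% of patients. The provider names and counts are invented, and the published ABC method additionally applies a small-denominator adjustment before ranking, which is omitted here for brevity.

```python
# ABC-style peer-group benchmark: pool the best-performing providers until
# they cover at least 10% of all patients, then use their pooled rate.
# Provider names and counts are invented; the published ABC method also
# applies a small-denominator adjustment before ranking, omitted here.

providers = [  # (name, eligible patients, patients receiving the indicated care)
    ("P1", 120, 108),
    ("P2", 40, 38),
    ("P3", 300, 225),
    ("P4", 80, 52),
    ("P5", 60, 57),
]

total_patients = sum(n for _, n, _ in providers)

# Rank providers from best to worst crude performance.
ranked = sorted(providers, key=lambda p: p[2] / p[1], reverse=True)

# Accumulate top performers until they account for >= 10% of patients.
covered = events = 0
for _, n, ev in ranked:
    covered += n
    events += ev
    if covered >= 0.10 * total_patients:
        break

benchmark = events / covered
print(f"ABC-style benchmark rate: {benchmark:.1%} from providers covering {covered} patients")
```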

Despite the well-described benefits in the corporate world, the impact of integrating benchmarking to improve the quality of healthcare has been much less obvious. Beyond the issues of setting benchmarks described above (should targets be easily achievable or aspirational?), implementation is potentially even more of an issue, given time constraints, resistance to change, and other competing priorities. A poorly executed benchmarking program has the potential to waste valuable resources and disengage the medical team. Moreover, an artificially high benchmark rate established using the ABC methodology could have dangerous implications in systems that encourage providers to achieve these rates. Despite these limitations, benchmarking is a valuable technique for quickly elevating the performance of an organization; recent efforts, including those by the Canadian Urological Association,3 have highlighted the need to ensure uniform provision of high-quality urological care. Benchmarking, as an integral component of a comprehensive quality improvement process, could optimally breed voluntary collaboration among individual clinicians and institutions, creating a spirit of competition that pushes the boundaries of best practice in urology.

References

1. Lawson KA, Saarela O, Liu Z, et al. Benchmarking quality for renal cancer surgery: Canadian Kidney Cancer information system (CKCis) perspective. Can Urol Assoc J. 2017;11:232–7. doi: 10.5489/cuaj.4397.
2. Ettorchi-Tardy A, Levif M, Michel P. Benchmarking: A method for continuous quality improvement in health. Healthcare Policy. 2012;7:e101–e119. doi: 10.12927/hcpol.2012.22872.
3. Kassouf W, Aprikian A, Black P, et al. Recommendations for the improvement of bladder cancer quality of care in Canada. Can Urol Assoc J. 2016;10:E46–80. doi: 10.5489/cuaj.3583.
