Canadian Journal of Surgery. 2013 Aug;56(4):224–226. doi: 10.1503/cjs.020712

Identification and use of operating room efficiency indicators: the problem of definition

Tamas Fixler, James G Wright
PMCID: PMC3728239  PMID: 23883490

To measure operating room (OR) performance and efficiency, hospitals need scorecards or dashboards displaying and tracking core performance indicators.1–3 Scorecards should be monitored on an ongoing basis and benchmarked both internally against performance over time and externally against established best practices with the intent of continuous performance improvement.1

One of the main challenges in developing a scorecard for measuring and monitoring OR performance is determining which indicators are most important for inclusion. Ideally, indicators should draw on data already available in OR information systems (i.e., be readily measurable); qualitative measures, such as satisfaction surveys, should probably be avoided because most are not validated.2 However, there appears to be variation among hospitals in terms of which data elements and indicators are collected and analyzed,4,5 posing significant challenges for external benchmarking. Only with uniform data and indicator definitions can hospitals benchmark against one another, share learning and, ultimately, converge on best practice. Moreover, owing to the diverse group of stakeholders involved — surgeons, anesthesiologists, nurses and hospital administrators — it is often difficult to achieve consensus on which indicators are most important for measuring performance.1 For example, to a hospital administrator, optimal performance may mean minimum variance from the budget. To a surgeon, it may mean on-time starts, rapid turnovers and few cancellations.

The Canadian Paediatric Surgical Wait Times (CPSWT) Project, launched in 2007, is a national effort to measure wait times for children and youth in need of surgery at 15 pediatric academic health sciences centres across 8 Canadian provinces.6 The CPSWT Project provided a unique opportunity to share knowledge across the participating hospitals and to identify which performance indicators were most commonly used and viewed as most critical for measuring OR performance, particularly from a utilization and efficiency perspective. We asked the children’s hospitals to rank the most important indicators, and they identified the following 8 parameters: off-hours surgery, same-day cancellation rate, first case start-time accuracy, OR use, percentage of unplanned closures, case duration accuracy, average turnover time and excess staffing costs. However, the hospitals lacked common definitions for these indicators, an important obstacle to external benchmarking.

Off-hours surgery measures the volume or percentage of surgery performed outside of scheduled OR time: during evenings, nights, weekends and holidays. Off-hours surgery may result from urgent/emergent cases or from over-run hours (i.e., elective surgery exceeding its scheduled time). The indicator should differentiate between these 2 scenarios, as they require different response strategies if deemed excessive.
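
As a minimal sketch of how that differentiation could be built into the calculation itself (not a method prescribed by the surveyed hospitals), the following splits off-hours minutes by case priority; the field names ('start', 'end', 'priority') are hypothetical stand-ins for data most OR information systems record:

```python
from datetime import datetime

def off_hours_minutes(cases, day_start, day_end):
    """Split off-hours surgical minutes into urgent/emergent time and
    elective over-run time (surgery running past scheduled OR hours).

    cases: dicts with hypothetical keys 'start' and 'end' (patient
    in/out of room, as datetimes) and 'priority' ('elective'/'urgent').
    day_start/day_end: datetimes bounding scheduled OR hours that day.
    """
    urgent_min = overrun_min = 0.0
    for case in cases:
        # Case minutes falling outside the scheduled window.
        inside = max((min(case['end'], day_end)
                      - max(case['start'], day_start)).total_seconds(), 0) / 60
        duration = (case['end'] - case['start']).total_seconds() / 60
        outside = duration - inside
        if case['priority'] == 'urgent':
            urgent_min += outside
        else:
            overrun_min += outside
    return urgent_min, overrun_min

# An elective case over-running a 07:30-15:30 day by 75 min, and a
# 90-min urgent case done entirely at night:
day = datetime(2013, 8, 1, 7, 30), datetime(2013, 8, 1, 15, 30)
cases = [
    {'start': datetime(2013, 8, 1, 13, 0),
     'end': datetime(2013, 8, 1, 16, 45), 'priority': 'elective'},
    {'start': datetime(2013, 8, 1, 22, 0),
     'end': datetime(2013, 8, 1, 23, 30), 'priority': 'urgent'},
]
print(off_hours_minutes(cases, *day))  # (90.0, 75.0)
```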

Same-day cancellation rate measures the percentage of surgical procedures cancelled (i.e., rescheduled to another day or cancelled altogether) on the day of surgery. Some variability exists in how the surveyed hospitals define “same-day” cancellations. In some cases, this refers only to cancellations on the day of surgery, whereas in other cases it refers to any cancellations after 12:00 pm or 1:00 pm the day before the scheduled day of surgery. Moreover, some hospitals capture only elective cancellations, whereas others capture all cancellations. Ideally, all types of cancellations, including those made the previous day, should be captured, with explicit categories for each.
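
Because the “same-day” window itself varies, a shared definition needs to make the cutoff explicit. The calculation below is illustrative only, with hypothetical field names ('surgery_date', 'cancelled_at') and the cutoff expressed as a parameter:

```python
from datetime import datetime, time, timedelta

def same_day_cancellation_rate(cases, cutoff_time=time(0, 0)):
    """Percentage of booked cases cancelled inside the 'same-day' window.

    cases: dicts with hypothetical keys 'surgery_date' (a date) and
    'cancelled_at' (a datetime, or None if the case proceeded).
    cutoff_time: when the window opens; time(0, 0) counts only
    cancellations on the day of surgery itself, while time(13, 0) also
    counts cancellations after 1:00 pm the day before.
    """
    if not cases:
        return 0.0
    cancelled = 0
    for case in cases:
        if case['cancelled_at'] is None:
            continue
        if cutoff_time == time(0, 0):
            window_open = datetime.combine(case['surgery_date'], time(0, 0))
        else:
            window_open = datetime.combine(
                case['surgery_date'] - timedelta(days=1), cutoff_time)
        if case['cancelled_at'] >= window_open:
            cancelled += 1
    return 100.0 * cancelled / len(cases)
```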

First case start-time accuracy measures the percentage of first cases of the day that start on time. An on-time start is typically defined as the patient being in the OR at the scheduled start time; however, hospitals commonly allow a grace period for this indicator (i.e., the patient must be in the room within a certain number of minutes of the scheduled start time for the case to be considered on time). This grace period varies across surveyed hospitals and in the literature, ranging from 0 to 15 minutes.7,8 In addition, some surveyed hospitals exclude from this indicator cases that start late owing to delayed access to a postoperative bed. An alternative definition of start time offered in the literature is the time of incision (in lieu of patient in the OR). This has been suggested as a superior benchmark for start time because the patient and all OR staff must be present before the incision can be made.9
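
Whatever grace period a hospital adopts, the computation is a simple threshold test. A minimal sketch with hypothetical field names; patient-in-room start times are assumed, but incision times could be substituted under the alternative definition:9

```python
def first_case_on_time_rate(first_cases, grace_minutes=10):
    """Percentage of first cases of the day that start on time.

    first_cases: dicts with hypothetical keys 'scheduled_start' and
    'actual_start' (datetimes). A case is on time if it starts no more
    than grace_minutes late; surveyed grace periods range from 0 to 15.
    """
    if not first_cases:
        return 0.0
    on_time = sum(
        1 for c in first_cases
        if (c['actual_start'] - c['scheduled_start']).total_seconds()
        <= grace_minutes * 60
    )
    return 100.0 * on_time / len(first_cases)
```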

Operating room use measures OR time used as a percentage of OR time budgeted. More specifically, the indicator can measure the extent to which the perioperative unit as a whole uses regular hours of operation for patient care, or the extent to which a service or surgeon uses its allocated block time for patient care. However, this indicator takes a number of different forms, both in the literature and among surveyed hospitals. For example, prime-time use examines use during regular elective hours, whereas non-prime-time use examines use outside regular elective hours.4 Raw use counts only the time that a patient is in the OR during block or resource time, whereas adjusted use counts the time that a patient is in the OR plus some “credit” for clean-up and set-up time between procedures.10 Both prime-time and non-prime-time use should be determined. Moreover, raw use is preferred, but it requires a separate measure of turnover and can be expected to vary substantially across services (owing to variations in case length and turnover times).
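
The difference between raw and adjusted use reduces to whether turnover time is credited to the numerator. As an illustration only (the surveyed hospitals do not share a single formula):

```python
def raw_use(case_minutes, block_minutes):
    """Raw use: patient-in-room time as a share of allocated block time."""
    return 100.0 * sum(case_minutes) / block_minutes

def adjusted_use(case_minutes, turnover_minutes, block_minutes):
    """Adjusted use: also credits clean-up/set-up time between cases."""
    return 100.0 * (sum(case_minutes) + sum(turnover_minutes)) / block_minutes

# Three cases in an 8-hour (480 min) block, two 25-min turnovers:
print(raw_use([110, 95, 120], 480))                  # about 67.7
print(adjusted_use([110, 95, 120], [25, 25], 480))   # about 78.1
```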

Percentage of unplanned closures measures the percentage of OR time lost owing to unplanned closures, which can result from the unavailability of human resources or hospital resources (e.g., beds, supplies or equipment), inadequate notification of OR time forfeited by services or surgeons, and environmental factors (e.g., flu pandemic). What constitutes an unplanned closure varies across surveyed hospitals. However, the hospitals stressed the importance of tracking the reasons for any unplanned closure to facilitate appropriate corrective action, as unplanned closures negatively affect resource use. Consensus definitions, including agreed categories of reasons for closure, are needed before this indicator can be compared across hospitals.
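
Even without a consensus definition, the reason tracking the hospitals called for can be built into the calculation. A minimal sketch, with hypothetical 'minutes_lost' and 'reason' fields:

```python
from collections import defaultdict

def unplanned_closure_pct(closures, scheduled_minutes):
    """Percentage of scheduled OR time lost to unplanned closures,
    broken down by recorded reason to guide corrective action.

    closures: dicts with hypothetical keys 'minutes_lost' and 'reason'
    (e.g., 'staffing', 'beds', 'equipment', 'late notification').
    """
    by_reason = defaultdict(float)
    for c in closures:
        by_reason[c['reason']] += c['minutes_lost']
    total = 100.0 * sum(by_reason.values()) / scheduled_minutes
    breakdown = {r: 100.0 * m / scheduled_minutes
                 for r, m in by_reason.items()}
    return total, breakdown
```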

Case duration accuracy measures the percentage of OR cases whose durations are accurately estimated. Typically, an estimate is considered accurate when the difference between the actual and estimated (scheduled) case duration falls within a certain threshold (most commonly 15 min).7,8 There is variability, however, among surveyed hospitals and in the literature in terms of how case duration is defined. Case duration can be defined as the time from the start of room set-up (before the case) to the completion of room clean-up (after the case); the time from patient in the room to patient out of the room; or cut-to-close time. The first option considers the accuracy of turnover time estimates, whereas the latter 2 do not. Case duration should be measured as patient in the room to patient out of the room, with turnover measured separately.
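
Under the recommended patient-in to patient-out definition, the indicator reduces to a threshold comparison. A minimal sketch with hypothetical field names:

```python
def case_duration_accuracy(cases, threshold_minutes=15):
    """Percentage of cases whose actual duration fell within
    threshold_minutes of the scheduled estimate.

    cases: dicts with hypothetical keys 'scheduled_minutes' and
    'actual_minutes', both measured patient-in to patient-out of room.
    """
    if not cases:
        return 0.0
    accurate = sum(
        1 for c in cases
        if abs(c['actual_minutes'] - c['scheduled_minutes'])
        <= threshold_minutes
    )
    return 100.0 * accurate / len(cases)
```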

Average turnover time measures the average time elapsed between cases and is commonly defined as the time from the prior patient exiting the OR to the succeeding patient entering the OR.10 It includes clean-up and set-up time, but should generally exclude delays between cases (i.e., gaps in the schedule).2 However, how such delays are identified and excluded varies across the surveyed hospitals and may materially affect the measurement of the indicator. For example, some hospitals treat any time between cases exceeding a defined interval (e.g., 60 min) as a delay and exclude it from the calculation. Other hospitals exclude from the calculation only those intervals for which a specific delay code has been recorded. Further analyses will be required to determine ideal exclusion criteria for calculating average turnover times.
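
The two exclusion rules described above can be expressed as parameters of the same calculation. The sketch below is illustrative; the 60-minute cap and the delay flags are assumptions drawn from the examples in this paragraph:

```python
def average_turnover(gap_minutes, max_turnover=60, delay_flags=None):
    """Average turnover time, excluding scheduling gaps and coded delays.

    gap_minutes: minutes from prior patient out to next patient in.
    max_turnover: gaps longer than this are treated as delays and
    excluded (the first exclusion rule above).
    delay_flags: optional parallel list of booleans marking gaps with a
    recorded delay code (the second exclusion rule).
    """
    if delay_flags is None:
        delay_flags = [False] * len(gap_minutes)
    kept = [g for g, flagged in zip(gap_minutes, delay_flags)
            if g <= max_turnover and not flagged]
    return sum(kept) / len(kept) if kept else 0.0
```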

Excess staffing costs measures the staffing costs associated with underused and overused OR time. Most simply, it can capture any costs above baseline or budgeted staffing levels. A more sophisticated version directly attributes excess staffing costs to underused and overused OR time. For example, if only 4 hours of cases are scheduled into a fully staffed 8-hour OR day, the excess staffing costs owing to this underused time would be 50% (8 h staffed for 4 h of surgery). On the other hand, if 9 hours of elective cases are performed in an 8-hour OR day, the excess staffing costs owing to this overused time would be 12.5% (1 h/8 h) multiplied by the additional cost of overtime.2 (In this example, if overtime is considered to be double the cost of regular time, the excess staffing costs would be 25%.) Owing to variations in case mix and cost structure across hospitals, this indicator may not be well suited to external benchmarking. As such, hospitals should use it for internal monitoring, in conjunction with the indicators discussed previously, to help identify the root causes of excess staffing costs.
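
The worked example above can be reproduced directly; this sketch simply restates the arithmetic, with the doubling of overtime cost as the assumed multiplier:

```python
def excess_staffing_cost_pct(case_hours, staffed_hours,
                             overtime_multiplier=2.0):
    """Excess staffing cost as a percentage of the regular staffed day.

    Underused time is staffed but idle; overused time is costed at
    overtime_multiplier times regular time (2.0 in the worked example).
    """
    if case_hours <= staffed_hours:
        return 100.0 * (staffed_hours - case_hours) / staffed_hours
    return (100.0 * (case_hours - staffed_hours) / staffed_hours
            * overtime_multiplier)

print(excess_staffing_cost_pct(4, 8))  # 50.0: 8 h staffed for 4 h of surgery
print(excess_staffing_cost_pct(9, 8))  # 25.0: 1 h overtime at double cost
```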

In conclusion, developing a scorecard or dashboard tracking a set of core performance indicators is essential for measuring, monitoring and benchmarking OR performance and efficiency. Although we identified 8 indicators as most critical for monitoring OR performance among 15 children’s hospitals, definitions for these indicators vary in the literature and across the hospitals. The time has come for surgeons and administrators to agree on common definitions to ensure we are using resources effectively and efficiently. The most logical course would be for professional associations to develop common metrics and definitions. While each institution may have specific needs, a core set of measures must be developed. As with the development of the Paediatric Access Targets for Surgery, a consensus of content experts, including surgeons, anesthesiologists, nurses and hospital administrators, could resolve the issue of variable metrics and definitions.

Acknowledgements

Publication of this document has been made possible through a financial contribution from Health Canada. The views expressed herein do not necessarily represent the views of Health Canada.

Footnotes

Competing interests: None declared.

Contributors: The authors contributed equally to this work.

References

1. Healthcare Financial Management Association; McKesson Information Solutions. Comprehensive performance management in the operating room. Healthc Financ Manage. 2002;56(suppl):1–7, following 80.
2. Macario A. Are your hospital operating rooms “efficient”? A scoring system with eight performance indicators. Anesthesiology. 2006;105:237–40. doi: 10.1097/00000542-200608000-00004.
3. Spath P. Practical guide for improving performance. OR Manager. 2004;20:20–3.
4. Zellermeyer V. Report of the Surgical Process Analysis and Improvement Expert Panel. Toronto (ON): Ontario Ministry of Health and Long-Term Care; 2005 [accessed 2008 Sep. 26]. Available: www.ontla.on.ca/library/repository/mon/12000/256887.pdf.
5. 2007 Annual Report of the Office of the Auditor General of Ontario. Toronto (ON): The Office; 2007 [accessed 2009 Nov. 6]. Chapter 3, Section 3.09: Hospitals — management and use of surgical facilities; pp. 207–31. Available: www.auditor.on.ca/en/reports_en/en07/309en07.pdf.
6. Wright JG, Menaker RJ; Canadian Paediatric Surgical Wait Times Project Study Group. Waiting for children’s surgery in Canada: the Canadian Paediatric Surgical Wait Times project. CMAJ. 2011;183:E559–64. doi: 10.1503/cmaj.101530.
7. Heiser R. Schedule and block time management in the real world. Presented at: Best Practice Perioperative Program Design (Sullivan Healthcare Consulting); 2009 Sept. 28–29; Toronto (ON).
8. OR Benchmarks Collaborative: performance indicators definitions and calculations. Release 3.0. San Francisco (CA): McKesson Corporation; 2009.
9. What it takes to get OR cases started on time in the morning. OR Manager. 2005;21(12):1, 16.
10. Donham RT. Defining measurable OR-PR scheduling, efficiency, and utilization data elements: the Association of Anesthesia Clinical Directors procedural times glossary. Int Anesthesiol Clin. 1998;1:15–29. doi: 10.1097/00004311-199803610-00005.
