Journal of Oncology Practice. 2019 May 20;15(6):e583–e592. doi: 10.1200/JOP.18.00521

Improved Compliance With Anesthesia Quality Measures After Implementation of Automated Monthly Feedback

Patrick J McCormick 1, Cindy Yeoh 1, Raquel M Vicario-Feliciano 2, Kaitlin Ervin 3, Kay See Tan 1, Gloria Yang 1, Meghana Mehta 1, Luis Tollinche 1
PMCID: PMC6565385  PMID: 31107625

Abstract

PURPOSE:

Minimization of postoperative complications is important in patients with cancer. We wished to improve compliance with anesthesiology quality measures through staff education reinforced with automated monthly feedback.

METHODS:

The anesthesiology department implemented a program to capture and report quality metrics. After staff education, monthly e-mail reports were sent to each anesthesiology physician and nurse anesthetist to detail individual compliance rates for a set of quality measures. For each measure, the proportion of patient cases that passed the measure before and after implementation of the program was compared using a two-sample proportion test.

RESULTS:

After exclusions, we analyzed 15 of 23 quality measures. Of the 15 measures, 11 were process measures, and four were outcome measures. Of the 11 process measures, seven demonstrated statistically significant improvements (P < .01). The most improved measure was TEMP-02 (core temperature measurement), which increased from 69.6% to 85.7% (16.1% difference; P < .001). Also improved were PUL-02 (low tidal volume, less than 8 mL/kg ideal body weight; 15.4% difference; P < .001) and NMB-01 (train of four taken; 12.2% difference; P < .001). The outcome measure TEMP-03 (perioperative temperature management) had a statistically significant increase of a small magnitude (0.2% difference; P < .001). No other outcome measures showed statistically significant improvement.

CONCLUSION:

After implementation of a comprehensive quality improvement program, our group observed significant improvements in anesthesia quality measure compliance for several process measures. Future work is needed to determine if this initial success can be preserved and associated with improved outcomes.

INTRODUCTION

Minimization of postoperative complications is especially important in patients with cancer so that they may return to nonsurgical oncologic therapy as soon as possible. Postoperative complications have been found to worsen survival and increase the risk of recurrence in patients with colorectal cancer.1,2 One study found postoperative complications in 13.8% of patients with gastric cancer, with major complications in 3.6%.3

The impact of postoperative complications can be quantified with the metric return to intended oncologic treatment (RIOT), as described in a study by Aloia et al.4 In a cohort of patients with liver tumors, only 75% of those who had open surgery could RIOT, whereas 100% of those who had minimally invasive surgery could continue to additional therapy. Of the 25% who could not RIOT, 29 (52%) of 56 could not continue because of postoperative complications.

Recent studies in the anesthesiology literature have associated intraoperative findings with postoperative complications. Sustained intraoperative hypotension has been linked to increased postoperative mortality and end-organ dysfunction.5,6 In a meta-analysis, protective ventilation with low tidal volumes was associated with fewer postoperative pulmonary complications.7 Intraoperative hypothermia leads to coagulopathy, increased need for transfusion, surgical site infections, and prolonged recovery.8 Residual neuromuscular blockade has been shown to lead to higher rates of postoperative pneumonia.9

The performance of anesthesiologists and certified registered nurse anesthetists (CRNAs) can be measured by gathering objective data about perioperative practices and outcomes. In 2008, the Multicenter Perioperative Outcomes Group (MPOG) was formed to collect electronic anesthesiology records for research and quality improvement. MPOG now has more than 9 million anesthetic cases. Analysis of these perioperative data can be used to measure variations in the quality of care within and across institutions.10-12 A closely related quality organization, Anesthesiology Performance Improvement and Reporting Exchange (ASPIRE), uses MPOG data to generate objective measures of quality. ASPIRE provides anesthesia staff with specific performance feedback across various domains of care (eg, intraoperative fluid balance or blood pressure management).13 Specific feedback, in the form of a report card sent via e-mail, is given for 22 measures on the basis of a detailed definition of what each measure comprises. For example, the BP-02 measure records the percentage of patient cases in which more than 10 minutes elapsed between blood pressure recordings.

METHODS

ASPIRE Implementation

Our tertiary academic cancer hospital uses an electronic health record (EHR) system to maintain surgery and anesthesia records (Epic Systems, Verona, WI). A separate EHR is used for orders, notes, laboratory results, and medication administration (Allscripts, Chicago, IL). For every anesthesia administration, data about patient demographics, surgical details, anesthesia record, selected laboratory results, and medications are abstracted into a local MPOG database. These data are then deidentified and shared with the MPOG coordinating center. For each patient case and measure, the coordinating center database determines if a quality measure is relevant, whether the case passed or failed the measure, and which anesthesiology providers are responsible for the measure.

During the study period, ASPIRE offered 23 quality measures, which are listed in Appendix Table A1 (online only). Most are process measures, which means that the measures are satisfied by the provider’s performance of a specific element of patient care. For example, ensuring that no more than 10 minutes elapse between consecutive blood pressure measurements (ASPIRE measure BP-02) is a process measure. Some measures are outcome measures, which means that satisfaction of the measure depends on a particular outcome. An example outcome measure is AKI-01, for acute kidney injury. This measure interprets an increase in a patient's postoperative creatinine as a sign of acute kidney injury. Detailed descriptions of the numerator and denominator of each measure are available at the ASPIRE Web site.13
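The pass/fail logic of a process measure like BP-02 can be sketched as follows. This is an illustrative reconstruction, not ASPIRE's actual implementation; the function name and the interpretation of the 10-minute rule are assumptions.

```python
from datetime import datetime, timedelta

def passes_bp02(bp_times, max_gap=timedelta(minutes=10)):
    """True if no gap between consecutive blood pressure readings
    exceeds max_gap (illustrative sketch of a BP-02-style check)."""
    times = sorted(bp_times)
    return all(later - earlier <= max_gap
               for earlier, later in zip(times, times[1:]))

# Readings every 5 minutes pass; a single 15-minute gap fails the case.
t0 = datetime(2017, 9, 1, 8, 0)
regular = [t0 + timedelta(minutes=5 * i) for i in range(6)]
gapped = regular[:3] + [regular[2] + timedelta(minutes=15)]
print(passes_bp02(regular))  # True
print(passes_bp02(gapped))   # False
```

Note that a single delayed reading fails the entire case, which illustrates how strict such all-or-nothing process measures are.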

ASPIRE measures are maintained by a committee of ASPIRE quality champions from each participating site. Measures are added, removed, and modified after discussion and a majority vote. Some measures originated from prior anesthesia quality work at the national level, which was captured in the merit-based incentive payment system.

The ASPIRE system has a feedback e-mail component and a Web site component. In the fourth week of each month, a report card feedback e-mail is sent to all attending physicians, residents, and CRNAs who treated patients in the prior month (Appendix Fig A1, online only). The e-mail contains a link to the ASPIRE Web site, where individual patient cases can be reviewed along with departmental trends.

Before activation of the monthly ASPIRE feedback e-mails, the scope and content of the project were introduced at an all-staff meeting. The ASPIRE quality measures were detailed, and aims of quality improvement were highlighted in a slide presentation, which was later shared on the departmental intranet. Departmental quality champions engaged other anesthesiology attending physicians and staff to discuss the program and individual results after the first round of feedback e-mails was sent. At subsequent departmental open quality assurance meetings, we instituted a plan-do-study-act cycle in which low-compliance measures were discussed with the staff and recommendations for systemic change were made, to be revisited with quality measure analysis at future meetings.

Data Collection

This retrospective study was approved by the Memorial Sloan Kettering Cancer Center institutional review board, which waived the requirement for written informed consent. Quality measure data were collected for all patients who received anesthesia between June 1, 2017, and March 31, 2018.

Presentations about ASPIRE quality measures were made to providers in September 2017. The first ASPIRE e-mail to providers was sent on September 27, 2017, and covered patient cases from August 2017. Compliance data were separated into preimplementation and postimplementation groups, with September 1, 2017, as the cutoff date. Implementation was defined as the mailing of the first provider report card feedback e-mail. For each month and each measure, we calculated the total number of patient cases included in the measure and the number that passed the measure. We excluded measures that included fewer than 1,000 patient cases. Because of the large number of providers (more than 200) and the study period of 10 months, individual monthly compliance for low-volume measures was determined by only a few (often zero) patient cases.
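The preimplementation/postimplementation split and the exclusion of low-volume measures can be sketched as follows. This is an illustrative reconstruction, not the code used in the study; the function name, data layout, and returned tuple format are assumptions.

```python
from collections import defaultdict
from datetime import date

CUTOFF = date(2017, 9, 1)  # mailing of the first report card defines implementation

def compliance_by_phase(cases, min_denominator=1000):
    """cases: iterable of (measure_id, case_date, passed) tuples.
    Returns {measure_id: {'pre': (passed, total), 'post': (passed, total)}}
    for measures whose overall denominator meets min_denominator."""
    counts = defaultdict(lambda: {'pre': [0, 0], 'post': [0, 0]})
    for measure, case_date, passed in cases:
        phase = 'pre' if case_date < CUTOFF else 'post'
        counts[measure][phase][1] += 1   # denominator
        if passed:
            counts[measure][phase][0] += 1  # numerator
    return {m: {p: tuple(v) for p, v in c.items()}
            for m, c in counts.items()
            if c['pre'][1] + c['post'][1] >= min_denominator}
```

With synthetic input, a measure with 1,200 total cases is retained and its pre/post counts are reported separately, while a measure with only a handful of cases is dropped before analysis.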

Statistical Analysis

For each measure, the proportion of patient cases that passed the measure before and after implementation of the program (before and after the September 1, 2017, cutoff) was determined and compared using a two-sample proportion test. All statistical tests were two tailed; to account for multiple statistical tests and large sample sizes, we set the threshold for statistical significance at P = .01. All calculations were performed using R, version 3.5.0 (R Foundation for Statistical Computing, Vienna, Austria).
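The two-sample proportion test can be sketched in Python (the study itself used R). The case counts below are hypothetical, chosen only to mirror TEMP-02's reported percentages; the paper does not report per-phase denominators for this example.

```python
import math

def two_sample_proportion_test(x1, n1, x2, n2):
    """Two-tailed z-test for equality of two proportions,
    using the pooled-proportion standard error."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    # Two-tailed P value from the standard normal distribution
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts mirroring TEMP-02's reported change (69.6% -> 85.7%)
z, p = two_sample_proportion_test(3480, 5000, 8570, 10000)
print(p < .01)  # True: significant at the study's threshold
```

With denominators in the thousands, even modest absolute differences yield very small P values, which is one reason the authors tightened the significance threshold to .01.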

RESULTS

During the study period, our practice completed 40,228 anesthesia patient cases. Appendix Table A2 (online only) shows the number of patient cases included for each measure. Of the 23 measures, 16 had a total denominator greater than 1,000 patient cases across the entire study period. The excluded measures concerned glucose management, transfusion management, pediatric postoperative nausea and vomiting, and postanesthetic transfer of care. A measure that tracked colloid use was also excluded because our practice disagreed with the intent of the measure; these exclusions left 15 measures to be analyzed.

The proportion of compliant patient cases for each measure is listed in Table 1. All measures showed improved compliance in the postimplementation phase compared with the preimplementation phase. Of the 11 process measures, seven demonstrated statistically significant improvements (P < .01). The most improved measure was TEMP-02 (core temperature measurement), which increased from 69.6% to 85.7% (a 16.1% difference; P < .001). Also improved were PUL-02 (low tidal volume, less than 8 mL/kg ideal body weight; 15.4% difference; P < .001) and NMB-01 (train of four taken; 12.2% difference; P < .001). The other measures with significant improvement were BP-02 (avoiding monitoring gaps), NMB-02 (reversal administered), PUL-01 (low tidal volume, less than 10 mL/kg ideal body weight), and TEMP-01 (active warming).

TABLE 1.

Compliance Before and After Implementation for All Measures

(Table 1 is available as an image in the original publication.)

Of the four outcome measures, only one demonstrated a statistically significant improvement, although the magnitude was small. TEMP-03 (perioperative temperature management) is an outcome measure that is satisfied when a patient has a temperature of 35.5°C or greater at the end of anesthesia. Compliance improved from 99.7% to 99.9%—a difference of 0.2% (P < .001).

A temporal relationship with compliance is demonstrated for some process measures in Figure 1. The vertical dotted line marks month zero, the first month of anesthesia patient cases after the initial e-mails were sent to the staff. The most improved measures (TEMP-02, PUL-02, and NMB-01) all showed significant month-to-month improvement beginning with the advent of the feedback e-mails.

Fig 1.


Monthly departmental compliance with individual quality measures. Month number −3 corresponds to June 2017. The vertical dotted line at month 0 (September 2017) is the point at which the anesthesiology quality measure program began. Process measures are in the left facet, and outcome measures (AKI-01, CARD-01, CARD-02, and TEMP-03) are in the right facet.

DISCUSSION

Our group demonstrated significant improvements in anesthesia quality measure compliance for several process measures after implementation of a comprehensive quality improvement program. This program included regular staff training and automated monthly reports of individual compliance.

The quality measures that improved the most were process measures that were not being satisfied before the introduction of the ASPIRE measures. A combination of staff education and monthly feedback e-mails contributed to the improvement in process measures. On the basis of discussions with staff members, pre-ASPIRE process measure compliance was often poor because anesthesiologists and CRNAs overestimated their actual compliance. Some ASPIRE measures are quite strict; for example, a single delay in blood pressure measurement during a multi-hour patient case results in failure for that measure. As is common with quality improvement initiatives, it is difficult to point to one particular component as most or least responsible for practice changes.

It is well known that individuals who know they are being observed behave differently, which often improves performance. This phenomenon is known as the Hawthorne effect (HE).14 The HE has been used intentionally to improve performance by periodically auditing individuals in a quality improvement program. The HE is likely at work in this study, but its magnitude is difficult to quantify. Our intention is to sustain the effect through celebration of compliance improvements and periodic reminders to staff of ways to improve compliance for difficult measures. Future studies should address the effect of automated monthly e-mailed reports on sustained improvement in quality measures. Our aim is to revisit this project one year after its inception and report whether the compliance improvements demonstrated in Figure 1 persist.

Our group did not institute any notable process changes during the study period to assist clinicians in meeting the process measures. We provided a slide presentation to providers that described how to pass the process measures by being cognizant of low mean arterial pressure, high tidal volume, and core temperature measurement. ASPIRE champions reviewed this presentation with individual staff in the operating room. The detailed report card that was e-mailed each month allowed providers to refer back to specific patient cases in which they failed process measures, and this motivated our clinicians to improve on these process measures in the following month. Our anesthesiology department later implemented a decision support system to notify clinicians about potential process measure failures, but this occurred after the study period.

Process measure improvements can sometimes be attributed to better documentation without any actual change in care. However, several of the measures are based on vital sign and ventilator data that are collected automatically each minute. Added documentation alone cannot explain improvements in overall mean tidal volume, core body temperature, and mean arterial pressure; these changes must be attributed to something other than documentation.

It is difficult to prove that improvements in process measures lead to improved outcomes. A large study of the timing of surgical antibiotic prophylaxis did not show an association between timely antibiotic administration and surgery site infection occurrence.15 It is possible that this process measure lacks sensitivity or, less likely, that the process measure does not affect outcomes. However, it is equally plausible that the intervention does not change behavior to a large enough degree to affect outcomes. For example, in a large intraoperative decision support study using electronic alerts to notify clinicians of hypotension and low brain activity, patients in the treatment arm had fewer minutes of hypotension and low brain activity but not enough to reduce 90-day mortality.16

An advantage of this quality improvement system compared with past systems is that it is fully automated and relies largely on automatically collected data. Each month, our local MPOG coordinator samples a small set of patient cases and runs a data validation program to ensure that no artifacts have been introduced. No other labor is required to collect the data and disseminate individual reports.

The quality program we describe has been implemented at multiple large institutions, although to our knowledge, we are the only site that is a cancer center. We feel that the program is generalizable and scalable to other cancer centers. There is an upfront implementation cost to transform local EHR data to match the MPOG data schema; after this is accomplished, the month-to-month maintenance is minimal. Future work will determine if this initial success can be preserved and associated with improved outcomes, such as RIOT.

Appendix

Fig A1.


Sample monthly e-mail to anesthesiology attendings.

TABLE A1.

Description of All ASPIRE Quality Measures

(Table A1 is available as an image in the original publication.)

TABLE A2.

Number of Patient Cases Included for Each Quality Measure

(Table A2 is available as an image in the original publication.)

Footnotes

Supported by the National Cancer Institute of the National Institutes of Health under Awards No. R25CA020449 and P30CA008748.

The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

AUTHOR CONTRIBUTIONS

Conception and design: Patrick J. McCormick, Cindy Yeoh, Kay See Tan, Luis Tollinche

Collection and assembly of data: Patrick J. McCormick, Cindy Yeoh, Gloria Yang, Meghana Mehta, Luis Tollinche

Data analysis and interpretation: Patrick J. McCormick, Cindy Yeoh, Raquel M. Vicario-Feliciano, Kaitlin Ervin, Kay See Tan, Meghana Mehta, Luis Tollinche

Manuscript writing: All authors

Final approval of manuscript: All authors

Accountable for all aspects of the work: All authors

AUTHORS' DISCLOSURES OF POTENTIAL CONFLICTS OF INTEREST

Improved Compliance With Anesthesia Quality Measures After Implementation of Automated Monthly Feedback

The following represents disclosure information provided by authors of this manuscript. All relationships are considered compensated. Relationships are self-held unless noted. I = Immediate Family Member, Inst = My Institution. Relationships may not relate to the subject matter of this manuscript. For more information about ASCO's conflict of interest policy, please refer to www.asco.org/rwc or ascopubs.org/jop/site/ifc/journal-policies.html.

Patrick J. McCormick

Stock and Other Ownership Interests: Johnson & Johnson (I)

Consulting or Advisory Role: TREG Consultants (I)

Luis Tollinche

Research Funding: Merck

No other potential conflicts of interest were reported.

REFERENCES

  • 1.Artinyan A, Orcutt ST, Anaya DA, et al. Infectious postoperative complications decrease long-term survival in patients undergoing curative surgery for colorectal cancer: A study of 12,075 patients. Ann Surg. 2015;261:497–505. doi: 10.1097/SLA.0000000000000854. [DOI] [PubMed] [Google Scholar]
  • 2.Aoyama T, Oba K, Honda M, et al. Impact of postoperative complications on the colorectal cancer survival and recurrence: Analyses of pooled individual patients’ data from three large phase III randomized trials. Cancer Med. 2017;6:1573–1580. doi: 10.1002/cam4.1126. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 3.Huang CM, Tu RH, Lin JX, et al. A scoring system to predict the risk of postoperative complications after laparoscopic gastrectomy for gastric cancer based on a large-scale retrospective study. Medicine (Baltimore). 2015;94:e812. doi: 10.1097/MD.0000000000000812. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 4.Aloia TA, Zimmitti G, Conrad C, et al. Return to intended oncologic treatment (RIOT): A novel metric for evaluating the quality of oncosurgical therapy for malignancy. J Surg Oncol. 2014;110:107–114. doi: 10.1002/jso.23626. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5.Bijker JB, van Klei WA, Vergouwe Y, et al. Intraoperative hypotension and 1-year mortality after noncardiac surgery. Anesthesiology. 2009;111:1217–1226. doi: 10.1097/ALN.0b013e3181c14930. [DOI] [PubMed] [Google Scholar]
  • 6.Sun LY, Wijeysundera DN, Tait GA, et al. Association of intraoperative hypotension with acute kidney injury after elective noncardiac surgery. Anesthesiology. 2015;123:515–523. doi: 10.1097/ALN.0000000000000765. [DOI] [PubMed] [Google Scholar]
  • 7.Serpa Neto A, Hemmes SN, Barbas CS, et al. Protective versus conventional ventilation for surgery: A systematic review and individual patient data meta-analysis. Anesthesiology. 2015;123:66–78. doi: 10.1097/ALN.0000000000000706. [DOI] [PubMed] [Google Scholar]
  • 8.Sessler DI. Perioperative thermoregulation and heat balance. Lancet. 2016;387:2655–2664. doi: 10.1016/S0140-6736(15)00981-2. [DOI] [PubMed] [Google Scholar]
  • 9.Bulka CM, Terekhov MA, Martin BJ, et al. Nondepolarizing neuromuscular blocking agents, reversal, and risk of postoperative pneumonia. Anesthesiology. 2016;125:647–655. doi: 10.1097/ALN.0000000000001279. [DOI] [PubMed] [Google Scholar]
  • 10.Aziz MF, Healy D, Kheterpal S, et al. Routine clinical practice effectiveness of the Glidescope in difficult airway management: An analysis of 2,004 Glidescope intubations, complications, and failures from two institutions. Anesthesiology. 2011;114:34–41. doi: 10.1097/ALN.0b013e3182023eb7. [DOI] [PubMed] [Google Scholar]
  • 11.Berman MF, Iyer N, Freudzon L, et al. Alarm limits for intraoperative drug infusions: A report from the multicenter perioperative outcomes group. Anesth Analg. 2017;125:1203–1211. doi: 10.1213/ANE.0000000000002305. [DOI] [PubMed] [Google Scholar]
  • 12.Colquhoun DA, Naik BI, Durieux ME, et al. Management of 1-lung ventilation-variation and trends in clinical practice: A report from the multicenter perioperative outcomes group. Anesth Analg. 2018;126:495–502. doi: 10.1213/ANE.0000000000002642. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 13.Multicenter Perioperative Outcomes Group: ASPIRE Quality: Our measures. https://mpog.org/quality/our-measures
  • 14.McCambridge J, Witton J, Elbourne DR. Systematic review of the Hawthorne effect: New concepts are needed to study research participation effects. J Clin Epidemiol. 2014;67:267–277. doi: 10.1016/j.jclinepi.2013.08.015. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 15.Hawn MT, Richman JS, Vick CC, et al. Timing of surgical antibiotic prophylaxis and the risk of surgical site infection. JAMA Surg. 2013;148:649–657. doi: 10.1001/jamasurg.2013.134. [DOI] [PubMed] [Google Scholar]
  • 16.McCormick PJ, Levin MA, Lin HM, et al. Effectiveness of an electronic alert for hypotension and low bispectral index on 90-day postoperative mortality: A prospective, randomized trial. Anesthesiology. 2016;125:1113–1120. doi: 10.1097/ALN.0000000000001296. [DOI] [PubMed] [Google Scholar]

