Abstract
BACKGROUND
Rapid response teams (RRT) are used to prevent adverse events in patients with acute clinical deterioration, and to save costs by avoiding unnecessary transfers in patients with lower-acuity problems. However, determining the optimal use of RRT services is challenging. One method of benchmarking performance is to determine whether a department's event rate is commensurate with its volume and acuity.
STUDY DESIGN
Using admissions between 2009 and 2011 to 18 distinct surgical services at a tertiary care center, we developed logistic regression models to predict RRT activation, accounting for days at-risk for RRT and patient acuity, using claims modifiers for risk of mortality (ROM) and severity of illness (SOI). The model was used to compute observed-to-expected (O/E) RRT use by service.
RESULTS
Of 45,651 admissions, 728 (1.6%, or 3.2 per 1,000 inpatient days) resulted in 1 or more RRT activations. Use varied widely across services (0.4% to 6.2% of admissions; 1.39 to 8.73 per 1,000 inpatient days, unadjusted). In the multivariable model, the greatest contributors to the likelihood of RRT were days at risk, SOI, and ROM. The O/E RRT use ranged from 0.32 to 2.82 across services, with 8 services having an observed value that was significantly higher or lower than predicted by the model.
CONCLUSIONS
We developed a tool for identifying outlying use of an important institutional medical resource. The O/E computation provides a starting point for further investigation into the reasons for variability among services, and a benchmark for quality and process improvement efforts in patient safety.
Rapid response teams (RRT), also known as medical emergency teams, have been implemented in hospitals in order to prevent adverse events in patients with acute clinical deterioration.1 The rationale for implementing RRTs is simple and intuitive: patients often experience clinical deterioration, manifested by changes in sensorium, abnormal vital signs, or other concerning symptoms and signs, well before a cardiac or respiratory arrest. Therefore, identifying such a patient and intervening at an earlier stage in order to stabilize or triage the patient to a higher level of care could prevent morbidity or mortality. Evidence of the “failure to rescue” such deteriorating patients with existing hospital resources has prompted the widespread adoption of RRTs.2,3 In addition, RRTs have the potential to save costs by avoiding unnecessary transfer in patients with lower-acuity problems.
Typical RRTs consist of critical care nurses, nurse practitioners, and/or respiratory therapists, with critical care physicians involved as needed. Most hospitals have an RRT oversight steering committee involving ICU medical directors, critical care physicians, nursing leaders, and administrators, who help develop protocols, provide training and education, guide debriefings after calls, collect and review data, and initiate process improvement. Criteria for calling the RRT typically include acute changes in vital signs as well as staff concern (“afferent limb”). The RRT is then tasked with evaluating the patient, providing appropriate treatment including critical care intervention, and triaging the patient to a higher level of care if necessary (“efferent limb”). This model aims to facilitate the “rescue” of deteriorating patients and potentially save lives.
Despite their broad implementation, evidence for the effectiveness of RRTs is mixed, in part due to difficulty demonstrating an impact of RRTs on preventable adverse outcomes and cost of care.4-7 An alternative to measuring the impact of RRTs on downstream outcomes and cost is to begin by benchmarking the use of RRTs to determine whether a department's use is commensurate with its volume and acuity when compared with other services. Therefore, we aimed to measure and compare service-level use of RRT activations, accounting for the volume and patient acuity on each service.
METHODS
This project was not regulated by the Institutional Review Board because of its primary role as a quality improvement project. After a pilot program from October 2005 to March 2006, Vanderbilt University Medical Center instituted an RRT on April 1, 2006. The RRT at Vanderbilt follows a liberal policy for activation, wherein any doctor, nurse, staff member, patient, visitor, or family member may activate the RRT in response to early warning signs of a medical emergency (Table 1), or even if they notice that “something is just not right.” Patients and families are informed of the policy on admission, and a poster displaying the phone number is posted in each patient's room. The team comprises a registered nurse or charge nurse from the ICU, a respiratory care supervisor or designee, a nurse practitioner or physician assistant from the ICU, and an ICU attending or physician designee as needed. Once the RRT arrives at the bedside, its goals are to stabilize the patient; decide on and initiate immediate management; triage the patient to the appropriate level of care; and coordinate care and facilitate communication with the primary team and/or ICU physicians. Care is facilitated by a structured process flowchart and a set of algorithms, such as evaluation of the common initiating signs (eg, bradycardia, tachycardia, hypoxemia, tachypnea, hypotension, opiate overdosage or sedation), and management of common diagnoses (eg, sepsis, medication error).
Table 1.
Rapid Response Team Poster, Displayed in Each Patient's Room and Distributed around the Hospital
| EARLY WARNING SIGNS for Calling the Rapid Response Team | |
|---|---|
| If the patient displays any of the following “EARLY WARNING SIGNS”: Call 1-1111 and request the Rapid Response Team without delay; Then call the patient's primary team physician | |
| Staff concerned/worried | “THE PATIENT DOES NOT LOOK/ACT RIGHT,” gut instinct that patient is beginning a downward spiral even if none of the physiological triggers have yet occurred. |
| Change in respiratory rate | The patient's RESPIRATORY RATE is less than 8 or greater than 30 breaths per minute. |
| Change in oxygenation | PULSE OXIMETER decreases below 90% or there is an INCREASE IN O2 requirements >8 L. |
| Labored breathing | The patient's BREATHING BECOMES LABORED. |
| Change in heart rate | The patient's HEART RATE changes to less than 40 bpm or greater than 120 bpm. |
| Change in blood pressure | The patient's SYSTOLIC BLOOD PRESSURE drops below 90 mmHg or rises above 200 mmHg. |
| Chest pain | Patient complains of CHEST PAIN. |
| Hemorrhage | The patient develops uncontrolled bleeding from any site or port. |
| Decreased level of consciousness | The patient becomes SOMNOLENT, DIFFICULT TO AROUSE, CONFUSED OR OBTUNDED. |
| Onset of agitation/delirium | The patient becomes AGITATED OR DELIRIOUS. |
| Seizure | The patient has a SEIZURE. |
| Other alterations in consciousness | ANY OTHER CHANGES IN MENTAL STATUS OR CNS STATUS such as a sudden blown pupil, onset of slurred speech, onset of unilateral limb or facial weakness, etc. |
Bpm, beats per minute.
During a 3-year period, from January 2009 to December 2011, data were collected prospectively on all adult patients with RRT activations, using a methodology adapted from guidelines proposed previously.8 The database of RRT activations was managed using the Research Electronic Data Capture (REDCap) application, a secure web-based data management system developed and hosted at Vanderbilt University.9 This database was used to identify patients who had an RRT activation and the date of the RRT activation. It was then linked to an institutional administrative claims database, the Enterprise Data Warehouse, through which we obtained additional information on all adult (age > 18 years) patients admitted to 18 selected surgical services during this period.
Variables collected from the Enterprise Data Warehouse on each patient included age, sex, race, admission source (home/clinic, emergency department, transfer from another facility), admission type (elective, urgent, emergent), and payer (private, Medicare, other). The number of days at risk for an RRT activation was calculated as length of stay minus days in ICU for patients who did not have an RRT activation, and included days in the step-down unit. For patients who experienced an RRT activation, days at risk were defined as length of stay until the RRT activation. Each admission was treated as a separate subject, such that some patients had more than 1 admission. However, within each admission, we counted only the first RRT among the outcome events and in calculating days at risk. In order to quantify patient acuity, we used modifiers to Medicare-Severity Diagnosis Related Group (MS-DRG), known as severity of illness (SOI) and risk of mortality (ROM), each of which is scored on a 4-level scale: minor, moderate, major, extreme.10 These are used for billing purposes and are calculated by medical coders at the time of discharge routinely for each patient.
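The days-at-risk definition above can be sketched as follows; the function and field names are illustrative, not those of the authors' databases:

```python
def days_at_risk(length_of_stay, icu_days, rrt_day=None):
    """Days a patient was eligible for a first RRT activation.

    length_of_stay: total inpatient days for the admission.
    icu_days: days spent in the ICU (step-down days are NOT subtracted,
        since step-down days count as at-risk days).
    rrt_day: day of stay on which the first RRT activation occurred,
        or None if the admission had no RRT activation.
    """
    if rrt_day is not None:
        # For patients with an RRT activation, risk ends at the first call.
        return rrt_day
    # Otherwise, at-risk time is the non-ICU portion of the stay.
    return length_of_stay - icu_days
```

Because only the first activation per admission is counted, an admission contributes at most one event and its at-risk time is censored at that event.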
We compared these characteristics across patients who did and did not have an RRT activation, using bivariate statistics. Next, we constructed a series of patient-level logistic regression models to identify the contributors to the likelihood of RRT activation. The first model contained only days at risk; the next added all remaining variables except the measures of patient acuity; the full model included all covariates, including SOI and ROM. Each model was run with and without a categorical indicator variable representing each surgical service, entered as a fixed effect, in order to estimate the contribution of service-level factors to the discriminative ability of the model. The full model, excluding surgical service, was then used to compute the likelihood of RRT activation for each patient. This likelihood was then aggregated for each service, and observed-to-expected (O/E) RRT use was calculated. By accounting for each patient's length of stay, and aggregating the estimated likelihood of RRT activation for all patients admitted to a particular service, the expected number of RRTs for each service essentially accounts for the volume of the service. The expected number of RRTs for each service also accounts for acuity of patients on that service in a similar fashion, by including each patient's SOI and ROM in the model.
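The service-level O/E aggregation described above can be sketched as follows; the tuple layout and service labels are illustrative, not the authors' actual data structures:

```python
from collections import defaultdict

def observed_to_expected(patients):
    """Aggregate patient-level predicted probabilities into O/E per service.

    patients: iterable of (service, had_rrt, p_hat), where p_hat is the
    probability of RRT activation predicted by the full logistic model
    for that admission (structure is illustrative).
    """
    observed = defaultdict(int)
    expected = defaultdict(float)
    for service, had_rrt, p_hat in patients:
        observed[service] += int(had_rrt)  # count of first activations
        expected[service] += p_hat         # sum of model probabilities
    return {s: observed[s] / expected[s] for s in expected}
```

Summing predicted probabilities gives the expected event count for each service, so a service with many low-acuity, short-stay admissions is expected to generate few calls, and O/E near 1 indicates use in line with the model's prediction.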
The Hosmer-Lemeshow goodness-of-fit test was used to assess calibration of the models. The area under the receiver operating characteristic curve (AUC) was computed as a measure of discrimination.
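The AUC reported for each model can be understood, and computed on small data, through its rank-statistic (Mann-Whitney) equivalence: the probability that a randomly chosen event receives a higher predicted score than a randomly chosen non-event. This is a minimal illustrative sketch, not the implementation used in Stata or R:

```python
def auc(scores_pos, scores_neg):
    """AUC as P(score of a random event > score of a random non-event),
    counting ties as half. O(n*m) pairwise sketch for illustration only.
    """
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

Under this interpretation, the full model's AUC of 0.77 means a patient who had an RRT activation would receive a higher predicted probability than one who did not about 77% of the time.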
As a sensitivity analysis, we ran the full model as a Cox proportional hazards model, in order to determine whether removing days at risk as an independent variable and treating RRT as a time-dependent outcome would yield different results from the logistic regression model.
All analyses were performed with Stata version 11.2 (StataCorp), and R version 2.15.1 (R Foundation for Statistical Computing). A 2-sided p value of < 0.05 was considered statistically significant.
RESULTS
We identified 45,651 admissions during the study period, of which 728 resulted in 1 or more RRT activations (1.6%). There were 224,610 total inpatient days, and 3.2 RRT activations per 1,000 inpatient days. As one would expect, before adjustment for service volume and patient acuity, there was marked variability in the number of RRT activations per service (mean 40, median 23, range 2 to 176). The number of RRT activations per service is presented in the x-axis labels of Figure 1.
Figure 1.
Observed-to-expected (O:E) use of rapid response team across 18 surgical services. *p < 0.05 and **p < 0.001 for comparison of observed with expected. Adm, admissions; Urol, urologic surgery (Service #8).
The characteristics of patients who did and did not undergo an RRT activation are presented in Table 2. Those requiring RRT activation were older, more commonly admitted from home/clinic or as hospital transfers than through the emergency department, and more commonly were insured by Medicare. As expected, patients requiring RRT activation had a higher number of days at risk, higher SOI, and a higher ROM.
Table 2.
Characteristics of Patients Who Did and Did Not Undergo Rapid Response Team Activation
| Characteristic | All (45,651) | RRT (728) | No RRT (44,923) | p Value |
|---|---|---|---|---|
| Age, y, mean (SD) | 52.3 (17.2) | 57.8 (16.2) | 52.2 (17.2) | <0.001 |
| Sex, % | ||||
| Male | 54.7 | 52.7 | 54.8 | 0.28 |
| Female | 45.3 | 47.3 | 45.2 | |
| Race, %* | ||||
| White | 88.4 | 87.8 | 88.4 | 0.60 |
| Non-white | 11.6 | 12.3 | 11.6 | |
| Admission type, % | ||||
| Elective | 56.2 | 55.1 | 56.2 | 0.76 |
| Emergency | 36.8 | 37.4 | 36.8 | |
| Urgent | 7.0 | 7.5 | 7.0 | |
| Admission source, % | ||||
| Emergency Dept. | 13.3 | 6.6 | 13.4 | <0.001 |
| Home/Clinic | 75.6 | 78.3 | 75.6 | |
| Hospital transfer | 11.1 | 15.1 | 11.0 | |
| Insurance, % | ||||
| Private | 42.2 | 32.4 | 42.4 | <0.001 |
| Medicare | 33.9 | 51.0 | 33.6 | |
| Other | 23.9 | 16.6 | 24.0 | |
| Days at risk, median (IQR) | 2 (1–4) | 2.2 (0.8–5.6) | 2.0 (1–4) | 0.004 |
| Severity of illness, % | ||||
| Minor | 27.7 | 6.5 | 28.1 | <0.001 |
| Moderate | 38.4 | 22.1 | 38.7 | |
| Major | 23.2 | 32.7 | 23.0 | |
| Extreme | 10.7 | 38.7 | 10.2 | |
| Risk of mortality, % | ||||
| Minor | 58.3 | 23.6 | 58.9 | <0.001 |
| Moderate | 22.3 | 22.7 | 22.2 | |
| Major | 11.1 | 25.3 | 10.9 | |
| Extreme | 8.3 | 28.4 | 8.0 | |
Each admission was treated as a separate subject, such that some patients had more than 1 admission. Therefore, a particular patient may be represented more than once, although the characteristics of each admission may be unique. Within each admission, we counted only the first RRT among the outcome events and in calculating days at risk.
Data for race were missing for 3.6%; remaining variables were missing <0.05% or were complete.
RRT, rapid response team activation.
A series of models was then constructed to evaluate contributors to the likelihood of an RRT activation (Table 3). Accounting for days at risk alone (model 1A) had poor predictive accuracy, and the addition of a surgical service indicator variable to that model (1B) demonstrated a large and statistically significant improvement in the area under the curve (AUC 0.53 to 0.64, p < 0.001), suggesting that service volume accounts for only a small proportion of the variability in use across services. The next set of models (2A and 2B) accounted for days at risk, patient demographics, admission source, admission type, and payer. Note that the model is more accurate compared with days at risk alone, and improves significantly with the addition of the surgical service indicator variable (AUC 0.65 to 0.69, p < 0.001). The full models (3A and 3B) account for all variables, including ROM and SOI, which substantially improve the accuracy of the model (AUC 0.77). Adding the surgical service indicator variable (3B) improves the accuracy of the model only slightly, to 0.79, although that improvement is statistically significant (p < 0.001), suggesting that surgical service attributes still contribute to the variability in use of RRT activations, even when accounting for patient demographics, days at risk, and patient acuity.
Table 3.
Multivariable Models of Use of Rapid Response Team Activation
| Model no. | Independent variables | Indicator for service | AUC | LR test for improvement |
|---|---|---|---|---|
| 1A | Days at risk only | No | 0.53 | |
| 1B | Days at risk | Yes | 0.64 | <0.001 |
| 2A | Age, sex, race, admission source, admission type, insurance, time at risk | No | 0.65 | |
| 2B | Age, sex, race, admission source, admission type, insurance, time at risk | Yes | 0.69 | <0.001 |
| 3A* | SOI, ROM, age, sex, race, admission source, admission type, insurance, time at risk | No | 0.77 | |
| 3B | SOI, ROM, age, sex, race, admission source, admission type, insurance, time at risk | Yes | 0.79 | <0.001 |
Model 3A is presented in its entirety in Table 4, and is the basis for the observed-to-expected computation.
AUC, area under the receiver operator characteristics curve, a measure of model discrimination; LR test, likelihood ratio test, a measure of model improvement comparing each ‘B’ model with its ‘A’ counterpart; ROM, risk of mortality; SOI, severity of illness.
The full model, excluding the surgical service indicator variable (model 3A), is presented in Table 4. Higher use of the RRT was noted among women, patients admitted electively, patients transferred in or admitted from home or clinic, patients insured by Medicare, patients with more days at risk, and those with higher SOI and ROM. The analogous Cox model yielded similar results in terms of effect direction, magnitude, and significance (data not shown).
Table 4.
Full Multivariable Model of Utilization of Rapid Response Team Activation (Model 3A)
| Variable | Referent | Odds ratio | 95% CI | p Value |
|---|---|---|---|---|
| Age, y | | 1.00 | 1.00–1.01 | 0.12 |
| Sex, female | Male | 1.18 | 1.01–1.31 | 0.039 |
| Race, non-white | White | 1.08 | 0.86–1.38 | 0.50 |
| Admission type | ||||
| Emergency | Elective | 0.75 | 0.61–0.93 | 0.008 |
| Urgent | | 0.61 | 0.44–0.84 | 0.003 |
| Admission source | ||||
| Home/clinic | Emergency Dept | 2.60 | 1.86–3.64 | <0.001 |
| Hospital transfer | | 2.07 | 1.44–2.98 | <0.001 |
| Insurance | ||||
| Medicare | Private | 1.31 | 1.07–1.59 | 0.008 |
| Other | | 0.85 | 0.67–1.08 | 0.18 |
| Days at risk | | 1.03 | 1.02–1.04 | <0.001 |
| Severity of illness | ||||
| Moderate | Minor | 2.35 | 1.66–3.31 | <0.001 |
| Major | | 4.63 | 3.16–6.79 | <0.001 |
| Extreme | | 11.23 | 7.19–17.53 | <0.001 |
| Risk of mortality | ||||
| Moderate | Minor | 1.40 | 1.09–1.81 | 0.009 |
| Major | | 1.86 | 1.36–2.54 | <0.001 |
| Extreme | | 1.98 | 1.38–2.83 | <0.001 |
Fixed-effects logistic regression model, with patient as the unit of analysis. The indicator variable for surgical service is omitted so that observed/expected (O/E) use can be calculated for each service. Computation of O/E is based on the sum of estimated risk for rapid response team activation for each patient admitted to each service.
Observed-to-expected use of RRT was computed for each service based on the full model (3A), and is presented graphically in Figure 1. Observed-to-expected use of RRT ranged from 0.39 to 2.82. For 8 of 18 services, observed use differed from expected use by a statistically significant margin (Fig. 1). The precision of O/E use estimated by the model is a function of magnitude of the difference between observed and expected, as well as the number of admissions and events. Because of this methodology, among some of the smaller services, outlying O/E ratios are not statistically significantly different from 1, while some larger services have an O/E closer to 1, but significantly different from 1.
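The paper does not state which test was used to flag a service's observed count as significantly different from its expected count; an exact one-sided Poisson tail probability is one common choice, sketched here under that assumption:

```python
from math import exp

def poisson_upper_tail(observed, expected):
    """P(X >= observed) for X ~ Poisson(expected).

    One-sided p-value for a service's observed RRT count exceeding its
    model-expected count. Illustrative only: the paper does not specify
    the test used to compare observed with expected use.
    """
    # Accumulate P(X = i) for i < observed, then take the complement.
    term, cdf = exp(-expected), 0.0
    for i in range(observed):
        cdf += term
        term *= expected / (i + 1)
    return 1.0 - cdf
```

Consistent with the text, the same O/E ratio yields a smaller p-value when the expected count is large, which is why a modest O/E on a high-volume service can be significant while an extreme O/E on a small service is not.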
As an example of the degree of variability accounted for by the model, urologic surgery was the fourth highest user of RRTs, with 52 RRT activations over 3 years. As a proportion of admissions (1.2%), urologic surgery ranked 14th of 18 services; expressed as use per 1,000 inpatient days (4.2 calls), it ranked 10th of 18. Accounting for predicted use by the multivariable model, urologic surgery's O/E use was 1.0 (Service #8 in Figure 1). This demonstrates that, considering the use of RRTs by all surgical services at our hospital during this time period, urologic surgery's use was exactly what the model would predict, accounting for its patient volume and acuity.
DISCUSSION
In this study, we developed a model to benchmark the use of RRTs across surgical services at our institution. We demonstrated that variables available in the hospital's claims data set, including patient demographics, admission source, admission type, days-at-risk, SOI, and ROM are associated with RRT utilization. Controlling for these factors, which, in large part, represent patient volume, demographics, and acuity, accounted for much of the variability across services. Nonetheless, there is still significant interservice variability in RRT use, and in some services observed use differs significantly from expected. In this regard, our model helps quantify the amount of variability in use of RRTs across surgical services that is attributable to volume and acuity and the remaining variability, which may be due to specific patient characteristics, attributes of the surgical service, or other elements of patient care that vary between services.
Demonstrating the effectiveness of RRTs has presented a measurement challenge to hospital safety officers, administrators, and researchers. Much of the evidence for RRTs' effectiveness is based on nonrandomized and observational studies, some of which have shown a decline in cardiac arrest, and some of which have shown greater benefit with higher RRT “dose” (a greater number of calls per 1,000 admissions).1,4-6 One multicenter cluster randomized controlled trial showed a rise in use of emergency calls, but no benefit from RRTs in terms of incidence of cardiac arrest, unplanned ICU admissions, or unexpected death among hospitals randomized to initiation of a medical emergency team or RRT.7 Complicating the measurement of the benefit of RRTs is the difficulty of determining the optimal use of RRT resources: there are no evidence-based guidelines for RRT use, and distinguishing between appropriate and unnecessary RRT calls remains a challenge.
Nonetheless, the use of RRTs has disseminated rapidly across hospitals large and small, on the conviction that patient safety is enhanced by implementing a structured approach to deteriorating patients.8 There may be other benefits as well, such as increasing awareness of the need to recognize warning signs of deterioration and react swiftly with involvement of clinicians skilled in critical care management.8 In addition to systematizing care for these patients and drawing attention to them, RRTs provide a mechanism for evaluating and improving the quality of care for deteriorating patients.8 Finally, from an administrative perspective, improving the safety and efficiency of management of these patients could translate into a net savings for the hospital and improvement in metrics on reportable adverse outcomes.
Every quality improvement paradigm begins with identifying a problem and measuring current performance before initiating an intervention to improve performance. Computation of O/E adverse event rates has been widely used to benchmark performance on quality-of-care metrics, and public reporting of such measures has been used to promote quality improvement and patient safety.11-13 Perhaps the greatest value of such information is in providing comparative performance feedback against which future changes in use can be measured. The National Surgical Quality Improvement Program (NSQIP) was established in 1994 to measure performance on specific processes of care and outcomes, and to provide feedback to participating centers in order to foster quality improvement. The NSQIP computes O/E morbidity and mortality and provides this feedback to participants, commending sites with low O/E metrics and sending letters of “high outlier status,” with varying grades of concern, to sites with high O/E metrics. In the 4 years after this system was initiated, 30-day mortality decreased by 9.6% and postsurgical complications declined by 30%; although there may be many reasons for that decline, the NSQIP comparative performance feedback system is thought to have played a role.
This study has several important strengths and limitations. First, the model we developed is static: the weight assigned to each variable in the model is based on 3 years of historical data, and it may be necessary to recalibrate and update the model in the future. Second, variability in O/E RRT use could signify differences in how well the model predicts use on specific services, rather than actual over- or underuse. Third, because the model is based on variables readily available at the time of discharge, it is an excellent resource from an administrative and quality improvement perspective; however, it cannot be used to predict which patients will need RRT calls. We have ongoing efforts to link the RRT database, the administrative database, and a clinical database, with individual patient attributes, such as comorbidities, operating room time, estimated blood loss, and other intraoperative variables, in order to develop a predictive model that could help target patients for pre-emptive intervention. Fourth, the use of a logistic regression model, rather than a Cox proportional hazards model, could be challenged because an RRT call may be viewed as a time-dependent outcome. We selected the former, with days at risk as an independent variable, because the logistic model facilitates computation of O/E, a familiar reporting format for quality and safety data; our sensitivity analysis demonstrated that the 2 modeling techniques yielded similar results. Finally, and perhaps obviously, this is an institution-specific model. An O/E of 1 does not mean optimal use. Optimal use remains elusive because measures for distinguishing appropriate RRT calls from unnecessary calls have yet to be developed. An O/E of 1 simply means that, given the overall use of RRT calls among surgical services at our hospital, the service demonstrated a level of use that would be expected on the basis of its patient mix and number of inpatient days at risk.
CONCLUSIONS
In summary, we developed a tool for quantifying service-level variability in the use of an important institutional medical resource. The O/E computation provides a basis for comparative feedback among department heads and administrators. Furthermore, it provides a starting point for further investigation into the reasons for variability among services, and a benchmark for quality and process improvement efforts in the use of RRTs to manage deteriorating surgical patients.
Acknowledgement
We acknowledge the substantial contributions of Michael Marino and Henry Domenico from Vanderbilt's Center for Clinical Improvement, and of the nurses, nurse practitioners, respiratory therapists, and physicians who contribute to the efforts of the Rapid Response Team.
Funding Support: Vanderbilt Institute for Clinical and Translational Research grant support (UL1 TR000445 from NCATS/NIH), Anesthesia Patient Safety Foundation.
Abbreviations and Acronyms
- AUC
area under the curve
- O/E
observed-to-expected
- ROM
risk of mortality
- RRT
rapid response team
- SOI
severity of illness
Footnotes
Disclosure Information: Nothing to disclose.
Some of the data were presented at the American Urological Association Annual Meeting, San Diego, CA, May 2013.
Author Contributions
Study conception and design: Barocas, Penson, Weavind, Dmochowski
Acquisition of data: Kulahalli, Ehrenfeld, Kapu, You
Analysis and interpretation of data: Barocas, Ehrenfeld, Penson, You, Weavind, Dmochowski
Drafting of manuscript: Barocas, Kulahalli, You
Critical revision: Ehrenfeld, Kapu, Penson, Weavind, Dmochowski
REFERENCES
- 1. Jones DA, DeVita MA, Bellomo R. Rapid-response teams. N Engl J Med. 2011;365:139–146. doi:10.1056/NEJMra0910926.
- 2. Devita MA, Bellomo R, Hillman K, et al. Findings of the first consensus conference on medical emergency teams. Crit Care Med. 2006;34:2463–2478. doi:10.1097/01.CCM.0000235743.38172.6E.
- 3. Hillman K, Parr M, Flabouris A, et al. Redefining in-hospital resuscitation: The concept of the medical emergency team. Resuscitation. 2001;48:105–110. doi:10.1016/s0300-9572(00)00334-8.
- 4. Bellomo R, Goldsmith D, Uchino S, et al. A prospective before-and-after trial of a medical emergency team. Med J Aust. 2003;179:283–287. doi:10.5694/j.1326-5377.2003.tb05548.x.
- 5. Sebat F, Musthafa AA, Johnson D, et al. Effect of a rapid response system for patients in shock on time to treatment and mortality during 5 years. Crit Care Med. 2007;35:2568–2575. doi:10.1097/01.CCM.0000287593.54658.89.
- 6. Jones D, Bellomo R, DeVita MA. Effectiveness of the medical emergency team: The importance of dose. Crit Care. 2009;13:313. doi:10.1186/cc7996.
- 7. Hillman K, Chen J, Cretikos M, et al. Introduction of the medical emergency team (MET) system: A cluster-randomised controlled trial. Lancet. 2005;365:2091–2097. doi:10.1016/S0140-6736(05)66733-5.
- 8. Cretikos M, Parr M, Hillman K, et al. Guidelines for the uniform reporting of data for medical emergency teams. Resuscitation. 2006;68:11–25. doi:10.1016/j.resuscitation.2005.06.009.
- 9. Harris PA, Taylor R, Thielke R, et al. Research electronic data capture (REDCap)–a metadata-driven methodology and workflow process for providing translational research informatics support. J Biomed Inform. 2009;42:377–381. doi:10.1016/j.jbi.2008.08.010.
- 10. Mortality risk adjustment methodology for university health system's clinical database [homepage on the Internet]. 2009. Available from: http://www.ahrq.gov/legacy/qual/mortality/Meurer.htm. Accessed September 26, 2013.
- 11. Khuri SF, Daley J, Henderson W, et al. The Department of Veterans Affairs' NSQIP: The first national, validated, outcome-based, risk-adjusted, and peer-controlled program for the measurement and enhancement of the quality of surgical care. National VA Surgical Quality Improvement Program. Ann Surg. 1998;228:491–507. doi:10.1097/00000658-199810000-00006.
- 12. Klugman R, Allen L, Benjamin EM, et al. Mortality rates as a measure of quality and safety, “caveat emptor.” Am J Med Qual. 2010;25:197–201. doi:10.1177/1062860609357467.
- 13. Best WR, Cowper DC. The ratio of observed-to-expected mortality as a quality of care indicator in non-surgical VA patients. Med Care. 1994;32:390–400. doi:10.1097/00005650-199404000-00007.

