Abstract
This paper reviews the field of outcomes measurement in anesthesia and surgery, emphasizing those outcomes that may be influenced by perioperative care. Data sources for outcomes measurement are described, and the concept of risk adjustment is introduced. The basic mechanics of outcomes measurement and its pitfalls are explained. Finally, specific perioperative outcomes - mortality, readmission and composite outcomes - are described and their limitations are considered.
Introduction
Anesthesiologists and surgeons have studied patient outcomes since the late 19th century. Surgeons Ernest Codman and Harvey Cushing developed a precursor to the modern anesthesia record in 1894, after an intraoperative patient death piqued their interest (1, 2). In 1952, anesthesiologist Virginia Apgar devised her well-known score to better predict outcomes for newborn infants (3). Over the last forty years, demonstrations of variation in care, such as the Dartmouth Atlas studies (4), have driven outcomes measurement to become more rigorous.
Healthcare providers, including anesthesiologists, are increasingly being held accountable for the care they deliver. Part of this accountability includes determining whether providers are able to secure the outcomes they set out to achieve. Outcomes measurement, then, has become an important means of determining the quality of care rendered. Healthcare providers are expected to adhere to evidence-based care guidelines (where they exist) and demonstrate that their patients have experienced optimal outcomes as a result.
What is an outcome? The term usually applies to clinically relevant endpoints of care, such as mortality or major morbidity. Outcomes may be objective (e.g. mortality, infection) or subjective (e.g. pain, satisfaction), and may be reported by providers or patients. Classically, the most studied outcomes are those with unambiguous definitions that can be reported by providers. However, there has been an increasing emphasis placed on so-called patient-reported outcomes that may not be apparent to providers, such as symptoms, functioning and quality of life (5, 6).
Why should anesthesiologists care about outcomes measurement? Notwithstanding the centrality of outcomes to the patient care mission, outcomes are one indicator of healthcare quality. As such, outcomes are used to rank providers, hospitals and healthcare systems. These rankings inform the decisions of payors, policymakers, and perhaps patients (7, 8). Rankings may also influence patients' selection of physicians for high-risk procedures such as cardiac surgery (9, 10). Medicare and private payor reimbursements have also been linked to quality metrics, some of which measure healthcare outcomes (11).
Outcomes of potential interest to anesthesiologists may be viewed through the lens of two disparate perspectives. In the first, anesthesiologists are interested in the outcomes over which they have the most influence - perioperative aspiration, awareness under anesthesia, post-operative nausea and vomiting, and post-operative pain control are examples (12). It makes sense for anesthesiologists to maintain “ownership” of these outcomes, critically examining their practice to determine whether they are delivering good care, and adjusting practice to improve the quality of care.
Another worldview, however, places the anesthesiologist in a perioperative team, accountable for all the outcomes that may impact a patient undergoing surgery. In this scheme, the outcomes of interest to the anesthesiologist expand to include mortality and major morbidity such as surgical site infection, hospital acquired infections, and acute renal failure. For these outcomes, responsibility is difficult to assign to either the surgeon or the anesthesiologist in most cases, because the care of both may contribute to the outcomes of interest. The accountability is thus shared by the perioperative team. Such team accountability enables comparison and ranking of hospitals and health systems, but complicates rankings of individual providers.
In this paper, we consider the field of outcomes measurement in anesthesia and surgery, emphasizing those outcomes that may be influenced by perioperative care. We first describe the data sources for outcomes measurement. We introduce the concept of risk adjustment, a basic understanding of which is crucial to understanding outcomes measurement. Finally, we describe three perioperative outcomes and consider limitations to their measurement and reporting.
Data sources
There are usually two types of data used in outcomes measurement: administrative data and clinical data. Administrative data are generally collected for billing or regulatory oversight. The focus of administrative data collection is to succinctly convey patient data to justify charges and to allow facility- and population-level tracking of healthcare delivery. As such, clinical detail is often sparse. The prototypical hospital coding form includes basic demographic information about patients (e.g. age, race, gender, ZIP code, insured status), information about the reason for hospitalization, diagnoses made during the hospitalization, and procedures performed during the hospitalization (13). There is sometimes information about pre-existing comorbidities (captured with a present-on-admission flag (14)) that helps contextualize the diagnoses made and procedures performed for a particular patient.
A limited number of outcomes are available in most administrative datasets. Mortality is one of the more consistently recorded outcomes in administrative data (13), as is discharge location. In-hospital complications are also captured, but facility-level differences in coding and limitations on the number of complications that can be listed may limit the reliability of these outcomes. For data lacking present-on-admission indicators, it may be difficult to determine whether a given diagnosis should be considered a pre-existing condition or a complication of hospitalization.
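To illustrate what the present-on-admission flag makes possible, consider the minimal sketch below, which separates complications from pre-existing conditions in a toy administrative extract. The table layout and column names are hypothetical; real administrative formats vary by payor and jurisdiction.

```python
import pandas as pd

# Toy administrative extract: one row per coded diagnosis.
# Column names are illustrative, not a real payor format.
diagnoses = pd.DataFrame({
    "encounter_id": [101, 101, 102, 102],
    "icd_code": ["I10", "J95.851", "N17.9", "E11.9"],
    "present_on_admission": ["Y", "N", "N", "Y"],
})

# With a POA flag, diagnoses coded "N" can be treated as in-hospital
# complications; without it, pre-existing disease and complications
# are indistinguishable in the coded data.
complications = diagnoses[diagnoses["present_on_admission"] == "N"]
print(complications.groupby("encounter_id").size())
```

Without the present_on_admission column, the acute renal failure code (N17.9) in encounter 102 could reflect either a condition present at admission or one that developed during the hospitalization.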
Administrative data analysis has formed the basis of a number of important studies in health services research, including the creation of “report cards” comparing the performance of New York State cardiac surgeons (15, 16) (Box 1). From the perspective of a researcher or hospital administrator, using administrative data is appealing because the data are (relatively) easy to procure, available for patients from a broad geographic distribution, and enable study sample sizes in the thousands to millions, which confers statistical power. From the perspective of a clinician whose care is rated, or from the perspective of a patient or payor trying to compare providers, administrative data may be problematic because they are subject to the documentation and coding practices of individual institutions, which may or may not accurately reflect clinical care (13). Despite this limitation, administrative data are likely to remain central to the conduct of outcomes research because these data are plentiful and are already collected and aggregated for other reasons, which makes them a convenient data source.
Box 1. Case study in administrative data analysis: New York State cardiac surgery report cards.
The problem
In the late 1980s, a cardiac advisory committee to the New York State Department of Health was charged with evaluating the quality of cardiac surgical care in New York (38).
The approach
In 1989, New York State began prospectively collecting CABG procedure data, including patient risk factors, mortality and complications, in its Cardiac Surgery Reporting System (38).
The reporting system collected 42 patient risk factors, including (38):
- Demographics: age, race, gender, payor
- Cardiac risk factors: hypertension requiring treatment, ejection fraction, previous myocardial infarction, unstable angina
- Other risk factors: morbid obesity, chronic obstructive pulmonary disease
Multivariable regression models predicted patient mortality risk based on pre-existing risk factors; this mortality risk was used to generate hospital- and surgeon-specific risk-adjusted mortality rates (16).
In 1991, the first hospital rankings (“report cards”) were released (9).
In 1991, surgeon-specific information was released to comply with a Freedom of Information request by Newsday (16). In 1992, surgeon-specific risk-adjusted performance data were released (9, 16).
Results and implications
Hospital-level and surgeon-level variability in CABG outcomes was demonstrated with collection of administrative data about patients and procedures (16, 39).
Outcomes of CABG surgery in New York State improved after reporting was instituted. Reasons for the apparent improvement include:
- The overall risk level of patients undergoing CABG after reporting started was lower than that of CABG patients prior to reporting (i.e. more low-risk patients were undergoing CABG) (9).
- Patients were better matched to hospitals: sicker patients received care at higher-quality hospitals (9).
- Better-performing hospitals and surgeons increased market share (40).
- Some high-risk patients underwent CABG outside of New York State (41).
- Some cardiac surgeons retired or stopped performing cardiac surgery (42).
Clinical databases, on the other hand, usually contain more information about a patient’s hospital course, physiologic status and laboratory derangements than do administrative datasets. Prospectively designed data registries also often include patient-reported outcomes and have precise definitions of clinical outcomes that minimize coding differences between facilities. The additional detail available in clinical databases may allow tracking more outcomes than can be tracked with administrative data, and may allow for more robust risk adjustment. However, clinical data collection is usually time-consuming and expensive, limiting the number of facilities willing or able to participate in data collection (17). Additionally, there are no agreed-upon standards for clinical data collection and there are few large registries of perioperative clinical data - the National Surgical Quality Improvement Program (17, 18) and Society of Thoracic Surgeons cardiac surgery database (19) are notable exceptions - so the ability to sample from a broad geographic distribution is usually weaker than it is with administrative data.
Risk (severity) adjustment
Some accounting of patient severity is necessary for making meaningful comparisons between providers, or for comparing the performance of particular providers over time. Consider a hypothetical situation in which Hospital A and Hospital B have identical mortality rates for patients undergoing cardiac surgery. Hospital A treats relatively sick patients, and Hospital B treats relatively healthy patients. If the different levels of patient risk are not considered, Hospitals A and B will appear to render the same quality of care, when in reality, Hospital A probably has higher quality than Hospital B.
Risk adjustment (also known as severity adjustment) is the process of statistically accounting for differences in patient case mix that influence health care outcomes. In a multivariable regression model, patient risk factors can be added to control for their contribution to the outcome of interest. Once this adjustment is performed, residual differences in outcomes are thought to be related to provider quality.
How are risk adjustment models developed? The earliest risk adjustment approaches relied on clinical experience to differentiate riskier patients from less risky patients (20). The ASA physical status classification, while not a true risk score per se, is an example of a patient descriptor based on clinical judgment (21, 22). Contemporary risk adjustment approaches use multivariate regression to derive estimates of an individual patient’s likelihood of experiencing a particular outcome using available data about risk factors. In a demonstration of this approach, Lee et al (23) used multivariable logistic regression to narrow a list of 30 clinical characteristics to six risk factors, which formed the basis of the Revised Cardiac Risk Index.
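To make the regression-based approach concrete, the sketch below fits a logistic regression to simulated patient-level data and extracts each patient's predicted probability of death. The risk factors, coefficients, and data are invented for illustration; this is a schematic of the method, not the Revised Cardiac Risk Index or any published model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Hypothetical risk factors (invented for illustration).
age = rng.normal(65, 10, n)
creatinine = rng.normal(1.1, 0.4, n)
diabetes = rng.integers(0, 2, n)
X = np.column_stack([age, creatinine, diabetes])

# Simulate an outcome whose true risk depends on the factors above.
logit = -9 + 0.08 * age + 0.9 * creatinine + 0.6 * diabetes
died = rng.random(n) < 1 / (1 + np.exp(-logit))

# Fit the risk model; predict_proba returns each patient's estimated
# probability of the outcome, which is the input to risk adjustment.
model = LogisticRegression().fit(X, died)
predicted_risk = model.predict_proba(X)[:, 1]
print(f"Mean predicted risk: {predicted_risk.mean():.3f}")
```

In practice, model development also involves variable selection, calibration checks, and validation in a separate cohort, as in the derivation and prospective validation of the Revised Cardiac Risk Index (23).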
Even though it is possible to adjust for measured risk factors, the presence of unobserved patient attributes means that risk adjustment is imperfect. Unobserved patient attributes include characteristics that are knowable but unmeasured (e.g. functional status, which is absent from most administrative datasets), as well as characteristics that are difficult to specify and measure, such as medication adherence or social support. When these unmeasured factors greatly influence an outcome of interest, statistical risk adjustment becomes more challenging, although there are statistical approaches, such as instrumental variable analysis (24), that address this problem.
An important potential shortcoming of risk adjustment is that there are several ways to predict outcomes of interest. Different models may be used in risk adjustment, and no one model performs best across all outcomes. For example, the risk adjustment model best suited to predicting mortality in colorectal surgical patients may not perform well when used to consider myocardial infarction-related mortality. Iezzoni illustrated the performance differences among five risk adjustment models using a large mixed administrative/clinical data set (20). For mortality after pneumonia and stroke, one model (MedisGroups) performed best, while other models were better predictors of mortality in acute myocardial infarction (Disease Staging) or coronary artery bypass grafting (All Patient Refined Diagnosis Related Groups) (20).
When considering outcomes measurement, it is important to have an appreciation of the risk adjustment technique employed. Some risk adjusters were designed for use in specific clinical settings, so their application outside the original setting may compromise the quality of risk adjustment. For instance, APACHE scores were designed to predict mortality in critically ill patients (25), but have been used to risk adjust for other outcomes, such as ICU length of stay (26).
Applying risk adjustment and comparing outcomes
The increasing use of outcomes comparisons for payment and accreditation purposes draws attention to the methodologies used to accomplish these comparisons. Two methodological issues are commonly encountered when comparing providers: indirect standardization and the small numbers problem.
Indirect standardization
Risk adjustment models are used to determine the expected rate of an outcome of interest for a provider or a population. This expected rate is compared in some way to the actual (observed) rate to make inferences about care quality. Comparison of observed and expected rates of outcomes is usually accomplished with an approach known as indirect standardization (27).
Indirect standardization involves using population-level data to develop either risk strata or regression models to estimate risk of a particular outcome. The risk prediction derived from the larger population is then applied to a hospital or provider’s patient mix, allowing for the calculation of an expected outcome rate. The difference between the number of observed and expected outcomes (O-E) or the ratio of observed to expected outcomes (O/E) is taken to be an indicator of quality. The descriptor indirect is used because risk predictions are not based solely on the patients cared for by a particular provider.
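A minimal sketch of the O/E calculation, assuming each patient already carries a predicted risk from a population-level model (as in the earlier sketch); the hospitals, outcomes, and risks below are invented:

```python
import pandas as pd

# Hypothetical patient-level data: hospital, observed outcome, and the
# predicted risk produced by a model fit on the whole population.
df = pd.DataFrame({
    "hospital": ["A"] * 4 + ["B"] * 4,
    "died": [1, 0, 0, 1, 0, 0, 1, 0],
    "predicted_risk": [0.30, 0.20, 0.25, 0.35, 0.05, 0.10, 0.08, 0.07],
})

summary = df.groupby("hospital").agg(
    observed=("died", "sum"),
    expected=("predicted_risk", "sum"),
)

# O/E > 1 suggests worse-than-expected outcomes; O/E < 1, better.
summary["o_to_e"] = summary["observed"] / summary["expected"]

# A common presentation multiplies O/E by the population rate to yield
# a risk-adjusted rate on a familiar scale.
summary["risk_adjusted_rate"] = summary["o_to_e"] * df["died"].mean()
print(summary)
```

The expected count is simply the sum of the population model's predicted risks over the provider's own patients, which is why the standardization is "indirect": the risk estimates come from the population, not from the provider's own outcome experience.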
The alternative to indirect standardization is direct standardization (27), in which risk calculations are based on a particular provider’s case mix. The provider-specific predictions are then applied to a prototypical or reference population, which should indicate how the provider would perform if treating a “typical” case mix. There are at least two problems with direct standardization that explain the more common use of the indirect method. First, direct standardization is problematic when providers do not have enough cases to produce reliable estimates of outcome rates within each category of risk. For example, if a hospital treats few elderly patients, it is difficult to predict what an elderly patient’s outcome might be in that hospital. Second, direct standardization may obscure very good or very poor performance with particular patient subsets. If a specialty hospital only provides high-quality orthopedic care, for instance, it may appear to be an average or substandard hospital when its risk predictions are applied to a more general population.
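For contrast, the following sketch (again with invented data) applies direct standardization: a risk model is fit on a single provider's cases and then applied to a reference population. The comments flag the extrapolation problem described above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# One provider's own cases: a relatively young case mix (invented data).
age_own = rng.normal(55, 8, 1000)
true_risk = 1 / (1 + np.exp(-(-8 + 0.1 * age_own)))
died_own = rng.random(1000) < true_risk

# Fit the model on the provider's own patients (the "direct" step).
model = LogisticRegression().fit(age_own.reshape(-1, 1), died_own)

# Apply it to a reference population with a broader, older age mix.
age_ref = rng.normal(65, 12, 10_000)
standardized_rate = model.predict_proba(age_ref.reshape(-1, 1))[:, 1].mean()
print(f"Directly standardized mortality rate: {standardized_rate:.2%}")

# Pitfall: this provider saw few patients over 75, so the model's
# predictions for the reference population's elderly patients are
# extrapolations supported by little data.
```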
The small numbers problem
Outcomes measurement and risk adjustment are statistical procedures that rely on large sample sizes to produce risk estimates and outcome interpretations that are trustworthy. With high-volume providers, observed outcome rates are less likely to be skewed by outliers than are the rates of lower volume providers (28). Consider two hospitals: Hospital C performs 400 orthopedic surgeries each year and had 2 patient deaths in year 1. Hospital D performs 10 orthopedic surgeries each year and had no patient deaths in year 1. The next year, each hospital has one additional patient death. Hospital C’s mortality rate has increased from 0.5% to 0.75%, and Hospital D’s mortality rate has increased from 0% to 10%. Has Hospital D become worse than Hospital C? It is difficult to say - Hospital D’s comparatively low volume means that its mortality rate is more sensitive to random events, and it is more difficult to use its mortality rate as an indicator of quality.
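A brief simulation makes the volatility explicit. Assuming, hypothetically, that both hospitals share the same true mortality rate, the low-volume hospital's observed annual rate still swings widely by chance:

```python
import numpy as np

rng = np.random.default_rng(2)
true_rate = 0.006   # assumed shared "true" mortality rate
years = 10_000      # number of simulated years

# Simulate annual death counts at a 400-case and a 10-case hospital.
rates_c = rng.binomial(400, true_rate, years) / 400  # "Hospital C"
rates_d = rng.binomial(10, true_rate, years) / 10    # "Hospital D"

print(f"Hospital C: mean {rates_c.mean():.2%}, sd {rates_c.std():.2%}")
print(f"Hospital D: mean {rates_d.mean():.2%}, sd {rates_d.std():.2%}")

# Even with identical underlying quality, Hospital D posts a mortality
# rate of 10% or more in a meaningful fraction of years.
print(f"Years with Hospital D rate >= 10%: {(rates_d >= 0.10).mean():.1%}")
```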
Statistical modeling may be used to decrease the volatility of risk-adjusted outcome rates. One approach is called multilevel modeling, which is also known as hierarchical or random effects modeling (27). Multilevel modeling accomplishes two goals: it accounts for the clustering of patients within provider groups, and it uses data from the entire dataset to adjust individual providers’ outcome rates (27). In the previous example, if the mean population mortality rate were 0.6%, Hospital D’s mortality rate of 10% in year 2 would be adjusted, or “shrunk” toward the population mean. This approach, sometimes called a Bayesian approach, is used in Medicare’s Hospital Compare model (29). Although the Bayesian method has many advocates, some criticize this method because it can improve the apparent performance of low-volume, low-quality providers (30).
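The sketch below conveys the shrinkage intuition with a deliberately simplified beta-binomial style adjustment. The actual Hospital Compare methodology is a full hierarchical regression (29); the prior strength here is an assumed tuning constant chosen for illustration, not an estimated quantity.

```python
def shrunk_rate(deaths, cases, population_rate, prior_strength=100):
    """Shrink an observed rate toward the population mean. The prior
    contributes `prior_strength` pseudo-cases at the population rate,
    so high-volume hospitals keep roughly their raw rate while
    low-volume hospitals are pulled toward the mean."""
    return (deaths + prior_strength * population_rate) / (cases + prior_strength)

population_rate = 0.006  # the 0.6% population mortality rate from the text

# Year-2 figures from the example: Hospital C, 3 deaths in 400 cases;
# Hospital D, 1 death in 10 cases.
print(f"Hospital C: raw {3 / 400:.2%}, shrunk {shrunk_rate(3, 400, population_rate):.2%}")
print(f"Hospital D: raw {1 / 10:.2%}, shrunk {shrunk_rate(1, 10, population_rate):.2%}")
```

Hospital D's 10% observed rate is pulled nearly all the way back toward the population mean, which illustrates both the appeal of the method and the criticism above: a genuinely poor low-volume provider is pulled toward average in exactly the same way.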
Measuring specific outcomes
Multiple metrics may be used to infer healthcare quality, ranging from measures of process and structural capability to measures of patient outcomes. We focus here on patient outcomes, but it is important to understand how outcomes relate to other commonly used measures. Outcome measures are appealing because they are easy to understand and have meaning to providers, payors and patients. They are, however, challenging to study because it is difficult to establish clearly causal links between the care rendered and the outcome achieved. Subjective outcomes such as pain, satisfaction and disability present additional measurement problems because patient characteristics affect their perception and reporting, and because they may change over time. In perioperative medicine, there are a number of well-studied outcomes (Box 2). The most commonly studied outcomes are those for which accountability is generally shared; these include mortality, readmission, and composite measures of complications. We explore each of these three outcomes in turn, explaining how they are measured and describing strengths, weaknesses, and considerations in their use.
Box 2. Selected perioperative outcomes.
Mortality
- Intraoperative
- In-hospital
- 7-day, 30-day, 60-day, 1-year

Readmission
- ICU readmission
- 7-day, 30-day

Complications
- Acute renal failure
- Unplanned intubation
- Postoperative pneumonia
- Stroke
- Sepsis
- Wound infection
- Cardiac arrest
- Nausea/vomiting

Length of stay
Cost
Patient satisfaction
Functional status
Quality of life
Mortality
Mortality is an appealing outcome to measure because it is binary and unequivocal. Despite its straightforward definition, however, there are still controversies with respect to how mortality is reported. First, which deaths should count? Patients admitted for elective outpatient procedures are not expected to die, so their deaths arguably belong in a mortality metric. On the other hand, when patients are admitted for palliation or hospice care, an argument could be made to exclude their deaths from a mortality quality indicator.
Another, more contentious issue to consider with respect to mortality is when death is counted. For procedure-related mortality, intraoperative death, intensive care unit mortality, and in-hospital mortality have all been considered by different investigators. Given differences in length of stay between facilities, death is increasingly measured at a fixed interval after a procedure, such as 7 days, 30 days, 6 months, or 1 year. Examining fixed intervals may eliminate mortality differences related to length of stay (31), but introduces the possibility of penalizing hospitals for post-discharge care, over which they have limited influence. Despite this limitation, fixed-interval mortality, specifically 30-day mortality, has become a common endpoint in both medical and surgical outcomes studies (32).
A further problem with post-discharge mortality is that multiple data sources must usually be linked to determine whether patients died after leaving the hospital. Whereas a hospital might have reasonable trust in its in-hospital mortality figures, linking patient data to external sources such as Medicare files or state or Social Security death databases introduces possible error or bias into the measurement and reporting of mortality.
Readmissions
Once hospitalized patients have recovered from a procedure or an acute illness, they are discharged home or to a facility to receive ongoing, less acute care. Readmission to the acute care hospital for a reason related to the first hospitalization is considered by some to be an indicator of inappropriate discharge, poor discharge planning, or insufficient care coordination (33, 34). Hospital readmission has therefore become a quality indicator tracked by payors and providers, and has become the target of numerous quality improvement initiatives (33). As with mortality, readmission is often tied to a fixed time interval, such as 7 or 30 days (35).
Although readmission is easily defined, its measurement and reporting may be problematic for several reasons. First, readmission is closely related to length of stay. The desire to release patients from the hospital sooner to decrease length of stay (and the risk of hospital-acquired infections) may inadvertently increase readmission risk if patients are discharged prematurely. Second, it is unclear whether readmission truly constitutes poor-quality care; for certain conditions, readmission and inpatient optimization may be preferable to protracted outpatient management. Third, the optimal interval over which to measure readmissions is unclear.
Composite morbidity measures
When outcomes are rare (e.g. mortality after low-risk surgical procedures), morbidity and mortality may be combined to create a composite measure (36, 37). This approach confers statistical power to detect differences in outcomes between providers, but may obscure important differences in the individual conditions used to compile the measure. Consider a hypothetical composite measure that includes both mortality and acute renal failure. Provider A with a high perioperative mortality rate but low incidence of renal failure might appear to be the same as or better than Provider B, with low mortality rate but high rate of renal failure.
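A small sketch with invented rates shows how the choice of component weights, an unavoidable value judgment in composite construction, determines whether such differences are visible:

```python
import pandas as pd

# Hypothetical component rates for the two providers described above.
rates = pd.DataFrame(
    {"mortality": [0.04, 0.01], "acute_renal_failure": [0.02, 0.05]},
    index=["Provider A", "Provider B"],
)

# An equal-weighted composite treats a death and an episode of renal
# failure as interchangeable, so the two providers appear identical.
rates["composite_equal"] = rates.mean(axis=1)

# An explicitly weighted composite (weights assumed here) separates them.
weights = {"mortality": 0.8, "acute_renal_failure": 0.2}
rates["composite_weighted"] = sum(
    rates[component] * weight for component, weight in weights.items()
)
print(rates)
```

Under equal weights, both providers score 3%, while the mortality-heavy weighting identifies Provider A (3.6%) as worse than Provider B (1.8%).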
Conclusion
Outcomes measurement in healthcare is one tool that enables assessment and tracking of healthcare quality. Outcomes may be used to compare providers to each other, and they may be used to evaluate changes in quality over time at the provider, health system or population level. Risk adjustment is an important companion process that helps to contextualize the outcomes observed and provide some assurance that differences observed are related to healthcare quality. Ongoing challenges to the accurate measurement of outcomes and risk include limitations in data availability and the assessment of providers with low volumes, although advanced statistical techniques and greater computing power continue to lessen the impact of these limitations.
Anesthesiologists may wonder whether it is fair to be judged, rated or ranked based on the outcomes experienced by patients under their care. Although this may be a worthwhile topic of debate, it is clear that external pressures will require the measurement and reporting of patient outcomes. By understanding outcomes measurement and risk adjustment, anesthesiologists can position themselves to contribute to quality improvement methodology, improving the likelihood that those metrics that are collected reflect care quality as closely as possible. Also, when given a choice of metrics to measure and report, this understanding will help anesthesiologists select the ones that are most meaningful to them, their perioperative teams, and the patients for whom they care.
Footnotes
Conflicts of interest and sources of funding: Dr. Lane-Fall has received funding from the National Institutes of Health in the form of a training grant (Grant number 1T32HL098054-03, PI: David Asch MD MBA). She declares no conflicts of interest. Dr. Neuman has received funding from the Foundation for Anesthesia Education and Research and the National Institutes of Health (Grant number 1K08AG043548-01). He declares no conflicts of interest.
Contributor Information
Meghan B. Lane-Fall, Email: meghan.lane-fall@uphs.upenn.edu, Department of Anesthesiology and Critical Care, Perelman School of Medicine, University of Pennsylvania, 3400 Spruce Street, 680 Dulles, Philadelphia, PA 19104, Telephone: 215-573-7399; Facsimile: 215-662-7106.
Mark D. Neuman, Email: neumanm@mail.med.upenn.edu, Department of Anesthesiology and Critical Care, Perelman School of Medicine, University of Pennsylvania.
References
1. Iezzoni LI. Reasons for risk adjustment. In: Iezzoni LI, editor. Risk adjustment for measuring health care outcomes. 4th ed. Chicago, IL: Health Administration Press; 2013. pp. 1-14.
2. Wright AJ. Early use of the Cushing-Codman anesthesia record. Anesthesiology. 1987;66(1):92. doi: 10.1097/00000542-198701000-00022.
3. Finster M, Wood M. The Apgar score has survived the test of time. Anesthesiology. 2005;102(4):855-857. doi: 10.1097/00000542-200504000-00022.
4. Wennberg J, Gittelsohn A. Small area variations in health care delivery: a population based health information system can guide planning and regulatory decision making. Science. 1973;182(4117):1102-1108. doi: 10.1126/science.182.4117.1102.
5. Fairclough DL. Patient reported outcomes as endpoints in medical research. Stat Methods Med Res. 2004;13(2):115-138. doi: 10.1191/0962280204sm357ra.
6. Ahmed S, Berzon RA, Revicki DA, et al. The use of patient-reported outcomes (PRO) within comparative effectiveness research: implications for clinical practice and health care policy. Med Care. 2012;50(12):1060-1070. doi: 10.1097/MLR.0b013e318268aaff.
7. Mitka M. Ratings game: lists of "top" physicians, hospitals has unclear impact on public. JAMA. 2009;302(15):1636-1639. doi: 10.1001/jama.2009.1477.
8. Pope DG. Reacting to rankings: evidence from "America's Best Hospitals". J Health Econ. 2009;28(6):1154-1165. doi: 10.1016/j.jhealeco.2009.08.006.
9. Dranove D, Kessler D, McClellan M, et al. Is more information better? The effects of report cards on health care providers. J Polit Econ. 2003;111(3):555-588.
10. Werner RM, Asch DA, Polsky D. Racial profiling: the unintended consequences of coronary artery bypass graft report cards. Circulation. 2005;111(10):1257-1263. doi: 10.1161/01.CIR.0000157729.59754.09.
11. Werner RM, Kolstad JT, Stuart EA, et al. The effect of pay-for-performance in hospitals: lessons for quality improvement. Health Aff (Millwood). 2011;30(4):690-698. doi: 10.1377/hlthaff.2010.1277.
12. Macario A, Weinger M, Truong P, et al. Which clinical anesthesia outcomes are both common and important to avoid? The perspective of a panel of expert anesthesiologists. Anesth Analg. 1999;88(5):1085-1091. doi: 10.1097/00000539-199905000-00023.
13. Iezzoni LI. Coded data from administrative sources. In: Iezzoni LI, editor. Risk adjustment for measuring health care outcomes. 4th ed. Chicago, IL: Health Administration Press; 2013. pp. 95-146.
14. Dalton JE, Glance LG, Mascha EJ, et al. Impact of present-on-admission indicators on risk-adjusted hospital mortality measurement. Anesthesiology. 2013. doi: 10.1097/ALN.0b013e31828e12b3.
15. Glance LG, Dick AW, Mukamel DB, et al. How well do hospital mortality rates reported in the New York State CABG report card predict subsequent hospital performance? Med Care. 2010;48(5):466-471. doi: 10.1097/MLR.0b013e3181d568f7.
16. Hannan EL, Kilburn H Jr, Racz M, et al. Improving the outcomes of coronary artery bypass surgery in New York State. JAMA. 1994;271(10):761-766.
17. Lawson EH, Louie R, Zingmond DS, et al. A comparison of clinical registry versus administrative claims data for reporting of 30-day surgical complications. Ann Surg. 2012;256(6):973-981. doi: 10.1097/SLA.0b013e31826b4c4f.
18. Ingraham AM, Richards KE, Hall BL, et al. Quality improvement in surgery: the American College of Surgeons National Surgical Quality Improvement Program approach. Adv Surg. 2010;44:251-267.
19. Caceres M, Braud RL, Garrett HE Jr. A short history of the Society of Thoracic Surgeons national cardiac database: perceptions of a practicing surgeon. Ann Thorac Surg. 2010;89(1):332-339. doi: 10.1016/j.athoracsur.2009.09.045.
20. Iezzoni LI. The risks of risk adjustment. JAMA. 1997;278(19):1600-1607. doi: 10.1001/jama.278.19.1600.
21. Saklad M. Grading of patients for surgical procedures. Anesthesiology. 1941;2(3):281-284.
22. Dripps RD, Lamont A, Eckenhoff JE. The role of anesthesia in surgical mortality. JAMA. 1961;178(3):261-266. doi: 10.1001/jama.1961.03040420001001.
23. Lee TH, Marcantonio ER, Mangione CM, et al. Derivation and prospective validation of a simple index for prediction of cardiac risk of major noncardiac surgery. Circulation. 1999;100(10):1043-1049. doi: 10.1161/01.cir.100.10.1043.
24. Shwartz M, Ash AS. Estimating the effect of an intervention from observational data. In: Iezzoni LI, editor. Risk adjustment for measuring health care outcomes. 4th ed. Chicago, IL: Health Administration Press; 2013. pp. 301-334.
25. Breslow MJ, Badawi O. Severity scoring in the critically ill: part 1 - interpretation and accuracy of outcome prediction scoring systems. Chest. 2012;141(1):245-252. doi: 10.1378/chest.11-0330.
26. Turner PL, Ilano AG, Zhu Y, et al. ACS-NSQIP criteria are associated with APACHE severity and outcomes in critically ill surgical patients. J Am Coll Surg. 2011;212(3):287-294. doi: 10.1016/j.jamcollsurg.2010.12.011.
27. Ash AS, Shwartz M, Peköz EA, et al. Comparing outcomes across providers. In: Iezzoni LI, editor. Risk adjustment for measuring health care outcomes. 4th ed. Chicago, IL: Health Administration Press; 2013. pp. 335-378.
28. Dimick JB, Welch HG, Birkmeyer JD. Surgical mortality as an indicator of hospital quality: the problem with small sample size. JAMA. 2004;292(7):847-851. doi: 10.1001/jama.292.7.847.
29. Shwartz M, Ren J, Peköz EA, et al. Estimating a composite measure of hospital quality from the Hospital Compare database: differences when using a Bayesian hierarchical latent variable model versus denominator-based weights. Med Care. 2008;46(8):778-785. doi: 10.1097/MLR.0b013e31817893dc.
30. Silber JH, Rosenbaum PR, Brachet TJ, et al. The Hospital Compare mortality model and the volume-outcome relationship. Health Serv Res. 2010;45(5 Pt 1):1148-1167. doi: 10.1111/j.1475-6773.2010.01130.x.
31. Jencks SF, Williams DK, Kay TL. Assessing hospital-associated deaths from discharge data: the role of length of stay and comorbidities. JAMA. 1988;260(15):2240-2246.
32. Kupfer JM. The morality of using mortality as a financial incentive: unintended consequences and implications for acute hospital care. JAMA. 2013;309(21):2213-2214. doi: 10.1001/jama.2013.5009.
33. Jweinat JJ. Hospital readmissions under the spotlight. J Healthc Manag. 2010;55(4):252-264.
34. Ludke RL, Booth BM, Lewis-Beck JA. Relationship between early readmission and hospital quality of care indicators. Inquiry. 1993;30(1):95-103.
35. Joynt KE, Jha AK. Thirty-day readmissions: truth and consequences. N Engl J Med. 2012;366(15):1366-1369. doi: 10.1056/NEJMp1201598.
36. Organisation for Economic Co-operation and Development. Handbook on constructing composite indicators: methodology and user guide. Paris: OECD Publications; 2008.
37. Couralet M, Guérin S, Le Vaillant M, et al. Constructing a composite quality score for the care of acute myocardial infarction patients at discharge: impact on hospital ranking. Med Care. 2011;49(6):569-576. doi: 10.1097/MLR.0b013e31820fc386.
38. Hannan EL, Kilburn H Jr, O'Donnell JF, et al. Adult open heart surgery in New York State: an analysis of risk factors and hospital mortality rates. JAMA. 1990;264(21):2768-2774.
39. Hannan EL, Siu AL, Kumar D, et al. The decline in coronary artery bypass graft surgery mortality in New York State: the role of surgeon volume. JAMA. 1995;273(3):209-213.
40. Mukamel DB, Mushlin AI. Quality of care information makes a difference: an analysis of market share and price changes after publication of the New York State cardiac surgery mortality reports. Med Care. 1998;36(7):945-954. doi: 10.1097/00005650-199807000-00002.
41. Omoigui NA, Miller DP, Brown KJ, et al. Outmigration for coronary bypass surgery in an era of public dissemination of clinical outcomes. Circulation. 1996;93(1):27-33. doi: 10.1161/01.cir.93.1.27.
42. Jha AK, Epstein AM. The predictive accuracy of the New York State coronary artery bypass surgery report-card system. Health Aff (Millwood). 2006;25(3):844-855. doi: 10.1377/hlthaff.25.3.844.