Abstract
Under the Hospital-Acquired Condition Reduction Program (HACRP), introduced by the Affordable Care Act, the Centers for Medicare & Medicaid Services (CMS) must reduce reimbursement by 1% for hospitals that rank in the lowest performing quartile with regard to hospital-acquired conditions (HACs). This study seeks to determine whether Accredited Cancer Program (ACP) hospitals (as defined by the American College of Surgeons) score differently on the HACRP metrics than nonaccredited cancer program hospitals. The study uses data from the 2014 American Hospital Association Annual Survey database, the 2014 Area Health Resource File, the 2014 Medicare Final Rule Standardizing File, and the FY2017 HACRP database (Medicare Hospital Compare database). The association between ACPs, HACs, and market characteristics is assessed through multinomial logistic regression analysis. Relative risk ratios and 95% confidence intervals are reported. Accredited cancer hospitals have a greater risk of scoring in the Worse outcome category of HAC scores, vs Middle or Better outcomes, compared with nonaccredited cancer hospitals. Despite this, they do not have greater odds of incurring a payment reduction under the HACRP measurement system. Although ACP hospitals can likely improve their scores, questions concerning the consistency of the message between ACP hospital quality and HACRP quality need further evaluation to determine potential gaps or issues in program structure or measurement. ACP hospitals should seek to improve scores on domain 2 measures. Although ACP hospitals likely see more complex patients, additional efforts to reduce surgical site infections and related HACs should be evaluated and incorporated into required quality improvement efforts. From a policy perspective, policy makers should carefully evaluate the measures utilized in the HACRP.
Keywords: Patient Protection and Affordable Care Act, iatrogenic disease, cancer care facilities, American Hospital Association, multinomial logistic regression
What do we already know about this topic?
Both the Hospital-Acquired Condition Reduction Program and the Accredited Cancer Program designation provide an indication of the quality of care that hospitals deliver.
How does your research contribute to the field?
We do not currently know whether the two programs communicate similar messages regarding quality of care; this study examines that question directly.
What are your research’s implications toward theory, practice, or policy?
The findings of this research indicate that a consistent message is not communicated and that there are opportunities for improvement in Accredited Cancer Program response to the Hospital-Acquired Condition Reduction Program, as well as a need to evaluate gaps between the quality indicators.
Introduction
Quality and value continue to drive discussions regarding the US health care system. In previous manifestations of health care policy, cost, quality, and access have all been key drivers of reform to varying degrees. More recently, these discussions have focused on holding health care organizations accountable for the quality of care they provide while also attempting to curb costs that have risen consistently over the past several decades.1 To accomplish this, policy makers and health care providers have sought ways to measure and report quality indicators.2-4
From the policy-maker perspective, pay-for-performance programs have been implemented that direct compensation toward reimbursing hospitals for services that provide value (ie, services assessed on costs as well as patient outcomes, as opposed to traditional fee-for-service models).5 The Centers for Medicare & Medicaid Services' (CMS) Hospital-Acquired Condition Reduction Program (HACRP) is one such program and seeks to reduce reimbursement for organizations with poor patient safety performance. It does this by focusing on the rate at which patients receiving care within a facility acquire a condition that was not present upon admission and that is preventable through evidence-based guidelines. The HACRP does not specify how an organization should attempt to reduce hospital-acquired conditions (HACs), through what structures or processes, but it does require that hospitals control the number and rate of HACs. Furthermore, each organization's HAC rate is publicly available and provides an indication of the hospital's quality.
From the hospital side, organizations have sought external validation of quality and value through certifications and accreditations. One such indicator is the Accredited Cancer Program (ACP) designation. To achieve this designation, hospitals must meet structure and service requirements established by the American College of Surgeons' Commission on Cancer (CoC). However, the awarding of ACP credentials does not depend on the hospital's outcomes. What is not clear is whether these 2 strategies for improving quality and value (reimbursement models and accreditation) provide similar indications of an organization's standing. As such, the focus of this study is to evaluate the relationship between the distinctive structures and services of ACP hospitals and HACRP scores.
Background
Hospital-Acquired Condition Reduction Program
Under the HACRP, introduced by the Affordable Care Act in fiscal year 2015, CMS must reduce reimbursement by 1% for hospitals that rank in the lowest performing quartile with regard to HACs.6 Within the HACRP, HACs are divided into 2 domains: Domain 1 contains the Agency for Healthcare Research and Quality (AHRQ) composite Patient Safety Indicator (PSI) 90 score; domain 2 contains the Centers for Disease Control and Prevention's (CDC) National Healthcare Safety Network (NHSN) measures and is the average of the points accrued for each standardized infection ratio (SIR): central line–associated bloodstream infection (CLABSI), catheter-associated urinary tract infection (CAUTI), surgical site infection (SSI), methicillin-resistant Staphylococcus aureus (MRSA), and Clostridium difficile infection (CDI) (refer to Table 1 for a list of measures and their associated scores). Hospitals score 1 to 10 points on each measure based on their national percentile ranking in that category; a score of 1 indicates good performance (ie, few to no HACs) and a score of 10 indicates poor performance (ie, many more HACs than comparators). Because domain 2 contains more measures, it is weighted as 85% of the total score (see Table 1).7
Table 1. HACRP domains and measures.

| Domain 1: AHRQ PSI 90 measure (scored 1-10) | Domain 2: CDC NHSN measures (average score 1-10) |
|---|---|
| PSI 3 Pressure ulcer rate | CLABSI SIR (scored 1-10) |
| PSI 6 Iatrogenic pneumothorax rate | CAUTI SIR (scored 1-10) |
| PSI 7 Central venous catheter–related bloodstream infection rate | Pooled surgical site infection (SSI) SIR (scored 1-10) |
| PSI 8 Postoperative hip fracture rate | Methicillin-resistant Staphylococcus aureus (MRSA) SIR (scored 1-10) |
| PSI 12 Postoperative pulmonary embolism (PE) or deep vein thrombosis (DVT) rate | Clostridium difficile infection (CDI) SIR (scored 1-10) |
| PSI 13 Postoperative sepsis rate | |
| PSI 14 Wound dehiscence rate | |
| PSI 15 Accidental puncture and laceration rate | |
Note. HACRP = Hospital-Acquired Condition Reduction Program; AHRQ = Agency for Healthcare Research and Quality; PSI = Patient Safety Indicators; CDC = Centers for Disease Control and Prevention; NHSN = National Healthcare Safety Network; CLABSI = central line–associated bloodstream infection; CAUTI = catheter-associated urinary tract infection; SIR = standardized infection ratio.
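The weighting scheme described above can be sketched in code. The following is a minimal illustration of the FY2017-style 15%/85% combination, not CMS's official algorithm; the fallback when a domain is missing is a simplification of CMS's actual reweighting rules:

```python
def total_hac_score(domain1, domain2, domain2_weight=0.85):
    """Combine the two domain scores into a total HAC score.

    Domain 2 (CDC NHSN measures) carries 85% of the weight and
    domain 1 (AHRQ PSI 90) the remaining 15%. If a hospital lacks
    data for one domain, the total falls back to the other domain
    alone (a simplification of CMS's reweighting rules).
    """
    if domain1 is None:
        return domain2
    if domain2 is None:
        return domain1
    return (1 - domain2_weight) * domain1 + domain2_weight * domain2
```

For example, a hospital with the worst possible domain 1 score (10) but a strong domain 2 score (2) still lands near the good end of the 0-10 scale, which illustrates how heavily domain 2 dominates the total.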
Accredited Cancer Programs
The American College of Surgeons established the CoC to create standards against which to compare the performance of cancer programs.8 A hospital seeking accreditation must fulfill structure and service requirements in addition to clinical service requirements.8 According to Bilimoria and colleagues, CoC-accredited programs were more likely to be accredited by other national quality organizations and more likely to provide medical education than non-CoC programs.9 Multiple accreditations from independent sources and the medical education status of ACPs signal to the public the quality of ACP hospitals relative to nonteaching or nonaccredited hospitals.10,11 In addition, due to CoC requirements, ACP hospitals are more likely to use a multidisciplinary approach to patient care, and improved communication and teamwork are known to be associated with improved overall patient outcomes.12,13
Both the HACRP and ACP accreditation are important in assessing and reporting quality of care. While the HACRP acts to indicate and improve quality through HAC scoring and the risk of penalty, the title of ACP demonstrates an achievement in the quality of operations and procedures. As such, one would expect that both would present a consistent message concerning the quality of the organization.
Conceptual Framework
Donabedian’s 3-part assessment of the quality of care serves as a logical approach for hypothesizing how accreditation might influence outcomes. Donabedian’s model consists of structure (characteristics of the setting of care), process (the activities in carrying out care), and outcome (the effect of care on patients).14,15 These components are interrelated, in that good structures lead to good processes, which in turn lead to good outcomes.14
To earn ACP certification, a hospital must fulfill criteria indicating that it has implemented and maintained the required quality components of its structure, which aligns with the first part of Donabedian's model. One of the CoC's 5 structural requirements for ACP hospitals is a cancer committee with programmatic authority. The hospital's multidisciplinary committee is "responsible for goal setting, planning, initiating, implementing, evaluating, and improving all cancer-related activities in the program."8 (p17) By Donabedian's definition, the committee is a component of the hospital's structure because it characterizes the setting; ACP hospitals have cancer committees that establish bylaws for the operations of their cancer programs, whereas non-ACP hospitals likely do not have such a committee.
The ACP-aspiring hospital next must demonstrate criteria that indicate provision of the services necessary for quality cancer care. As an example, the CoC elaborates on the systemic therapy services requirement: "A standardized approach to the administration of systemic therapy creates opportunities to monitor, evaluate, and improve the safety of the administration process. [Policies and procedures] are in place to guide the safe handling and administration of systemic therapy."8 (p23) The services component of the CoC's assessment aligns with Donabedian's definition of process: the activities in carrying out care.
Applying the structure-process-outcome relationship to our example, a hospital’s cancer committee would influence its systemic therapy services. The cancer committee would decide whether the hospital would offer services on-site, through contract, or through physician clinics, as well as policies for safe administration of treatment. The structure that the committee establishes dictates the activities that will follow in the process of care. In addition, how well the committee delineates its policies influences how well processes will be carried out. Poorly planned, implemented, or evaluated committee policies could result in noncompliance with regulations for the safe handling of systemic therapy; good (or bad) structure leads to good (or bad) process.
Demonstration of suitable structure and process is necessary to earn ACP status. The final, and arguably the most critical, component of Donabedian’s model can be evaluated based on HACs in that an adverse outcome is the result of problems with structure and/or process. Applying the third component to our example, a failure of the cancer committee to recommend appropriate systemic therapy can result in negative patient outcomes. Inadequately trained nursing staff administering the systemic therapy may lead to otherwise preventable infections.
Hospitals within the HACRP aim to lower their HACs and thus reduce negative patient outcomes. As ACP hospitals have already demonstrated satisfactory structures and processes, Donabedian's model anticipates that ACP hospitals will also have better outcomes than non-ACP hospitals and thus score better on HACRP metrics.
We can further develop this reasoning into the following hypotheses:
Hypothesis 1 (H1): ACP hospitals will have lower domain 1 scores than non-ACP hospitals.
Hypothesis 2 (H2): ACP hospitals will have lower domain 2 scores than non-ACP hospitals.
Hypothesis 3 (H3): ACP hospitals will have better overall HACRP total scores than non-ACP hospitals.
Methods
This study uses data from the 2014-2015 American Hospital Association (AHA) Annual Survey database, the 2014 Area Health Resource File (AHRF), and the FY2017 HACRP database (Medicare Hospital Compare database). The AHA database contains annual survey data collected from more than 6000 US hospitals and focuses on hospital characteristics, services, and functions.16 The AHRF database provides health resource data and socioeconomic indicators at the county level.17 Finally, the HACRP database contains the overall scores and individual weighted scores for hospitals participating in the program on the following measures: AHRQ PSI 90 composites for Medicare fee-for-service claims for discharges between July 1, 2013, and June 30, 2015, and the infection rates reported to the CDC for infections occurring from January 1, 2014, to December 31, 2015.18
Dependent Variables
This study reviews 3 dependent variables that constitute the HACRP: the total HAC score, the domain 1 score, and the domain 2 score (refer to Table 1 for a list of measures and their associated scores). Hospitals score 1 to 10 points on each measure.7 For this analysis, the 2 domain scores and the total score are each divided into thirds: the Better outcome group contains scores from 0 to 3.33, the Middle outcome group contains scores from 3.34 to 6.66, and the Worse outcome group contains scores from 6.67 to 10.
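The banding of scores into thirds can be expressed as a small helper function. This is an illustrative sketch using the cut points stated above, not code from the study itself:

```python
def hac_band(score):
    """Assign a 0-10 HAC score to one of the study's three outcome bands."""
    if not 0 <= score <= 10:
        raise ValueError("HAC scores range from 0 to 10")
    if score <= 3.33:
        return "Better"  # scores 0-3.33
    if score <= 6.66:
        return "Middle"  # scores 3.34-6.66
    return "Worse"       # scores 6.67-10
```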
Independent Variables
The main independent variable in this study focuses on the organization’s status as an ACP as defined by the American College of Surgeons.8 Accredited Cancer Program is operationalized as a binary variable, where 0 indicates no accreditation and 1 indicates accreditation.
Control Variables
To control for bias due to differing organizational structures and characteristics, we use the following variables: organizational size, system membership, ownership, rurality, teaching status, case mix index, the percentage of each hospital's patient population covered by Medicare and Medicaid, region, the Herfindahl-Hirschman Index (HHI), the percentage of the Health Service Area (HSA) population aged 65 years or older, the HSA percentage without insurance, and the HSA percentage in poverty. Organizational size is reported as a categorical variable (fewer than 100, 100-199, and 200 or more staffed beds) and provides an indication of hospital quality and resources.19 Ownership is reported as for-profit, government (nonfederal), and not-for-profit and provides an indication of financial and quality performance.20 Rurality, system status, and teaching status are all reported as binary variables, where 0 indicates urban, not part of a system, or nonteaching, respectively. System status indicates whether the organization is part of a larger system and provides an indication of the resources available to the organization.21 Teaching hospitals are reported to have higher safety scores than nonteaching hospitals.22 Rurality captures location, and previous studies have demonstrated quality differences between rural and urban locations.23,24 Case mix index, reported as a continuous variable, provides an indication of the disease severity of a hospital's patients. The percentages of Medicare and Medicaid patients are reported as continuous variables and provide an indication of the financial health of the organization.25,26 Region (West, Midwest, South, and Northeast), defined by state location within the United States, further describes hospital location and is a typical control variable in quality studies.27-29
In addition, market characteristics are important to evaluate within this context to better define the population and market pressures placed upon individual organizations. The following variables are therefore evaluated at the HSA level.30 Health Service Areas consist of single or multiple counties identified as hospital service areas. The HHI measures competition within the HSA market; it matters in this study because public quality reporting creates the opportunity for patients to seek care from other facilities that may offer better care, where such facilities are available.31,32 The HHI is a continuous variable ranging from 0 (pure competition) to 1 (pure monopoly). The HSA percentage without insurance, percentage in poverty, and percentage of the population aged 65 years or older are measured as continuous variables and provide an indication of the health of the population and the availability of resources for the hospitals in the analysis.33
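As a concrete illustration, the HHI for an HSA is the sum of squared market shares of the facilities in that market. The sketch below derives shares from a per-facility volume measure (eg, staffed beds); this choice of volume measure is illustrative, and the study's data source may define shares differently:

```python
def hhi(volumes):
    """Herfindahl-Hirschman Index on a 0-1 scale.

    Computes each facility's market share from its volume (eg, staffed
    beds or discharges) and sums the squared shares. A single facility
    yields 1.0 (pure monopoly); many equal-sized facilities approach 0
    (pure competition).
    """
    total = sum(volumes)
    if total <= 0:
        raise ValueError("volumes must sum to a positive number")
    return sum((v / total) ** 2 for v in volumes)
```

For example, one hospital in an HSA gives an HHI of 1.0, two equal-sized hospitals give 0.5, and four equal-sized hospitals give 0.25.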
Analysis
The associations between ACPs and scores on domain 1 (H1), domain 2 (H2), and the total HAC score (H3) are assessed through multinomial logistic regression analysis.34 Pairwise deletion was used for missing data, and the data set was reviewed for extreme values that might bias the analysis. Stata 14 was used to run all analyses, and models were estimated through maximum likelihood. Relative risk ratios, standard errors, and 95% confidence intervals (CIs) are reported.
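For readers unfamiliar with multinomial-logit output: each reported relative risk ratio (RRR) is the exponentiated model coefficient for that predictor, and an approximate 95% CI comes from exponentiating the coefficient's interval. The sketch below assumes coefficient-scale standard errors; note that the standard errors Stata displays alongside RRRs are instead delta-method standard errors on the ratio scale, so this sketch will not reproduce Table 3's SE column exactly:

```python
import math

def rrr_with_ci(coef, se, z=1.96):
    """Convert a multinomial-logit coefficient and its (coefficient-scale)
    standard error into a relative risk ratio with an approximate 95% CI.

    Returns (rrr, lower, upper).
    """
    return (math.exp(coef),
            math.exp(coef - z * se),
            math.exp(coef + z * se))
```

A coefficient of 0 maps to an RRR of 1 (no association), and the CI excludes 1 exactly when the coefficient's CI excludes 0.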
Results
As a result of merging multiple data sets and the limits associated with hospital participation in the HACRP, the data set covers 968 ACP hospitals and 1284 non-ACP hospitals across the United States. Descriptive statistics comparing ACP and non-ACP programs appear in Table 2.
Table 2. Descriptive statistics for ACP and non-ACP hospitals.

| Continuous variables | ACP: M | SD | n | Non-ACP: M | SD | n |
|---|---|---|---|---|---|---|
| HAC total score | 5.00 | 2.02 | 968 | 5.76 | 1.70 | 1284 |
| HAC domain 1 score | 5.51 | 3.19 | 945 | 5.30 | 2.62 | 1271 |
| HAC domain 2 score | 4.49 | 2.34 | 967 | 5.80 | 1.51 | 1227 |
| Case mix index | 1.60 | 0.23 | 968 | 1.42 | 0.39 | 1284 |
| Hospital Medicare percentage | 0.51 | 0.12 | 968 | 0.51 | 0.16 | 1284 |
| Hospital Medicaid percentage | 0.20 | 0.10 | 968 | 0.18 | 0.14 | 1284 |
| Herfindahl-Hirschman Index (HHI) | 0.48 | 0.39 | 968 | 0.63 | 0.41 | 1284 |
| Health Service Area aging population (%) | 11.82 | 5.53 | 968 | 13.10 | 6.00 | 1284 |
| Health Service Area without insurance (%) | 12.20 | 5.77 | 968 | 10.10 | 5.07 | 1284 |
| Health Service Area in poverty (%) | 0.12 | 0.06 | 968 | 0.14 | 0.07 | 1284 |

| Categorical variables | ACP: % (n = 968) | n | Non-ACP: % (n = 1284) | n |
|---|---|---|---|---|
| Size | | | | |
| Small | 8 | 82 | 54 | 696 |
| Medium | 27 | 262 | 28 | 356 |
| Large | 64 | 624 | 18 | 232 |
| System hospital | | | | |
| No | 21 | 202 | 35 | 448 |
| Yes | 79 | 766 | 65 | 836 |
| Ownership | | | | |
| Government (nonfederal) | 10 | 99 | 17 | 218 |
| For-profit | 13 | 123 | 34 | 439 |
| Not-for-profit | 77 | 746 | 49 | 627 |
| Rurality | | | | |
| Urban | 89 | 861 | 62 | 800 |
| Rural | 11 | 107 | 38 | 484 |
| Teaching status | | | | |
| Nonteaching | 31 | 299 | 69 | 880 |
| Teaching hospital | 69 | 669 | 31 | 404 |
| Region | | | | |
| Northeast | 20 | 198 | 14 | 178 |
| Midwest | 32 | 312 | 24 | 303 |
| South | 40 | 389 | 54 | 699 |
| West | 7 | 69 | 8 | 104 |
Note. HAC = hospital-acquired condition.
Multinomial logistic regression was used to calculate relative risk ratios for ACPs vs non-ACPs on the total HAC, domain 1, and domain 2 scores. The HAC scores were split into thirds: Better outcomes (0-3.33), Middle outcomes (3.34-6.66), and Worse outcomes (6.67-10). Table 3 reports the regression models comparing Better with Middle and Worse HAC scores.
Table 3. Multinomial logistic regression of Middle and Worse HAC scores vs Better HAC scores (reference).

| Variable | HAC total score (overall; n = 2252): RRR (SE) | 95% CI | HAC domain 1 (AHRQ PSI 90 measure; n = 2216): RRR (SE) | 95% CI | HAC domain 2 (CDC NHSN measures; n = 2194): RRR (SE) | 95% CI |
|---|---|---|---|---|---|---|
| HAC score of 0-3.33 (Better) | Reference | | Reference | | Reference | |
| HAC score of 3.34-6.66 (Middle) | | | | | | |
| Accredited Cancer Program (no is reference): Yes | 2.85 (0.55) | 1.95-4.18 | 0.62 (0.09) | 0.47-0.82 | 2.43 (0.44) | 1.71-3.46 |
| Organizational size (small is reference): Medium | 2.64 (0.43) | 1.92-3.63 | 0.59 (0.09) | 0.44-0.80 | 2.71 (0.42) | 2.00-3.68 |
| Organizational size: Large | 6.14 (1.52) | 3.78-9.98 | 0.65 (0.12) | 0.45-0.94 | 7.83 (1.89) | 4.87-12.57 |
| System member (no is reference): Yes | 1.57 (0.22) | 1.19-2.08 | 0.78 (0.11) | 0.60-1.01 | 1.64 (0.24) | 1.24-2.18 |
| Ownership (government nonfederal is reference): For-profit | 0.77 (0.17) | 0.50-1.19 | 0.80 (0.17) | 0.53-1.21 | 0.74 (0.16) | 0.48-1.14 |
| Ownership: Not-for-profit | 1.01 (0.20) | 0.69-1.49 | 0.92 (0.18) | 0.63-1.34 | 1.24 (0.24) | 0.84-1.82 |
| Rurality (urban is reference): Rural | 0.92 (0.15) | 0.67-1.26 | 1.03 (0.17) | 0.75-1.41 | 0.76 (0.12) | 0.56-1.04 |
| Teaching status (nonteaching is reference): Teaching hospital | 1.29 (0.20) | 0.95-1.75 | 1.02 (0.14) | 0.78-1.32 | 1.33 (0.20) | 0.99-1.79 |
| Case mix index | 0.94 (0.19) | 0.64-1.39 | 0.38 (0.08) | 0.25-0.57 | 1.02 (0.22) | 0.67-1.56 |
| Medicare rate | 3.89 (1.79) | 1.58-9.58 | 0.96 (0.45) | 0.38-2.42 | 2.73 (1.30) | 1.08-6.93 |
| Medicaid rate | 2.18 (1.45) | 0.59-8.03 | 0.84 (0.55) | 0.23-3.03 | 2.13 (1.42) | 0.57-7.90 |
| Region (Northeast is reference): Midwest | 0.69 (0.16) | 0.43-1.09 | 1.23 (0.23) | 0.85-1.77 | 0.78 (0.17) | 0.50-1.20 |
| Region: South | 0.71 (0.18) | 0.44-1.16 | 1.35 (0.27) | 0.91-1.99 | 0.75 (0.18) | 0.47-1.19 |
| Region: West | 0.98 (0.31) | 0.52-1.84 | 1.76 (0.49) | 1.02-3.05 | 1.07 (0.33) | 0.58-1.96 |
| Herfindahl-Hirschman Index (HHI) | 0.69 (0.16) | 0.44-1.09 | 1.03 (0.20) | 0.70-1.51 | 0.79 (0.18) | 0.50-1.23 |
| Health Service Area aging population (%) | 1.02 (0.02) | 0.98-1.05 | 1.00 (0.01) | 0.98-1.03 | 1.01 (0.02) | 0.98-1.05 |
| Health Service Area without insurance (%) | 1.04 (0.02) | 1.00-1.08 | 0.97 (0.02) | 0.94-1.01 | 1.03 (0.02) | 0.99-1.07 |
| Health Service Area in poverty (%) | 0.20 (0.30) | 0.01-3.91 | 0.53 (0.76) | 0.03-8.79 | 0.60 (0.91) | 0.03-11.78 |
| HAC score of 6.67-10 (Worse) | | | | | | |
| Accredited Cancer Program (no is reference): Yes | 2.97 (0.65) | 1.94-4.56 | 0.81 (0.11) | 0.63-1.04 | 2.69 (0.54) | 1.81-3.98 |
| Organizational size (small is reference): Medium | 1.99 (0.42) | 1.32-3.01 | 0.89 (0.14) | 0.66-1.21 | 2.18 (0.43) | 1.48-3.22 |
| Organizational size: Large | 6.66 (1.86) | 3.85-11.52 | 1.14 (0.20) | 0.80-1.61 | 9.47 (2.56) | 5.57-16.09 |
| System member (no is reference): Yes | 1.15 (0.20) | 0.82-1.61 | 1.02 (0.14) | 0.79-1.32 | 1.31 (0.22) | 0.94-1.83 |
| Ownership (government nonfederal is reference): For-profit | 0.60 (0.16) | 0.35-1.02 | 0.49 (0.10) | 0.33-0.73 | 0.64 (0.17) | 0.38-1.07 |
| Ownership: Not-for-profit | 0.99 (0.24) | 0.62-1.58 | 0.71 (0.13) | 0.49-1.01 | 1.16 (0.27) | 0.74-1.83 |
| Rurality (urban is reference): Rural | 0.80 (0.17) | 0.52-1.22 | 1.10 (0.18) | 0.80-1.51 | 0.80 (0.16) | 0.53-1.19 |
| Teaching status (nonteaching is reference): Teaching hospital | 1.42 (0.26) | 0.99-2.03 | 1.40 (0.18) | 1.10-1.80 | 1.19 (0.21) | 0.84-1.68 |
| Case mix index | 1.25 (0.31) | 0.78-2.02 | 0.75 (0.15) | 0.51-1.11 | 1.16 (0.30) | 0.70-1.92 |
| Medicare rate | 3.68 (2.26) | 1.11-12.25 | 0.70 (0.34) | 0.27-1.81 | 4.01 (2.45) | 1.21-13.28 |
| Medicaid rate | 4.31 (3.50) | 0.88-21.15 | 6.08 (3.74) | 1.82-20.33 | 2.20 (1.78) | 0.45-10.73 |
| Region (Northeast is reference): Midwest | 0.38 (0.10) | 0.23-0.64 | 0.98 (0.16) | 0.71-1.35 | 0.52 (0.13) | 0.32-0.85 |
| Region: South | 0.55 (0.15) | 0.32-0.96 | 0.86 (0.15) | 0.61-1.23 | 0.68 (0.18) | 0.41-1.15 |
| Region: West | 0.81 (0.30) | 0.39-1.65 | 1.70 (0.43) | 1.04-2.78 | 1.03 (0.36) | 0.52-2.05 |
| Herfindahl-Hirschman Index (HHI) | 0.40 (0.11) | 0.23-0.67 | 0.78 (0.14) | 0.55-1.10 | 0.39 (0.10) | 0.24-0.66 |
| Health Service Area aging population (%) | 1.01 (0.02) | 0.97-1.05 | 1.01 (0.01) | 0.98-1.04 | 1.02 (0.02) | 0.98-1.06 |
| Health Service Area without insurance (%) | 1.03 (0.02) | 0.99-1.08 | 0.99 (0.02) | 0.96-1.03 | 1.03 (0.02) | 0.99-1.08 |
| Health Service Area in poverty (%) | 0.44 (0.83) | 0.01-18.07 | 0.23 (0.33) | 0.02-3.57 | 0.68 (1.25) | 0.02-25.28 |
Note. HAC = hospital-acquired condition; AHRQ = Agency for Healthcare Research and Quality; PSI = Patient Safety Indicators; NHSN = National Healthcare Safety Network; CDC = Centers for Disease Control and Prevention.
Overall HAC Scores
With regard to the categorical variables, ACP hospitals have 2.85 times (95% CI = 1.95-4.18) the risk of scoring in the Middle vs Better range and 2.97 times (95% CI = 1.94-4.56) the risk of scoring in the Worse vs Better range of the HAC overall score compared with non-ACP hospitals. Organizational size proved significant for both score ranges: medium-sized hospitals have 2.64 (95% CI = 1.92-3.63) times the risk of scoring in the Middle vs Better range and 1.99 (95% CI = 1.32-3.01) times the risk of scoring in the Worse vs Better range compared with small hospitals, whereas large hospitals have 6.14 (95% CI = 3.78-9.98) times the risk of scoring in the Middle vs Better range and 6.66 (95% CI = 3.85-11.52) times the risk of scoring in the Worse vs Better range compared with small hospitals. System member hospitals have 1.57 (95% CI = 1.19-2.08) times the risk of scoring in the Middle vs Better range compared with nonsystem members. For-profit hospitals have 0.60 (95% CI = 0.35-1.02) times the risk (ie, a reduced risk, although the confidence interval crosses 1) of scoring in the Worse vs Better range compared with government (nonfederal) hospitals.
In addition, for every 1 unit increase in the Medicare rate, the relative risk of scoring in the Middle vs Better outcome range is 3.89 (95% CI = 1.58-9.58) and the relative risk of scoring in the Worse vs Better range is 3.68 (95% CI = 1.11-12.25). Hospitals in the Midwest and hospitals in the South have 0.38 (95% CI = 0.23-0.64) times and 0.55 (95% CI = 0.32-0.96) times the risk, respectively, of scoring in the Worse vs Better outcome range compared with hospitals in the Northeast. Similarly, for every 1 unit increase in the HHI (ie, as markets become more concentrated), the relative risk of scoring in the Worse vs Better outcome range is 0.40 (95% CI = 0.23-0.67).
Domain 1 Scores
The ACP hospitals have 0.62 (95% CI = 0.47-0.82) times the risk of scoring in the Middle vs Better outcome range and 0.81 (95% CI = 0.63-1.04, not statistically significant) times the risk of scoring in the Worse vs Better outcome range of the HAC domain 1 score compared with non-ACP hospitals. Medium hospitals have 0.59 (95% CI = 0.44-0.80) times the risk and large hospitals have 0.65 (95% CI = 0.45-0.94) times the risk of scoring in the Middle vs Better outcome range compared with small hospitals. For-profit hospitals have 0.49 (95% CI = 0.33-0.73) times the risk of scoring in the Worse vs Better outcome range compared with government (nonfederal) hospitals, whereas teaching hospitals have 1.40 (95% CI = 1.10-1.80) times the risk of scoring in the Worse vs Better outcome range compared with nonteaching hospitals.
In addition, for every 1 unit increase in the case mix index, the relative risk of scoring in the Middle vs Better outcome range is 0.38 (95% CI = 0.25-0.57). There is also a 6.08 times increased risk of scoring in the Worse vs Better range (95% CI = 1.82-20.33) for every 1 unit increase in the Medicaid rate. Finally, hospitals located in the West have 1.76 (95% CI = 1.02-3.05) times the risk of scoring in the Middle vs Better outcome range and 1.70 (95% CI = 1.04-2.78) times the risk of scoring in the Worse vs Better outcome range compared with hospitals in the Northeast.
Domain 2 Scores
The ACP hospitals have 2.43 (95% CI = 1.71-3.46) times the risk of scoring in the Middle vs Better outcome range and 2.69 (95% CI = 1.81-3.98) times the risk of scoring in the Worse vs Better outcome range of the HAC domain 2 score compared with non-ACP hospitals. Organizational size proved significant for both score ranges: medium hospitals have 2.71 (95% CI = 2.00-3.68) times the risk of scoring in the Middle vs Better range and 2.18 (95% CI = 1.48-3.22) times the risk of scoring in the Worse vs Better range compared with small hospitals, whereas large hospitals have 7.83 (95% CI = 4.87-12.57) times the risk of scoring in the Middle vs Better range and 9.47 (95% CI = 5.57-16.09) times the risk of scoring in the Worse vs Better range compared with small hospitals. System member hospitals have 1.64 (95% CI = 1.24-2.18) times the risk of scoring in the Middle vs Better range compared with nonsystem members. For-profit hospitals have 0.64 (95% CI = 0.38-1.07, not statistically significant) times the risk of scoring in the Worse vs Better range compared with government (nonfederal) hospitals.
Finally, for every 1 unit increase in the Medicare rate, the relative risk of scoring in the Middle vs Better outcome range is 2.73 (95% CI = 1.08-6.93) and the relative risk of scoring in the Worse vs Better outcome range is 4.01 (95% CI = 1.21-13.28). In addition, hospitals located in the Midwest have 0.52 (95% CI = 0.32-0.85) times the risk of scoring in the Worse vs Better outcome range compared with hospitals in the Northeast. For every 1 unit increase in the HHI, the relative risk of scoring in the Worse vs Better outcome range is 0.39 (95% CI = 0.24-0.66).
Discussion
The aim of the American College of Surgeons' CoC Accreditation of Cancer Programs is to designate organizations that provide high-quality, patient-centered cancer care delivered in a multidisciplinary setting.35 This study finds that this designation does not necessarily track with HACRP performance. The ACP hospitals have a lower risk than non-ACP hospitals of scoring poorly in domain 1 when comparing Better vs Middle outcome scores. However, Better vs Worse scores are not statistically different, thus providing only partial support for hypothesis 1, which predicted that ACP hospitals would have lower domain 1 scores than non-ACP hospitals. Hypothesis 2, which predicted that ACP hospitals would have lower domain 2 scores than non-ACP hospitals, is not supported; the results instead indicate that ACP hospitals are at increased risk of scoring poorly in domain 2. In addition, ACP hospitals have a greater risk of achieving worse scores on the overall HAC measure than non-ACP hospitals; thus hypothesis 3, which predicted that ACP hospitals would have better overall HACRP total scores, is also not supported. Although outcomes such as HACs are not necessarily the main focus of ACP certification, the improved processes and structures necessary to become an ACP center should promote improved outcomes such as fewer HACs. Thus, in this specific example, ACP designation does not necessarily mean better patient outcomes, at least from an HACRP perspective.
There are multiple possible explanations for the relationships found in this study. First, the CoC accreditation standards may not align well with reducing HACs, or accreditation standards may not influence HAC outcomes. Against the first possibility, the CoC provides a list of standards to be met that includes staff credentialing, quality improvement monitoring, adherence to evidence-based guidelines, committee oversight, public reporting, and a multitude of other requirements.8 In addition, CMS focuses on HACs because they are believed to be avoidable complications. As such, implementation of the evidence-based structures and care processes included in CoC ACP certification should correlate with a reduction in the number of HACs. In Alkhenizan and Shaw's study on accreditation and quality, which evaluated acute myocardial infarction, trauma, ambulatory surgical care, infection control, and pain management, the authors find that accreditation considerably improves the process of care by improving the structure and organization of health care facilities.36 Shaw and colleagues likewise find that accreditation and certification are positively correlated with clinical leadership, structures, and processes that support patient safety.37 It therefore seems unlikely that the CoC standards misalign with HAC reduction or that accreditation standards lack influence on outcomes.
The second, more likely explanation concerns how HACs are measured and the risk adjustment applied during HACRP scoring. When reviewing the data presented in this study, it is noteworthy that domains 1 and 2 do not show the same relationship with ACP status. Previous inquiry has indicated that the PSI and HAC measures used in the HACRP have limited validity compared with medical record review.38 In fact, the authors indicate that only one of the measures utilized met the validity threshold for the study. Furthermore, when considering conditions such as HACs, administrative data may be poorly equipped to capture the breadth and depth of the information desired.39,40 In addition, organizations attempting to achieve greater quality and accountability through accreditation may place themselves at greater risk of low performance in programs such as the HACRP because they may be better able to collect or code medical record data into administrative data sets. Finally, differences may also arise from the complexity of the patients cared for in ACP vs non-ACP hospitals and from the need for better risk adjustment of HAC measures.41 These risk adjustment issues may be further compounded by the above-mentioned concerns about the validity of the PSI and HAC measures and by the complications of using administrative data.
Limitations
This research utilizes a cross-sectional design, which limits the ability to understand trends or other nuances in the data. In addition, the data for this study are drawn from a number of data sets, which allows for general assertions concerning the markets and individual characteristics of hospitals across the United States. However, merging multiple data sets reduces the number of organizations retained for analysis and increases the likelihood that missing or incomplete data bias the results. Furthermore, the aggregate nature of the data limits more specific understanding of, and control for, organizational performance on HAC measures. Nevertheless, because HAC scores are currently used as an indication of quality, the methods and rationale for including these indicators are justified.
Practical Implications
Quality and value assessments defined through payment mechanisms as well as accreditation or certification are likely to continue. As such, we need to better understand how these 2 views of quality relate and differ. This study demonstrates that the messages they convey are not consistent, and there are several opportunities for improvement. First, from a practice standpoint, ACP hospitals should seek to improve scores on domain 2 measures. Although ACP hospitals likely do see more complex patients, additional efforts to reduce surgical site infections (SSIs) and related HACs should be evaluated and incorporated into required quality improvement efforts. From a policy perspective, policy makers should carefully evaluate the measures utilized in the HACRP. The implementation of policies designed to influence practice through reimbursement or payment reductions provides an opportunity for quality gains; however, these financial incentives should include adequate risk adjustment as well as measurement that aligns closely with validated standards of care. In this instance, the lack of association between accreditation and scores related to hospital-acquired infections provides an opportunity to better define why such a gap has occurred and to more closely evaluate the mechanisms for defining scores associated with the care provided.
Footnotes
Declaration of Conflicting Interests: The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding: The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This study was supported in part by the Mayo Clinic Robert D. and Patricia E. Kern Center for the Science of Health Care Delivery.
ORCID iD: Aaron Spaulding https://orcid.org/0000-0001-9727-6756
References
- 1. Orszag PR, Emanuel EJ. Health care reform and cost control. N Engl J Med. 2010;363(7):601-603. [DOI] [PubMed] [Google Scholar]
- 2. Halpin LS, Barnett SD, Henry LL, Choi E, Ad N. Public health reporting: the United States perspective. Semin Cardiothorac Vasc Anesth. 2008;12(3):191-202. [DOI] [PubMed] [Google Scholar]
- 3. Institute of Medicine. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academy Press; 2001. [PubMed] [Google Scholar]
- 4. Lindenauer PK, Remus D, Roman S, et al. Public reporting and pay for performance in hospital quality improvement. N Engl J Med. 2007;356(5):486-496. [DOI] [PubMed] [Google Scholar]
- 5. Conrad DA. The theory of value-based payment incentives and their application to health care. Health Serv Res. 2015;50:2057-2089. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 6. Lake Superior Quality Innovation Network. Understanding the Hospital-Acquired Condition Reduction Program, 2017. https://www.lsqin.org/wp-content/uploads/2017/12/HAC-fact-sheet.pdf.
- 7. Centers for Medicare & Medicaid. Key Information: Hospital-Acquired Condition (HAC) Reduction Program. CMS.gov; 2016.
- 8. Commission on Cancer. Cancer Program Standards: Ensuring Patient-Centered Care. Chicago, IL: American College of Surgeons; 2016. [Google Scholar]
- 9. Bilimoria KY, Bentrem DJ, Stewart AK, Winchester DP, Ko CY. Comparison of commission on cancer-approved and -nonapproved hospitals in the United States: implications for studies that use the National Cancer Data Base. J Clin Oncol. 2009;27(25):4177-4181. [DOI] [PubMed] [Google Scholar]
- 10. Zimmerman JE, Shortell SM, Knaus WA, et al. Value and cost of teaching hospitals: a prospective, multicenter, inception cohort study. Crit Care Med. 1993;21(10):1432-1442. [DOI] [PubMed] [Google Scholar]
- 11. Ayanian JZ, Weissman JS. Teaching hospitals and quality of care: a review of the literature. Milbank Q. 2002;80(3):569-593. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 12. Griggs JJ. Role of nonclinical factors in the receipt of high-quality systemic adjuvant breast cancer treatment. J Clin Oncol. 2012;30(2):121-124. [DOI] [PubMed] [Google Scholar]
- 13. Epstein NE. Multidisciplinary in-hospital teams improve patient outcomes: a review. Surg Neurol Int. 2014;5(suppl 7):S295-S303. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 14. Donabedian A. The quality of care: how can it be assessed? JAMA. 1988;260(12):1743-1748. [DOI] [PubMed] [Google Scholar]
- 15. Donabedian A. An Introduction to Quality Assurance in Health Care. New York, NY: Oxford University Press; 2003. [Google Scholar]
- 16. American Hospital Association. About. http://www.ahadataviewer.com/about/. Published 2015. Accessed November 12, 2015.
- 17. Health and Human Services. Area Health Resource Files (AHRF). http://ahrf.hrsa.gov/overview.htm. Published 2014. Accessed November 12, 2014.
- 18. Centers for Medicare & Medicaid. Hospital-Acquired Condition Reduction Program Fiscal Year 2017 Fact Sheet. https://www.cms.gov/medicare/medicare-fee-for-service-payment/acuteinpatientpps/HAC-reduction-program.html. Published 2017. Accessed August 2017.
- 19. Sosunov EA, Egorova NN, Lin H-M, et al. The impact of hospital size on CMS hospital profiling. Med Care. 2016;54(4):373-379. [DOI] [PubMed] [Google Scholar]
- 20. McKay NL, Deily ME. Comparing high- and low-performing hospitals using risk-adjusted excess mortality and cost inefficiency. Health Serv Res. 2005;30(4):347-360. [DOI] [PubMed] [Google Scholar]
- 21. McCue M, Diana ML. Assessing the performance of freestanding hospitals. J Healthc Manag. 2007;52(5):299-307. [PubMed] [Google Scholar]
- 22. Shahian DM, Nordberg P, Meyer GS, et al. Contemporary performance of U.S. teaching and nonteaching hospitals. Acad Med. 2012;87(6):701-708. [DOI] [PubMed] [Google Scholar]
- 23. Goldman LE, Dudley RA. United States rural hospital quality in the Hospital Compare database—accounting for hospital characteristics. Health Policy. 2008;87(1):112-127. [DOI] [PubMed] [Google Scholar]
- 24. Lutfiyya MN, Bhat DK, Gandhi SR, Nguyen C, Weidenbacher-Hoper VL, Lipsky MS. A comparison of quality of care indicators in urban acute care hospitals and rural critical access hospitals in the United States. Int J Qual Health Care. 2007;19(3):141-149. [DOI] [PubMed] [Google Scholar]
- 25. Bazzoli GJ, Chen H-F, Zhao M, Lindrooth RC. Hospital financial condition and the quality of patient care. Health Econ. 2008;17(8):977-995. [DOI] [PubMed] [Google Scholar]
- 26. Bazzoli GJ, Clement JP, Lindrooth RC, et al. Hospital financial condition and operational decisions related to the quality of hospital care. Med Care Res Rev. 2007;64(2):148-168. [DOI] [PubMed] [Google Scholar]
- 27. Jha AK, Orav EJ, Li Z, Epstein AM. The inverse relationship between mortality rates and performance in the hospital quality alliance measures. Health Affairs. 2007;26(4):1104-1110. [DOI] [PubMed] [Google Scholar]
- 28. Kazley AS, Ozcan YA. Do hospitals with electronic medical records (EMRs) provide higher quality care? Med Care Res Rev. 2008;65(4):496-513. [DOI] [PubMed] [Google Scholar]
- 29. Needleman J, Buerhaus P, Mattke S, Stewart M, Zelevinsky K. Nurse-staffing levels and the quality of care in hospitals. N Engl J Med. 2002;346(22):1715-1722. [DOI] [PubMed] [Google Scholar]
- 30. The Dartmouth Institute. Data by Region. http://www.dartmouthatlas.org/data/region/. Published 2016. Accessed August 2017.
- 31. Menachemi N, Shin DY, Ford EW, Yu F. Environmental factors and health information technology management strategy. Health Care Manage Rev. 2011;36(3):275-285. [DOI] [PubMed] [Google Scholar]
- 32. Hsieh H-M, Clement DG, Bazzoli GJ. Impacts of market and organizational characteristics on hospital efficiency and uncompensated care. Health Care Manage Rev. 2010;35(1):77-87. [DOI] [PubMed] [Google Scholar]
- 33. Yeager VA, Menachemi N, Savage GT, Ginter PM, Sen BP, Beitsch LM. Using resource dependency theory to measure the environment in health care organizational studies: a systematic review of the literature. Health Care Manage Rev. 2014;39(1):50-65. [DOI] [PubMed] [Google Scholar]
- 34. Long JS, Freese J. Regression Models for Categorical Dependent Variables Using Stata. College Station, TX: StataCorp LP; 2006. [Google Scholar]
- 35. Commission on Cancer. Value and Benefits of Accreditation. https://www.facs.org/quality-programs/cancer/coc/apply/benefitscoc. Published 2017. Accessed August 2017.
- 36. Alkhenizan A, Shaw C. Impact of accreditation on the quality of healthcare services: a systematic review of the literature. Ann Saudi Med. 2011;31(4):407-416. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 37. Shaw CD, Groene O, Botje D, et al. The effect of certification and accreditation on quality management in 4 clinical services in 73 European hospitals. Int J Qual Health Care. 2014;26(suppl 1):100-107. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 38. Winters BD, Bharmal A, Wilson RF, et al. Validity of the Agency for Healthcare Research and Quality patient safety indicators and the Centers for Medicare and Medicaid hospital-acquired conditions: a systematic review and meta-analysis. Med Care. 2016;54(12):1105-1111. [DOI] [PubMed] [Google Scholar]
- 39. Sarrazin MSV, Rosenthal GE. Finding pure and simple truths with administrative data. JAMA. 2012;307(13):1433-1435. [DOI] [PubMed] [Google Scholar]
- 40. Etzioni DA, Lessow CL, Lucas HD, et al. Infectious surgical complications are not dichotomous: characterizing discordance between administrative data and registry data. Ann Surg. 2018;267:81-87. [DOI] [PubMed] [Google Scholar]
- 41. McGregor JC, Harris AD. The need for advancements in the field of risk adjustment for healthcare-associated infections. Infect Control Hosp Epidemiol. 2014;35(1):8-9. [DOI] [PubMed] [Google Scholar]