Health Services Research. 2011 Dec;46(6 Pt 1):1946–1962. doi: 10.1111/j.1475-6773.2011.01300.x

The Accuracy of Present-on-Admission Reporting in Administrative Data

L Elizabeth Goldman 1, Philip W Chu 2, Dennis Osmond 3, Andrew Bindman 2
PMCID: PMC3393034  PMID: 22092023

Abstract

Objective

To test the accuracy of reporting present-on-admission (POA) and to assess whether POA reporting accuracy differs by hospital characteristics.

Data Sources

We performed an audit of POA reporting of secondary diagnoses in 1,059 medical records from 48 California hospitals.

Study Design

We used patient discharge data (PDD) to select records with secondary diagnoses that are powerful predictors of mortality and could represent either comorbidities or complications among patients who had a primary procedure of percutaneous transluminal coronary angioplasty or a primary diagnosis of acute myocardial infarction, community-acquired pneumonia, or congestive heart failure. We modeled the relationship between the POA reporting accuracy of these secondary diagnoses (over-reporting and under-reporting) and hospital characteristics.

Data Collection

We created a gold standard from blind reabstraction of the medical records and compared the accuracy of the PDD against the gold standard.

Principal Findings

The PDD and gold standard agreed on POA reporting in 74.3 percent of records, with 13.8 percent over-reporting and 11.9 percent under-reporting. For-profit hospitals tended to overcode secondary diagnoses as present on admission (odds ratio [OR] 1.96; 95 percent confidence interval [CI] 1.11, 3.44), whereas teaching hospitals tended to undercode secondary diagnoses as present on admission (OR 2.61; 95 percent CI 1.36, 5.03).

Conclusions

POA reporting of secondary diagnoses is moderately accurate but varies across hospitals. Steps should be taken to improve POA reporting accuracy before POA is used in hospital assessments tied to payments.

Keywords: Present-on-admission, hospitals, accuracy, administrative data, quality measurement


There is widespread interest in public reporting of hospital performance and quality-based incentives as a means to improve hospital care. Increasingly, states and other stakeholders use administrative data generated for billing purposes to measure hospital quality, even though the accuracy of such data has been questioned (Iezzoni et al. 1988, 1992; McCarthy et al. 2000; Romano, Schembri, and Rainwater 2002; Romano et al. 2002; Romano 2003; Fry et al. 2006). One large concern is that clinical assessments derived from administrative data inadequately account for patient health status (Iezzoni et al. 1996). Secondary diagnoses in administrative data are used in risk adjustment to estimate differences in patient health status. However, risk-adjustment models often exclude secondary diagnoses that are important risk factors for poor patient outcomes, because the way the information is coded in administrative data makes it difficult to distinguish whether a recorded secondary diagnosis is a comorbidity or a complication of care; it is appropriate to adjust outcomes for comorbidities, but not for complications of care. It is generally safe to assume that chronic conditions are not the result of hospital care and therefore reflect comorbidities. The situation is often less obvious for recorded acute conditions, which in some circumstances may reflect a comorbidity and in others a complication.

Present-on-admission (POA) reporting is an emerging method for distinguishing, in administrative data, between complications of care that developed during hospitalization and comorbidities that existed prior to it (Stukenborg et al. 2005, 2007; Bindman and Bennett 2006; Iezzoni 2007; Bahl et al. 2008). In 2008, the Centers for Medicare and Medicaid Services (CMS) began requiring hospitals to report POA status for each diagnosis in their administrative data as a means to distinguish hospital-acquired conditions from comorbidities. In this approach, secondary diagnoses that exist prior to admission are considered comorbidities, and CMS instructs hospitals to self-report the POA indicator for these conditions as “yes,” corresponding to their being present at the time of admission. Secondary diagnoses that arise after hospital admission are considered complications, and hospitals are instructed to self-report these conditions as “no,” corresponding to not present on admission. CMS uses additional categories: “Exempt” for conditions that are exempt from reporting, “Unknown” for conditions where it is unknown whether they were present on admission, and “Clinically Undetermined” when there is insufficient evidence to determine clinically whether the condition was present on admission. California did not use the exempt, unknown, and clinically undetermined codes at the time of this study.

While coding conditions with the POA flag has face validity, there are questions about how accurately hospitals self-report POA codes (Hughes et al. 2006; Iezzoni 2007). Hospitals concerned about publicly reported quality assessments based on risk-adjusted models from administrative data could “over-report” diagnoses as present on admission to make their patients appear sicker and thereby improve their publicly reported risk-adjusted mortality rates. Overcoding of diagnoses in for-profit hospitals has previously been recognized in situations where coding practices influence reimbursement (Hsia et al. 1992). Hospitals with fewer administrative resources in their medical record departments, whether due to small size, payer case mix, or lower profit margins, may be less able to train coders in POA reporting and therefore more likely to misreport POA (either over- or under-report). Teaching hospitals that rely predominantly on documentation from rotating physicians-in-training may also be more likely to misreport POA. Prior research has documented that coding practices for billing purposes vary extensively across hospitals and are influenced by hospital characteristics, physician documentation, and responses to payment reform (Goldfarb and Coffey 1992; Hsia et al. 1992; Lorence 2003; Lorence and Ibrahim 2003a,b; Rangachari 2007; Santos et al. 2008; Hennessy et al. 2010). It is unknown whether these influences extend to POA reporting. No recent large studies have assessed the accuracy of the POA indicator against an external standard such as chart review.

Methods

We conducted an audit of POA reporting in the 2005 California patient discharge data (PDD) by comparing its accuracy against a gold standard created from blinded reabstraction of the corresponding medical records. California is one of two states (the other being New York) with over a decade of experience in requiring hospitals to report POA routinely for hospital discharges. We used our reabstraction-based gold standard to identify patterns of POA under-reporting and over-reporting, including systematic tendencies by hospital characteristics.

The California PDD includes patient demographic, diagnostic, procedure, and disposition codes for approximately 3.7 million hospitalizations per year from all nonfederal, nonchildren's California acute care hospitals (N = 355). Using a research file that included hospital and patient identifiers, we selected a probability sample of records for review. For efficiency, we used a complex sampling design that randomly selected hospitals in proportion to the number of eligible patient records. Other hospital characteristics were not included in the sampling design.
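
A minimal sketch of the hospital-selection step, assuming a hypothetical pandas DataFrame `hospitals` with one row per hospital and a column `n_eligible` counting its eligible records; this is an illustrative approximation of probability-proportional-to-size selection, not the authors' actual procedure:

```python
import numpy as np
import pandas as pd

# Hypothetical frame standing in for the 355 eligible California hospitals.
rng = np.random.default_rng(2005)
hospitals = pd.DataFrame({
    "hospital_id": [f"H{i:03d}" for i in range(355)],
    "n_eligible": rng.integers(5, 500, size=355),
})

# Select 48 hospitals with probability proportional to eligible record counts.
probs = (hospitals["n_eligible"] / hospitals["n_eligible"].sum()).to_numpy()
sampled_ids = rng.choice(hospitals["hospital_id"].to_numpy(),
                         size=48, replace=False, p=probs)
```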

Eligible patient records were those with International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9 CM) codes corresponding to the procedure percutaneous transluminal coronary angioplasty or one of three principal diagnoses: acute myocardial infarction (Romano and Luft 1996), community-acquired pneumonia (Center 2006), or congestive heart failure (AHRQ 2006; see Appendix 2a). We selected these four “umbrella conditions” because they are common and associated with relatively high mortality. The California Office of Statewide Health Planning and Development (2008) currently produces public reports of hospital quality for two of these conditions: acute myocardial infarction (Romano and Luft 1996) and community-acquired pneumonia (Center 2004, 2006).

In addition, to be eligible, a record also had to have one of two prespecified acute secondary diagnoses that, depending on the clinical circumstances, could be regarded as either a comorbidity or a hospital complication of the umbrella condition (see Appendix 2b). These diagnoses were selected following a literature review that indicated their importance as predictors of mortality. We prespecified the acute secondary diagnoses for sampling to ensure adequate sample size for individual influential secondary diagnoses, recognizing that POA reporting accuracy could differ by diagnosis. Among cases whose principal “umbrella condition” was acute myocardial infarction, we sampled cases that also had shock or pulmonary edema listed as a secondary diagnosis; statewide, shock occurred in 5.7 percent and pulmonary edema in 8.7 percent of acute myocardial infarction cases. For cases whose umbrella condition was congestive heart failure, we sampled cases with secondary diagnoses of acute myocardial infarction or acute renal failure (Krumholz et al. 1997; Smith et al. 2006); acute myocardial infarction occurred in 2.2 percent and acute renal failure in 8.7 percent of congestive heart failure cases statewide. For community-acquired pneumonia, we sampled cases with secondary diagnoses of septicemia (Iezzoni et al. 1992; Fine et al. 1996) or respiratory failure (Haas et al. 2000); septicemia occurred in 3.8 percent and respiratory failure in 7.7 percent of community-acquired pneumonia cases. For percutaneous transluminal coronary angioplasty cases, we sampled cases with secondary diagnoses of acute myocardial infarction (Moscucci et al. 2001) or acute renal failure (Best et al. 2002); acute myocardial infarction occurred in 2.8 percent and acute renal failure in 3.2 percent of percutaneous transluminal coronary angioplasty cases.

To prevent any single hospital from disproportionately influencing our sample, we capped the number of records for each umbrella condition-secondary diagnosis combination at 10 per hospital. Thus, the maximum number of cases from any hospital was 80, and only one of our 48 sampled hospitals (2.1 percent) reached the 80-record cap.
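
To make the record-selection and capping rules concrete, here is a minimal sketch, assuming hypothetical column names and toy ICD-9-CM code sets (the study's actual code lists are in Appendices 2a and 2b):

```python
import pandas as pd

# Toy code sets; placeholders for the study's Appendix 2 lists.
UMBRELLA_DX = {"AMI": {"410.01", "410.11"}}
SECONDARY_DX = {"AMI": {"shock": {"785.51"}, "pulmonary_edema": {"518.4"}}}

# Toy stand-in for the 2005 PDD research file.
pdd = pd.DataFrame({
    "hospital_id": ["H001", "H001", "H002"],
    "principal_dx": ["410.01", "410.11", "486"],
    "secondary_dx": [["785.51"], ["401.9"], ["785.51"]],
})

def sample_stratum(pdd, umbrella, secondary, cap=10, seed=2005):
    """Keep records whose principal diagnosis matches the umbrella condition
    and that list the prespecified acute secondary diagnosis, then cap each
    hospital at `cap` records per umbrella-secondary combination."""
    codes = SECONDARY_DX[umbrella][secondary]
    mask = (pdd["principal_dx"].isin(UMBRELLA_DX[umbrella])
            & pdd["secondary_dx"].apply(lambda dxs: any(d in codes for d in dxs)))
    return (pdd[mask]
            .sample(frac=1, random_state=seed)  # shuffle before applying the cap
            .groupby("hospital_id")
            .head(cap))

shock_sample = sample_stratum(pdd, "AMI", "shock")
```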

We used two types of abstractors to review the medical records: health information technicians (HIT) and registered nurses (RN). HITs mirror the type of personnel routinely employed in hospital medical records departments to perform administrative coding, including POA. All HITs employed for this study had Registered Health Information Technician certification and had previously worked as inpatient medical coders for at least 5 years. We separately employed RNs with at least 5 years of experience in reabstraction of inpatient medical coding to gauge whether someone with clinical training would make a different determination about whether a secondary diagnosis was present on admission. All abstractors (five HITs and five RNs) participated in a 40-hour training session led by the study team that included instruction in the data collection tool, standardized training examples, and feedback on sample medical record abstractions. Abstractors were instructed to apply the directions on POA reporting provided to hospitals by the California Office of Statewide Health Planning and Development. These instructions state that a diagnosis is to be recorded as present on admission when it is documented by a physician in the admission note. Chronic diagnoses (e.g., diabetes) identified during the hospitalization were considered present on admission. Conditions suspected at the time of admission (e.g., noted in an emergency room or admitting physician note) were also considered present on admission. Abstractors were to record a condition as not present on admission when there was no physician documentation of the clinical condition in the admission or emergency department note and no signs or symptoms of the condition on admission.

Health information technicians blindly reviewed the medical records following standard medical record coding rules. In contrast, the RNs had the unblinded list of diagnostic codes that the hospitals submitted to the California PDD. The RNs first determined whether the codes listed for the principal and all secondary diagnoses were correct and only then blindly determined whether each diagnosis was present on admission. Of the 1,557 records abstracted by the HITs, 9.4 percent (n = 147) were abstracted twice, and of the 1,688 records reviewed by the RNs, 18.6 percent (n = 307) were abstracted twice for quality control. Depending on whether a record was reviewed once or twice by the HITs and by the RNs, a record could be abstracted two to four times.

Using the multiple abstractions for each record, we created gold standards for POA reporting for the two specified acute secondary diagnoses associated with each of the four umbrella conditions. For a case to be considered for the gold standard sample, more than one reviewer had to review it, and the reviewers had to agree with the PDD on the coding of the sampled umbrella condition and secondary diagnosis. For records with two or three abstractions, all abstractions had to agree on the POA reporting of the specified secondary diagnosis for the case to qualify for the gold standard sample. For records with four abstractions, at least three of the four had to agree on the POA reporting for the case to qualify. For the remaining records (with two or more abstractions) lacking consensus as defined above, physicians blindly adjudicated the POA reporting to make the final gold standard determination.
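
A minimal sketch of this consensus rule as described, with `adjudicate` a hypothetical stand-in for the blinded physician review:

```python
def gold_standard_poa(poa_calls, adjudicate):
    """Apply the consensus rule to one record's POA abstractions.

    poa_calls: list of 2-4 qualifying abstractions (True = present on admission).
    adjudicate: callable invoked when the abstractions lack consensus.
    """
    yes, no = poa_calls.count(True), poa_calls.count(False)
    if len(poa_calls) in (2, 3) and (yes == 0 or no == 0):
        return poa_calls[0]   # two or three abstractions: unanimity required
    if len(poa_calls) == 4 and max(yes, no) >= 3:
        return yes > no       # four abstractions: at least three must agree
    return adjudicate()       # no consensus: physician adjudication decides

# Example: three of four abstractions agree, so no adjudication is needed.
assert gold_standard_poa([True, True, False, True], adjudicate=lambda: None) is True
```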

POA can be misreported in two ways. Over-reported secondary diagnoses were those recorded in the PDD as present on admission when the gold standard assessment was not present on admission. Under-reported secondary diagnoses were those recorded in the PDD as not present on admission when the gold standard assessment was present on admission.
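
In code form, the two error types reduce to a simple comparison of the hospital-submitted flag against the gold standard call (a sketch using the same definitions):

```python
def classify_poa(pdd_poa: bool, gold_poa: bool) -> str:
    """Label a record as agreement, over-reporting, or under-reporting."""
    if pdd_poa == gold_poa:
        return "agree"
    return "over-reported" if pdd_poa else "under-reported"
```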

We linked hospital characteristics, including teaching status, ownership, percent profit margin, number of staffed beds, and percent of discharges reimbursed by Medicaid, drawn from the 2005 California Annual Financial Database, to each sampled case in the California PDD (Iezzoni et al. 1988, 1992; Lorence 2003; Lorence and Ibrahim 2003a,b; Preyra 2004; Goldman et al. 2007; Santos et al. 2008). Teaching hospitals were those participating in the Council of Teaching Hospitals and Health Systems. We categorized hospitals as for-profit or not-for-profit (aggregating government and nonprofit hospitals). Percent profit margin, number of staffed beds, and percent of discharges reimbursed by Medicaid were categorized into quartiles based on the distribution of these variables among California hospitals. Sensitivity analyses using tertiles and quintiles of these variables yielded results similar to the quartile-based analyses, so for ease of presentation only the quartile results are shown.
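
A minimal sketch of the quartile categorization, assuming a hypothetical hospital-level DataFrame with synthetic values standing in for the 2005 California Annual Financial Database extract:

```python
import numpy as np
import pandas as pd

# Hypothetical hospital-level data; values are synthetic.
rng = np.random.default_rng(0)
hospitals = pd.DataFrame({
    "profit_margin": rng.normal(0.02, 0.08, size=355),
    "staffed_beds": rng.integers(24, 856, size=355),
    "pct_medicaid": rng.uniform(0.005, 0.63, size=355),
})

for col in ["profit_margin", "staffed_beds", "pct_medicaid"]:
    # Cut points come from the statewide hospital distribution, as in the study.
    hospitals[col + "_q"] = pd.qcut(hospitals[col], q=4,
                                    labels=["q1", "q2", "q3", "q4"])
```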

We examined the accuracy of POA reporting in the California PDD against the gold standard, overall and for each of the eight combinations of umbrella condition and secondary diagnosis. We tested for bias toward over-reporting or under-reporting POA using McNemar's test. In stratified analyses, we examined whether the accuracy of POA reporting varied by hospital characteristics, including teaching status, ownership, number of staffed beds, percent profit margin, and percent of discharges reimbursed by Medicaid (Iezzoni et al. 1988, 1992; Lorence 2003; Lorence and Ibrahim 2003a,b; Preyra 2004; Goldman et al. 2007; Santos et al. 2008), and by whether the umbrella condition was publicly reported in California (acute myocardial infarction and community-acquired pneumonia). We also tested (using a z-test) the hypothesis of differential POA over-reporting for secondary diagnoses whose umbrella condition was publicly reported. To account for clustering of data within hospitals, we used hierarchical logistic regression (SAS PROC GLIMMIX; SAS Institute, Cary, NC, USA) to test which hospital characteristics predicted over-reporting and under-reporting, controlling for patient characteristics, including age, sex, in-hospital death, and the prespecified acute secondary diagnosis. We included in the multivariate analyses only those hospital and patient characteristics that were statistically significant at p < .1 in bivariate analyses.
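
As an illustration of the bias test, a worked sketch using the study's overall counts (146 over-reported, 126 under-reported, 787 in agreement; see Table 1); the split of the concordant records across the diagonal is arbitrary here, since McNemar's test depends only on the discordant cells:

```python
from statsmodels.stats.contingency_tables import mcnemar

# Rows: PDD POA yes/no; columns: gold standard POA yes/no.
# Off-diagonal cells are the over-reported (146) and under-reported (126) records.
table = [[400, 146],
         [126, 387]]
result = mcnemar(table, exact=False, correction=True)
print(result.statistic, result.pvalue)  # ~1.33, p ~ .25: no net bias either way
```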

Results

Our initial sample consisted of 1,694 records across 48 hospitals; the HITs abstracted 1,557 of these records, and the RNs abstracted 1,688 (Appendix 1). Of the abstracted records, we excluded 525 in which the HIT did not code, and 162 in which the RN did not confirm, the sampled umbrella condition or the acute secondary diagnosis. A total of 1,059 records met our gold standard criteria, of which 304 (28.7 percent) required physician adjudication.

Our patient sample tended to be severely ill, with in-hospital mortality of 26.0 percent. Eighty-one percent of the patients were older than 60 years, and most had multiple medical diagnoses (85 percent had ≥2 comorbidities) (Elixhauser et al. 1998). For the primary diagnosis or procedure (i.e., umbrella condition), 298 patients had an acute myocardial infarction, 205 had community-acquired pneumonia, 288 had congestive heart failure, and 268 had a percutaneous transluminal coronary angioplasty (Table 1).

Table 1.

Present-on-Admission (POA) Reporting Accuracy by Patient and Hospital Characteristics: Sample Characteristics

Patient Characteristics Sample Size Over-Reported Under-Reported Agree
Total N = 1,059 146 (13.8%) 126 (11.9%) 787 (74.3%)
Demographics
 Age (years)
  19–49 70 10 (14.3%) 8 (11.4%) 52 (74.3%)
  50–59 134 17 (12.7%) 15 (11.2%) 102 (76.1%)
  60–69 218 36 (16.5%) 28 (12.8%) 154 (70.6%)
  70–79 293 37 (12.6%) 31 (10.6%) 225 (76.8%)
  80–89 274 33 (12.0%) 36 (13.1%) 205 (74.8%)
  90+ 70 13 (18.6%) 8 (11.4%) 49 (70.0%)
 Female 602 85 (14.1%) 66 (11.0%) 451 (74.9%)
 Male 457 61 (13.3%) 60 (13.1%) 336 (73.5%)
 Alive at discharge 784 113 (14.4%) 80 (10.2%) 591 (75.4%)
 Died in hospital 275 33 (12.0%) 46 (16.7%) 196 (71.3%)
Clinical condition
 Acute myocardial infarction
  Shock 157 16 (10.2%) 21 (13.4%) 120 (76.4%)
  Pulmonary edema 141 8 (5.7%) 18 (12.8%) 115 (81.6%)
 Community-acquired pneumonia
  Septicemia 88 30 (34.1%) 5 (5.7%) 53 (60.2%)
  Respiratory failure 117 11 (9.4%) 14 (12.0%) 92 (78.6%)
 Congestive heart failure
  Acute myocardial infarction 137 10 (7.3%) 26 (19.0%) 101 (73.7%)
  Acute renal failure 151 14 (9.3%) 18 (11.9%) 119 (78.8%)
 PTCA
  Acute myocardial infarction 136 27 (19.9%) 9 (6.6%) 100 (73.5%)
  Acute renal failure 132 30 (22.7%) 15 (11.4%) 87 (65.9%)
Hospital characteristics
Nonteaching 875 124 (14.2%) 92 (10.5%) 659 (75.3%)
Teaching 184 22 (12.0%) 34 (18.5%) 128 (69.6%)
Not for-profit 836 104 (12.4%) 107 (12.8%) 625 (74.8%)
For-profit 223 42 (18.8%) 19 (8.5%) 162 (72.6%)
Profit margin
 Lowest 73 8 (11.0%) 4 (5.5%) 61 (83.6%)
 Second 203 30 (14.8%) 15 (7.4%) 158 (77.8%)
 Third 373 53 (14.2%) 53 (14.2%) 267 (71.6%)
 Highest 410 55 (13.4%) 54 (13.2%) 301 (73.4%)
No. of staffed beds
 Fewest 19 2 (10.5%) 2 (10.5%) 15 (78.9%)
 Second 83 15 (18.1%) 3 (3.6%) 65 (78.3%)
 Third 326 33 (10.1%) 38 (11.7%) 255 (78.2%)
 Most 631 96 (15.2%) 83 (13.2%) 452 (71.6%)
% Medicaid
 Lowest 281 36 (12.8%) 39 (13.9%) 206 (73.3%)
 Second 432 56 (13.0%) 51 (11.8%) 325 (75.2%)
 Third 246 39 (15.9%) 27 (11.0%) 180 (73.2%)
 Highest 100 15 (15.0%) 9 (9.0%) 76 (76.0%)

Notes. Profit margin quartiles: lowest = <−0.076, second = −0.076 to 0.0006, third = 0.0006 to 0.072, highest = >0.072.

Number of staffed beds quartiles: fewest = <60 beds, second = 60–136 beds, third = 137–241 beds, most = >240 beds.

Medicaid quartiles: lowest = <6.9% of discharges reimbursed by Medicaid, second = 6.9%–17.2%, third = 17.2%–34.4%, highest = >34.4% of the discharges reimbursed by Medicaid.

PTCA, percutaneous transluminal coronary angioplasty.

The 48 sampled hospitals included 9 (18.8 percent) for-profit and 39 (81.3 percent) not-for-profit institutions. Eight of the hospitals (16.7 percent) were teaching facilities and 40 (83.3 percent) were not. The average number of staffed beds was 284, ranging from 24 to 855. On average, 19 percent of hospital admissions were reimbursed by Medicaid (range 0.5–63 percent), and the average profit margin was 2 percent (range −22 to 18 percent).

Overall, we found 74.3 percent agreement in POA reporting of secondary diagnoses between the gold standard and the PDD, without any tendency to over- or under-report POA (McNemar's test, p = .25). Reporting accuracy ranged from only 60.2 percent for septicemia in the setting of community-acquired pneumonia to 81.6 percent for pulmonary edema in the setting of acute myocardial infarction. There were no substantial differences in over- or under-reporting of POA for secondary diagnoses whose umbrella conditions were publicly reported in risk-adjusted mortality reports (78.9 percent agreement for patients with acute myocardial infarction and 70.7 percent in community-acquired pneumonia) compared to those umbrella conditions without public reports (76.4 percent in congestive heart failure and 69.8 percent in percutaneous transluminal coronary angioplasty) (p = .42).

Present-on-admission accuracy was highly variable across hospitals. The percent agreement for the eight secondary conditions ranged from 1 percent to 100 percent, in part due to the small number of cases at some hospitals. Certain hospital characteristics predicted POA reporting accuracy (Table 2). Adjusted for patient-level characteristics, for-profit hospitals were more likely to over-report secondary diagnoses as being present on admission (odds ratio [OR] 1.96; 95 percent confidence interval [CI] 1.11, 3.44). In contrast, POA was more likely to be under-reported at teaching hospitals (OR 2.61; 95 percent CI 1.36, 5.03). Neither percent profit margin, number of staffed beds, nor percent of discharges reimbursed by Medicaid was independently associated with POA reporting accuracy (p > .05).

Table 2.

Present-on-Admission Reporting Accuracy by Hospital Characteristics, Univariate and Multivariate Models

Over-Reporting, OR (95% CI) Under-Reporting, OR (95% CI) Agreement, OR (95% CI)
Characteristics Univariate Multivariate Univariate Multivariate Univariate Multivariate
Teaching 0.82 (0.51, 1.34) 0.83 (0.42, 1.63) 1.93 (1.25, 2.97) 2.61 (1.36, 5.03) 0.75 (0.53, 1.06) 0.65 (0.41, 1.03)
For-profit 1.63 (1.10, 2.42) 1.96 (1.11, 3.44) 0.64 (0.38, 1.06) 1.0 (0.51, 2.00) 0.9 (0.64, 1.25) 1.01 (0.51, 2.00)
Profit margin
 Lowest (ref)
 Second 1.41 (0.61, 3.23) 2.27 (0.80, 6.43) 1.38 (0.44, 4.29) 0.94 (0.25, 3.58) 0.69 (0.34, 1.39) 0.62 (0.28, 1.40)
 Third 1.35 (0.61, 2.96) 1.99 (0.74, 5.32) 2.86 (1.00, 8.15) 2.38 (0.68, 8.32) 0.50 (0.26, 0.96) 0.41 (0.19, 0.89)
 Highest 1.26 (0.57, 2.77) 2.05 (0.77, 5.48) 2.62 (0.92, 7.46) 2.35 (0.69, 8.02) 0.54 (0.28, 1.05) 0.41 (0.19, 0.87)
No. of staffed beds
 Fewest (ref)
 Second 1.88 (0.39, 9.00) 0.32 (0.05, 2.06) 0.96 (0.28, 3.26)
 Third 0.96 (0.21, 4.33) 1.12 (0.25, 5.04) 0.96 (0.31, 2.98)
 Most 1.53 (0.35, 6.71) 1.29 (0.29, 5.67) 0.67 (0.22, 2.06)
% Medicaid
 Lowest (ref)
 Second 1.01 (0.65, 1.59) 0.83 (0.53, 1.30) 1.11 (0.79, 1.56)
 Third 1.28 (0.79, 2.09) 0.77 (0.45, 1.29) 0.99 (0.68, 1.46)
 Highest 1.2 (0.63, 2.31) 0.61 (0.29, 1.32) 1.15 (0.68, 1.96)

Notes. Profit margin quartiles: lowest = <−0.076, second = −0.076 to 0.0006, third = 0.0006 to 0.072, highest = >0.072.

Number of staffed beds quartiles: fewest = <60 beds, second = 60–136 beds, third = 137–241 beds, most = >240 beds.

Medicaid quartiles: lowest = <6.9% of discharges reimbursed by Medicaid, second = 6.9%–17.2%, third = 17.2%–34.4%, highest = >34.4% of the discharges reimbursed by Medicaid.

Discussion

Our study is the largest to date to evaluate the accuracy of POA reporting for acute medical conditions that could be either comorbidities or complications (Pine et al. 2009). Consistent with a smaller study of POA reporting among California community-acquired pneumonia cases (Haas et al. 2000) and a three-hospital study in Canada (Quan, Parsons, and Ghali 2004), we found variability in the accuracy of POA reporting across secondary diagnoses. In general, we did not find a tendency toward under- or over-reporting of POA. However, for-profit hospitals were more likely to over-report secondary diagnoses as being present on admission when they were not, whereas teaching hospitals were more likely to under-report secondary diagnoses as not present on admission when they were.

Our findings are consistent with previous studies that found a greater number of billed diagnoses in for-profit hospitals (Steinbusch et al. 2007). This may suggest a general tendency of hospital medical record coders at these institutions to code medical records aggressively, to the point that they are overcoded. However, unlike with billing codes, it is less clear why for-profit hospitals would have a motivation to overcode POA. In 2005, the year of our sampled data, hospitals were not subject to any direct financial penalties related to POA reporting. At that time, the state publicly reported risk-adjusted outcomes for acute myocardial infarction and community-acquired pneumonia, but POA reporting was part of California's risk-adjustment model only for community-acquired pneumonia (Romano and Luft 1996; Haas et al. 2000). This suggests that over-reporting may reflect a general approach to coding at these hospitals that is not always tied to financial gain. Our finding that teaching hospitals under-report POA is more likely related to differences in physician documentation than to differences in HIT reporting. At teaching hospitals, physician trainees are responsible for the majority of documentation; they commonly receive relatively little training in billing and have minimal personal incentive to document optimally from a billing perspective (Mookherjee et al. 2010; Stephens and Williams 2010).

While it was not the case during the period of our study, POA reporting in administrative data is now being applied in hospital payment decisions. In 2008, CMS implemented a Medicare policy of not reimbursing hospitals for certain hospital-acquired conditions that are identified in part using POA reporting. Several private insurers are adopting the same policy (Becker 2008; Miller 2010), and, as part of the Patient Protection and Affordable Care Act, CMS will also apply the policy to Medicaid hospitalizations (Patient Protection and Affordable Care Act 2009). While the fiscal impact of CMS's policy on Medicare reimbursement has not been as dramatic as anticipated (McNair, Luft, and Bindman 2009; Meddings, Saint, and McMahon Jr. 2010), the potential fiscal consequences of POA reporting accuracy increase substantially as more payers adopt performance-based reimbursement. The ability to improve hospital quality and accountability in the era of health care reform hinges on accurate hospital assessments. Our study suggests that the accuracy of POA reporting varies by hospital characteristics; in a setting of increasing accountability, teaching hospitals could mistakenly be assessed as performing worse than they actually do and could suffer negative financial consequences as a result, whereas for-profit hospitals would appear to perform better than they actually did. The linking of Medicare's payment policy to POA reporting may itself influence the accuracy of this variable over time.

Our study focused on several common, important clinical conditions used in public reports of hospital assessments and developed a gold standard, based on multiple abstractors with physician adjudication of disagreements, against which to compare hospitals' POA reporting. We purposefully sampled conditions that are potentially high impact and potentially difficult to classify as present on admission or not. The generalizability of our findings beyond the selected clinical conditions may therefore be limited. Our probability sample was weighted toward larger hospitals, urban institutions, and severely ill patients, and the accuracy of POA reporting may differ among patients who are less sick and cared for in smaller, rural hospitals. Our results address POA reporting in the setting of agreement on the underlying umbrella and secondary diagnoses. POA reporting accuracy is, however, only one aspect of the overall reporting accuracy for diagnoses in administrative data; overall accuracy would be even worse if inaccuracies in coding the umbrella conditions and acute secondary risk factors were taken into account along with POA.

Our finding that POA reporting is moderately accurate suggests that it could be improved to become more useful as a tool to discriminate between comorbidities and complications. Since our study was completed, efforts to address POA reporting errors have been introduced that could change POA reporting accuracy and therefore the utility of this data element. The National Center for Health Statistics has published official guidelines for coders on how to report POA status that were not available in 2005. In addition, the response categories for this variable have been changed to give coders more latitude in reporting POA status. Finally, a variety of edits have been developed to flag suspicious POA data (Hughes et al. 2006; Pine et al. 2009), which may substantially improve the accuracy of POA data actually used for risk adjustment. Beyond the changes recommended by the National Center for Health Statistics, some institutions are having HITs confirm POA reporting with the physicians caring for the patients as a strategy to improve the accuracy of this variable (Garrett 2009); how this practice affects POA accuracy has not been studied. Another potentially useful strategy would be to add other data elements to administrative data that make it easier to determine whether an acute condition was present on admission. In 2011, California will add select laboratory values and vital signs to its administrative data (Bindman and Luft 2006). This expansion may provide an opportunity to assess whether POA reporting accuracy can be improved with confirmatory clinical information and, in combination with these clinical variables, afford a more robust distinction between comorbidities and complications. Ultimately, we need improved methods for making unbiased assessments of patient health status that are not susceptible to gaming. POA reporting is a promising approach, but it needs further refinement if it is to serve as the basis for allocating payments.

Acknowledgments

Joint Acknowledgment/Disclosure Statement: This research was supported by an Agency for Healthcare Research and Quality K08 Mentored Clinical Scientist Development Award (grant 1 K08 HS018090-01) and an NIH/NCRR/OD UCSF-CTSI grant (KL2 RR024130). We would like to acknowledge Dr. Joseph Parker and the Office of Statewide Health Planning and Development for their methodological input and facilitation of data acquisition for this study. We also thank Huong Tran for providing technical and administrative support in preparing the revised manuscript.

Disclosures: None.

Disclaimers: None.

SUPPORTING INFORMATION

Additional supporting information may be found in the online version of this article:

Appendix SA1: Author Matrix.

hesr0046-1946-SD1.doc (80.5KB, doc)

Appendix 1: Sampling Schema.

hesr0046-1946-SD2.doc (85.5KB, doc)

Appendix 2a: Umbrella Condition Included in the Analysis.

hesr0046-1946-SD3.doc (43.5KB, doc)

Appendix 2b: Selected Risk Factors Included in the Analysis.

hesr0046-1946-SD4.doc (45.5KB, doc)

Please note: Wiley-Blackwell is not responsible for the content or functionality of any supporting materials supplied by the authors. Any queries (other than missing material) should be directed to the corresponding author for the article.

References

1. Agency for Healthcare Research and Quality. 2006. “Inpatient Quality Indicators Overview. AHRQ Quality Indicators” [accessed on March 4, 2006]. Available at http://www.qualityindicators.ahrq.gov/modules/iqi_overview.aspx
2. Bahl V, Thompson MA, Kau TY, Hu HM, Campbell DA Jr. “Do the AHRQ Patient Safety Indicators Flag Conditions That Are Present at the Time of Hospital Admission?” Medical Care. 2008;46(5):516–22. doi:10.1097/MLR.0b013e31815f537f
3. Becker C. “WellPoint Joins ‘Never’ Crusade. Nation's Biggest Insurer Won't Pay for Some Preventable Errors Starting This Year.” Modern Healthcare. 2008;38(14):12.
4. Best PJ, Lennon R, Ting HH, Bell MR, Rihal CS, Holmes DR, Berger PB. “The Impact of Renal Insufficiency on Clinical Outcomes in Patients Undergoing Percutaneous Coronary Interventions.” Journal of the American College of Cardiology. 2002;39(7):1113–9. doi:10.1016/s0735-1097(02)01745-x
5. Bindman AB, Bennett A. “Date Stamping: Will It Withstand the Test of Time?” Health Services Research. 2006;41(4 Pt 1):1438–43. doi:10.1111/j.1475-6773.2006.00555.x
6. Bindman AB, Luft HS. Expanding Patient-Level Administrative Data for Quality Assessment. Sacramento, CA: California Office of Statewide Health Planning and Development; 2006.
7. Center HO. Report on Hospital Outcomes for Community-Acquired Pneumonia in California, 1999–2001. Sacramento, CA: Healthcare Quality and Analysis Division, California Office of Statewide Health Planning and Development; 2004.
8. Center HO. Community-Acquired Pneumonia: Hospital Outcomes in California, 2002–2004. Sacramento, CA: Healthcare Information Division, California Office of Statewide Health Planning and Development; 2006.
9. Elixhauser A, Steiner C, Harris DR, Coffey RM. “Comorbidity Measures for Use with Administrative Data.” Medical Care. 1998;36(1):8–27. doi:10.1097/00005650-199801000-00004
10. Fine MJ, Smith MA, Carson CA, Mutha SS, Sankey SS, Weissfeld LA, Kapoor WN. “Prognosis and Outcomes of Patients with Community-Acquired Pneumonia. A Meta-Analysis.” Journal of the American Medical Association. 1996;275(2):134–41.
11. Fry DE, Pine MB, Jordan HS, Hoaglin DC, Jones B, Meimban R. “The Hazards of Using Administrative Data to Measure Surgical Quality.” American Surgeon. 2006;72(11):1031–7.
12. Garrett G. “Present on Admission, Where We Are Now.” Journal of AHIMA. 2009;80(7):22–6.
13. Goldfarb MG, Coffey RM. “Change in the Medicare Case-Mix Index in the 1980s and the Effect of the Prospective Payment System.” Health Services Research. 1992;27(3):385–415.
14. Goldman LE, Henderson S, Dohan DP, Talavera JA, Dudley RA. “Public Reporting and Pay-for-Performance: Safety-Net Hospital Executives' Concerns and Policy Suggestions.” Inquiry. 2007;44(2):137–45. doi:10.5034/inquiryjrnl_44.2.137
15. Haas J, Luft H, Romano P, Dean M, Hung Y, Bacchetti P. Report for the California Hospital Outcomes Project Community-Acquired Pneumonia, 1996; Model Development and Validation. Sacramento, CA: Office of Statewide Health Planning and Development; 2000.
16. Hennessy DA, Quan H, Faris PD, Beck CA. “Do Coder Characteristics Influence Validity of ICD-10 Hospital Discharge Data?” BMC Health Services Research. 2010;10:99. doi:10.1186/1472-6963-10-99
17. Hsia DC, Ahern CA, Ritchie BP, Moscoe LM, Krushat WM. “Medicare Reimbursement Accuracy under the Prospective Payment System, 1985 to 1988.” Journal of the American Medical Association. 1992;268(7):896–9.
18. Hughes JS, Averill RF, Goldfield NI, Gay JC, Muldoon J, McCullough E, Xiang J. “Identifying Potentially Preventable Complications Using a Present on Admission Indicator.” Health Care Financing Review. 2006;27(3):63–82.
19. Iezzoni LI. “Finally Present on Admission But Needs Attention.” Medical Care. 2007;45(4):280–2. doi:10.1097/01.mlr.0000259078.54902.fe
20. Iezzoni LI, Burnside S, Sickles L, Moskowitz MA, Sawitz E, Levine PA. “Coding of Acute Myocardial Infarction. Clinical and Policy Implications.” Annals of Internal Medicine. 1988;109(9):745–51. doi:10.7326/0003-4819-109-9-745
21. Iezzoni LI, Foley SM, Daley J, Hughes J, Fisher ES, Heeren T. “Comorbidities, Complications, and Coding Bias. Does the Number of Diagnosis Codes Matter in Predicting In-Hospital Mortality?” Journal of the American Medical Association. 1992;267(16):2197–203. doi:10.1001/jama.267.16.2197
22. Iezzoni LI, Ash AS, Shwartz M, Daley J, Hughes JS, Mackiernan YD. “Judging Hospitals by Severity-Adjusted Mortality Rates: The Influence of the Severity-Adjustment Method.” American Journal of Public Health. 1996;86(10):1379–87. doi:10.2105/ajph.86.10.1379
23. Krumholz HM, Wang Y, Parent EM, Mockalis J, Petrillo M, Radford MJ. “Quality of Care for Elderly Patients Hospitalized with Heart Failure.” Archives of Internal Medicine. 1997;157(19):2242–7.
24. Lorence D. “Regional Variation in Medical Classification Agreement: Benchmarking the Coding Gap.” Journal of Medical Systems. 2003;27(5):435–43. doi:10.1023/a:1025607805588
25. Lorence DP, Ibrahim IA. “Disparity in Coding Concordance: Do Physicians and Coders Agree?” Journal of Health Care Finance. 2003a;29(4):43–53.
26. Lorence DP, Ibrahim IA. “Benchmarking Variation in Coding Accuracy across the United States.” Journal of Health Care Finance. 2003b;29(4):29–42.
27. McCarthy EP, Iezzoni LI, Davis RB, Palmer RH, Cahalane M, Hamel MB, Mukamal K, Phillips RS, Davies DT Jr. “Does Clinical Evidence Support ICD-9-CM Diagnosis Coding of Complications?” Medical Care. 2000;38(8):868–76. doi:10.1097/00005650-200008000-00010
28. McNair PD, Luft HS, Bindman AB. “Medicare's Policy Not to Pay for Treating Hospital-Acquired Conditions: The Impact.” Health Affairs (Millwood). 2009;28(5):1485–93. doi:10.1377/hlthaff.28.5.1485
29. Meddings J, Saint S, McMahon LF Jr. “Hospital-Acquired Catheter-Associated Urinary Tract Infection: Documentation and Coding Issues May Reduce Financial Impact of Medicare's New Payment Policy.” Infection Control and Hospital Epidemiology. 2010;31(6):627–33. doi:10.1086/652523
30. Miller K. 2010. “Blue Cross and Blue Shield Announces System-Wide Payment Policy for ‘Never Events’” [accessed on March 22, 2010]. Available at http://www.bcbs.com/news/bcbsa/bcbs-announces-system-wide-payment-policy-for-never-events.html
31. Mookherjee S, Vidyarthi AR, Ranji SR, Maselli J, Wachter RM, Baron RB. “Potential Unintended Consequences Due to Medicare's No Pay for Errors Rule? A Randomized Controlled Trial of an Educational Intervention with Internal Medicine Residents.” Journal of General Internal Medicine. 2010;25(10):1097–101. doi:10.1007/s11606-010-1395-9
32. Moscucci M, Kline-Rogers E, Share D, O'Donnell M, Maxwell-Eward A, Meengs WL, Kraft P, DeFranco AC, Chambers JL, Patel K, McGinnity JG, Eagle KA. “Simple Bedside Additive Tool for Prediction of In-Hospital Mortality after Percutaneous Coronary Interventions.” Circulation. 2001;104(3):263–8. doi:10.1161/01.cir.104.3.263
33. Office of Statewide Health Planning and Development. 2008. “Community-Acquired Pneumonia: Hospital Outcomes in California, 2003–2005” [accessed on July 18, 2011]. Available at http://oshpd.ca.gov/HID/Products/PatDischargeData/ResearchReports/OutcomeRpts/CAP/
34. Patient Protection and Affordable Care Act. 2009. “H.R. 3590-906” [accessed on July 18, 2011]. Available at http://democrats.senate.gov/pdfs/reform/patient-protection-affordable-care-act-as-passed.pdf
35. Pine M, Fry DE, Jones B, Meimban R. “Screening Algorithms to Assess the Accuracy of Present-on-Admission Coding.” Perspectives in Health Information Management. 2009;6:2.
36. Preyra C. “Coding Response to a Case-Mix Measurement System Based on Multiple Diagnoses.” Health Services Research. 2004;39(4 Pt 1):1027–45. doi:10.1111/j.1475-6773.2004.00270.x
37. Quan H, Parsons GA, Ghali WA. “Assessing Accuracy of Diagnosis-Type Indicators for Flagging Complications in Administrative Data.” Journal of Clinical Epidemiology. 2004;57(4):366–72. doi:10.1016/j.jclinepi.2003.01.002
38. Rangachari P. “Coding for Quality Measurement: The Relationship between Hospital Structural Characteristics and Coding Accuracy from the Perspective of Quality Measurement.” Perspectives in Health Information Management. 2007;4:3.
39. Romano PS. “Asking Too Much of Administrative Data?” Journal of the American College of Surgeons. 2003;196(2):337–8. doi:10.1016/S1072-7515(02)01761-1
40. Romano PS, Luft H. Report on Heart Attack Outcomes in California 1994–1996, Volume 3: Detailed Statistical Results. Sacramento, CA: California Office of Statewide Health Planning and Development; 1996.
41. Romano PS, Schembri ME, Rainwater JA. “Can Administrative Data Be Used to Ascertain Clinically Significant Postoperative Complications?” American Journal of Medical Quality. 2002;17(4):145–54. doi:10.1177/106286060201700404
42. Romano PS, Chan BK, Schembri ME, Rainwater JA. “Can Administrative Data Be Used to Compare Postoperative Complication Rates across Hospitals?” Medical Care. 2002;40(10):856–67. doi:10.1097/00005650-200210000-00004
43. Santos S, Murphy G, Baxter K, Robinson KM. “Organisational Factors Affecting the Quality of Hospital Clinical Coding.” HIM Journal. 2008;37(1):25–37. doi:10.1177/183335830803700103
44. Smith GL, Lichtman JH, Bracken MB, Shlipak MG, Phillips CO, DiCapua P, Krumholz HM. “Renal Impairment and Outcomes in Heart Failure: Systematic Review and Meta-Analysis.” Journal of the American College of Cardiology. 2006;47(10):1987–96. doi:10.1016/j.jacc.2005.11.084
45. Steinbusch PJ, Oostenbrink JB, Zuurbier JJ, Schaepkens FJ. “The Risk of Upcoding in Casemix Systems: A Comparative Study.” Health Policy. 2007;81(2–3):289–99. doi:10.1016/j.healthpol.2006.06.002
46. Stephens MB, Williams PM. “Teaching Principles of Practice Management and Electronic Medical Record Clinical Documentation to Third-Year Medical Students.” Journal of Medical Practice Management. 2010;25(4):222–5.
47. Stukenborg GJ, Kilbridge KL, Wagner DP, Harrell FE Jr, Oliver MN, Lyman JA, Einbinder JS, Connors AF Jr. “Present-at-Admission Diagnoses Improve Mortality Risk Adjustment and Allow More Accurate Assessment of the Relationship between Volume of Lung Cancer Operations and Mortality Risk.” Surgery. 2005;138(3):498–507. doi:10.1016/j.surg.2005.04.004
48. Stukenborg GJ, Wagner DP, Harrell FE Jr, Oliver MN, Heim SW, Price AL, Han CK, Wolf AM, Connors AF Jr. “Present-at-Admission Diagnoses Improved Mortality Risk Adjustment among Acute Myocardial Infarction Patients.” Journal of Clinical Epidemiology. 2007;60(2):142–54. doi:10.1016/j.jclinepi.2006.05.014


