Health Services Research. 2007 Feb;42(1 Pt 1):45–62. doi: 10.1111/j.1475-6773.2006.00633.x

Assigning Ambulatory Patients and Their Physicians to Hospitals: A Method for Obtaining Population-Based Provider Performance Measurements

Julie P W Bynum, Enrique Bernal-Delgado, Daniel Gottlieb, Elliott Fisher
PMCID: PMC1955742  PMID: 17355581

Abstract

Objective

To develop a method for assigning Medicare enrollees and the physicians who serve them to individual hospitals with adequate validity to allow population-based assessments of provider specific costs and quality of care.

Data Sources/Study Setting

The study populations consist of a 20 percent sample of Medicare fee-for-service enrollees and all physicians submitting claims for Medicare services from 1998 to 2000. Data were obtained from Medicare claims and enrollment files, Medicare's MPIER file, and the American Hospital Association Annual Survey.

Study Design

Cross-sectional analysis of the characteristics of hospitals, their extended medical staffs (EMSs) and the utilization patterns of their assigned Medicare enrollees.

Data Collection Methods

Medicare enrollees were assigned to their predominant ambulatory physician and then to the hospital where that physician provided inpatient services or where a plurality of that physician's patient panel had medical admissions. Each beneficiary was linked to a physician and a hospital regardless of whether the patient was hospitalized, creating Ambulatory Provider Specific Cohorts (APSCs).

Principal Findings

Ninety-six percent of eligible Medicare enrollees who had an index physician visit in 1998 were assigned to a specific provider. Two-thirds of the medical admissions during a 2-year period occurred at the assigned hospital, and two-thirds of evaluation and management services were billed by the assigned hospital's EMS. The empirically derived EMS for hospitals had reasonable face and discriminant validity in terms of the number and type of physicians practicing at hospitals of differing size and type. Estimates of risk-adjusted costs across physician groups in year one are highly predictive of costs in a subsequent year (r = 0.87, p < .0001; weighted κ = 0.65, p < .0001).

Conclusions

Medicare claims data can be used to assign virtually all Medicare enrollees to empirically defined care systems comprising hospitals and the physicians who practice at them. Studies of patterns of practice, costs, and outcomes of care experienced by these APSCs could complement other methods of monitoring provider performance.

Keywords: Performance measures, hospital, small area analysis


Concern about the uneven quality and rising costs of health care has led to growing interest in the development and adoption of performance measures for health care providers. The underlying theory is that performance measurement can foster improvement not only by helping providers identify opportunities to improve their care, but also by allowing consumers, purchasers, or regulators to select or otherwise reward higher performing providers (Berwick, James, and Coye 2003). Private sector efforts to link provider performance to payment have been underway for several years, and the Centers for Medicare and Medicaid Services has recently initiated several public reporting and pay for performance demonstration projects (Rosenthal et al. 2004).

Current performance measurement initiatives, however, have focused almost entirely upon a narrow set of quality indicators that reflect the provision of specific evidence-based treatments, such as proper testing for diabetics (Rosenthal et al. 2004) or the timely inpatient administration of antibiotics (U.S. Department of Health and Human Services 2005). Much less attention has been devoted to the development of provider-specific population-based measures of rates of surgical procedures or overall health care utilization rates and costs. Such measures, widely used in traditional small-area analysis, have provided powerful insights at the regional and community level into potential overuse of surgical procedures (Harris and Lohr 2002), racial disparities in treatment (Skinner et al. 2003; Baicker et al. 2004), and the health implications of the dramatic differences in spending observed across U.S. regions (Fisher et al. 2003a, b). The major limitation of traditional small area analysis has been the difficulty of precisely defining the provider groups who are responsible for the observed patterns of care, either because multiple physician groups and hospitals are located within a given hospital service area or because a high proportion of patients seek care from providers outside the local market (Roos 1993; Dartmouth Medical School 1998).

Recent efforts to apply the basic concepts of small area analysis to provider-specific performance measurement have focused on cohorts of hospitalized patients (Fisher et al. 1994, 2003b, 2004; Wennberg et al. 2004). Limitations of these approaches include concerns about the generalizability of rates of elective surgery measured in seriously ill populations and the possibility that, as more management shifts to the outpatient setting, regional variation in hospitalization rates could bias performance measures. To address these concerns, we created Ambulatory Provider Specific Cohorts (APSCs) of enrollees who receive care from specific hospitals and their affiliated medical staffs by assigning Medicare enrollees to their predominant ambulatory care physician and, through their physicians, to the hospital where they would most likely be admitted should the need arise.

Conceptually, one may ask why the hospital and its formally or informally affiliated medical staff should be treated as a single unit for performance measurement. We believe there are at least four reasons to consider the providers at a hospital as one among several potential levels of the "system" suitable for population-based longitudinal evaluation. First, hospital services account for the largest single component of health care spending. Second, any effort to reduce overall hospital use will require engaging hospitals and their extended medical staffs (EMSs) in aligning resource inputs (including hospital and ICU beds) with the size and needs of the specific population they serve. Third, marked geographic variations in rates of surgery have called into question the quality of decision making for many major procedures; population-based rates can be used to compare and monitor utilization across specific providers (Weinstein et al. 2004). Finally, efforts to address the well-recognized deficiencies in the quality and efficiency of care will need to ensure that physicians can take advantage of advances in informatics, quality improvement, and care coordination that may otherwise be beyond the reach of those in individual or small group practices. The hospital could play an important role by facilitating the "virtual" integration of its affiliated staff through shared electronic records and care protocols.

In this paper, we describe the assignment method and present findings that support the validity of the enrollee and physician assignments and test the hypothesis that estimates of risk-adjusted costs across provider groups in one year will be highly predictive of costs in subsequent years.

METHODS

Cohort Development

Data Source

The data sources for the current study were Medicare enrollment and claims data for 1998–2000, the Medicare Physician Identification and Eligibility Registry (MPIER) file, and the American Hospital Association Annual Survey. We used 100 percent MedPAR, Inpatient and Outpatient files, and a 20 percent random sample (based upon enrollees) of the Carrier File. Appendix A describes the files and how they were used in this study.

Inclusion and Exclusion Criteria

Patients were eligible if they had an index visit (clinic, nursing home, or home visit) in 1998 and were at least 65 years old at mid-year. Patients were excluded if enrolled in an HMO or not Part B eligible at any time in 1998–2000.

Providers who bill Medicare have uniquely assigned UPINs (Unique Physician Identification Numbers) that are coded on claims. We included providers who are physicians (MD or DO) with a valid UPIN, an identifiable specialty, and any billing in the Carrier File in 1998–2000.

Acute care hospitals were included if they were located in the United States (i.e., excluding U.S. territories), were in the AHA file, and did not close during the study period. Hospitals with an AHA designation as a specialty hospital (e.g., ob/gyn or children's hospitals) were not eligible to have patients assigned because they are unlikely to be longitudinal and comprehensive sources of care, but they were included in the analysis of hospital utilization. We did, however, assign physicians to these hospitals. Exclusions were made after patient and physician assignments. Any patients or physicians who were assigned to an excluded hospital were excluded from further analysis (1.6 percent of patients, 2.4 percent of physicians, 4 percent of hospitals).

Overview of Assignment Methods

Each beneficiary was linked to his/her predominant ambulatory physician. Each physician was linked to the specific hospital where they did most of their inpatient work or where their patient panel was admitted. Each patient was then assigned to his/her physician's hospital. The result is that each beneficiary is linked to a physician and a hospital regardless of whether the patient required hospitalization during the study period. This linkage identifies a population-based cohort of Medicare enrollees served by a hospital and its affiliated physicians, which we refer to as an APSC.

Development of Physician–Patient Panels

Patients were assigned to their predominant physician provider of ambulatory care. Our goal was to identify a longitudinal physician who was most likely to be influencing care decisions. The predominant provider was defined as the generalist (internist, geriatrician, family, or general practitioner) or medical subspecialist with whom the patient had the most ambulatory visits during the 2 years after an index visit to any provider in 1998. If no generalist or medical subspecialist was involved in the care, assignment was allowed to other physician types (e.g., dermatologists or surgeons), except pathologists, radiologists, or psychiatrists. If the number of visits to physicians of equal priority was tied, the physician with the greatest time span between the first and last visits was chosen, to favor longitudinal patient–physician relationships; if there was only one visit to each, the physician with the most recent visit was chosen.
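
To make the assignment rules concrete, the sketch below shows one way the visit-count and tie-breaking hierarchy could be implemented. It is an illustrative Python fragment, not the authors' code; the VisitSummary structure, field names, and the two-level specialty priority are our assumptions about how the claims might be summarized.

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class VisitSummary:
    """Hypothetical per patient-physician summary built from ambulatory claims."""
    physician_upin: str
    specialty_priority: int  # assumed: 1 = generalist/medical subspecialist, 2 = other eligible type
    n_visits: int
    first_visit: date
    last_visit: date


def assign_predominant_physician(panels: list[VisitSummary]) -> str | None:
    """Pick one patient's predominant ambulatory physician.

    Mirrors the rules described in the text: prefer the highest-priority
    specialty group present, then the most visits, then the longest span
    between first and last visit, then the most recent visit.
    """
    if not panels:
        return None
    top_priority = min(p.specialty_priority for p in panels)
    candidates = [p for p in panels if p.specialty_priority == top_priority]
    candidates.sort(
        key=lambda p: (
            p.n_visits,                           # most visits
            (p.last_visit - p.first_visit).days,  # longest first-to-last span
            p.last_visit,                         # most recent visit
        ),
        reverse=True,
    )
    return candidates[0].physician_upin
```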

Development of Hospital EMSs

Physicians were empirically assigned to the hospital at which they worked based upon the number of patients for whom they submitted Part B claims for visits or procedures during an inpatient stay and the number of patients for whom they served as the attending or operating physician. A patient's hospitalization counted only once per physician per hospital. If a tie occurred between two hospitals, the physician was assigned to the hospital with the longest time between first and last admission to favor the longitudinal place of practice. When only one admission occurred at each of the tied hospitals, the hospital with the most recent admission was chosen.

Some physicians do not submit any claims for inpatient services, either for specific services (Part B) or as the attending or operating physician (Inpatient File). We assigned these physicians to a hospital based upon where their patients received hospital care, choosing the hospital where a plurality of the physician's assigned patient panel (as determined above) had medical admissions. Ties were broken by the longest time between admissions or, failing that, the most recent admission, as above. Some physicians could not be assigned using either of these two methods, because their panel was so small that no hospitalizations occurred or because they were subspecialists without assigned patients. These physicians were assigned to the hospital where the plurality of all patients who had visits with the physician had medical admissions.
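
The following is a compact sketch of the three-step fallback just described, under the assumption that the per-hospital tallies (direct inpatient billing, assigned-panel admissions, and admissions of any billed patient) have already been computed from the claims files. The function name, inputs, and tie-breaker callback are illustrative rather than a reproduction of the original programs.

```python
from collections import Counter
from typing import Callable, Optional


def assign_physician_to_hospital(
    inpatient_billing: Counter,       # hospital_id -> patients with the physician's inpatient Part B claims
    panel_admissions: Counter,        # hospital_id -> medical admissions among the physician's assigned panel
    any_patient_admissions: Counter,  # hospital_id -> medical admissions among any patient the physician billed
    tie_breaker: Optional[Callable[[list], str]] = None,  # e.g., longest span between first and last admission
) -> Optional[str]:
    """Apply the cascade described in the text: use the first non-empty tally,
    take the hospital with the plurality, and defer ties to the tie-breaker."""
    for tally in (inpatient_billing, panel_admissions, any_patient_admissions):
        if not tally:
            continue
        ranked = tally.most_common()
        best_count = ranked[0][1]
        tied = [hospital for hospital, count in ranked if count == best_count]
        if len(tied) == 1 or tie_breaker is None:
            return tied[0]
        return tie_breaker(tied)
    return None
```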

All physicians assigned to a given hospital were defined as the EMS for that hospital. Each physician was assigned to only one hospital, creating a unique set of affiliated physicians for each hospital. Approximately a third of physicians worked at more than one hospital, but, as in prior studies, we found that even those who worked at more than one hospital had most of their admissions at their predominant hospital (Miller, Welch, and Welch 1996). By our definition, the EMS includes physicians with hospital privileges as well as physicians without formal ties for whom that hospital is the predominant site of inpatient care for their patients.

Validation of Provider Specific Cohorts

Face Validity Methods

We report the characteristics of the patients, the EMS, and the hospital to determine whether the empirical approach generates the expected distributions of characteristics among hospitals of differing size, defined by the number of Medicare admissions. To characterize the degree to which the hospital serves its assigned cohort, we also report the percent of medical and surgical admissions for a hospital-specific cohort that occur at the assigned hospital and the percent of physician billings generated by the assigned hospital's EMS.
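
As a point of clarity, the loyalty measure reduces to a simple percentage; the snippet below is a minimal illustration under our own naming, and the same calculation applies whether the events are medical admissions, surgical admissions, or E&M claims billed by the EMS.

```python
def loyalty_percent(event_hospitals: list[str], assigned_hospital: str) -> float:
    """Percent of a cohort's events (e.g., medical admissions) occurring at the assigned hospital."""
    if not event_hospitals:
        return float("nan")
    at_assigned = sum(1 for h in event_hospitals if h == assigned_hospital)
    return 100.0 * at_assigned / len(event_hospitals)


# Example: 8 of 12 admissions at the assigned hospital -> 66.7 percent loyalty.
print(round(loyalty_percent(["H1"] * 8 + ["H2"] * 4, "H1"), 1))
```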

Discriminant Validity Methods

Hospitals in a single urban area often serve different functions and therefore have different characteristics. We tested whether our cohort method had adequate discriminant validity to differentiate four hospitals in Boston through the characteristics of their patients and EMS. The four hospitals have differing self-described missions. Massachusetts General Hospital (MGH) is a large tertiary care hospital that "offers … virtually every specialty and subspecialty of medicine and surgery." Boston Medical Center (BMC) is historically a public hospital and describes itself as the "largest safety net hospital in New England." Faulkner Hospital is a "nonprofit community teaching hospital," and New England Baptist Hospital (NEBH) is an "adult medical/surgical hospital with nationally recognized expertise in orthopedic care." We expected MGH and BMC to have the most physicians, with MGH having the greatest breadth of specialists and BMC serving a relatively poorer patient population. We expected Faulkner Hospital to be staffed mostly by generalists, while NEBH should have a greater concentration of orthopedic surgeons and anesthesiologists.

Predictive Ability Methods

To test the predictive validity of the APSCs, we evaluated whether the difference between observed and expected costs for a provider group was predictive of future cost differences. To create a price-standardized measure, total costs were measured using total RVU inputs from the Carrier File plus DRG inputs from MedPAR. Expected costs were modeled using age, gender, race, Medicaid status, illness level (using Iezzoni comorbidities), and zip code level education and income. A standardized cost difference was calculated by dividing observed minus expected costs by the standard error, to reduce the influence of small sample sizes and create less biased estimates of adjusted costs (Thomas, Grazier, and Ward 2004). APSCs with fewer than 50 people were excluded from this cost analysis because small numbers of enrollees lead to unstable cost estimates. Enrollees were excluded from the calculation if they died before January 15 of the evaluation year, were enrolled in managed care, or were not fully entitled to Medicare Parts A and B during the year of analysis.
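
In symbols, one plausible reading of the standardized cost difference for provider group $j$ (our notation; the paper does not give an explicit formula) is:

```latex
z_j = \frac{O_j - E_j}{\operatorname{SE}(O_j - E_j)}
```

where $O_j$ is the observed price-standardized cost (DRG plus RVU inputs) for the patients assigned to group $j$ and $E_j$ is the cost expected from the risk-adjustment model.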

The standardized cost difference was calculated in half the sample for calendar year 1999 (within the assignment period) and in the second half of the sample for calendar year 2001 (1 year after assignment). The standardized cost difference approximates a t-statistic and can therefore be interpreted as the number of standard deviations away from the mean difference between observed and expected costs. We tested the correlation of the standardized cost difference in the two time periods using a Pearson coefficient. We ranked APSCs by the standardized difference and categorized them into quintiles, from highly negative to highly positive cost difference. We then calculated the agreement in quintile of cost difference between the assignment period and the follow-up period using a weighted κ. Finally, we calculated directly adjusted standardized costs in each APSC in 1999 by setting each covariate to its national mean value, grouped the APSCs into quintiles of cost, and evaluated predicted costs in 2001 across these quintiles.
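
Both agreement statistics can be reproduced with standard libraries. The following is a hedged sketch on simulated data: the arrays, sample size, and the choice of linearly weighted κ are our assumptions, since the paper does not specify the weighting scheme.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score

# Simulated standardized cost differences, one value per APSC, for the
# assignment period (1999) and the follow-up period (2001).
rng = np.random.default_rng(0)
z_1999 = rng.normal(size=500)
z_2001 = 0.9 * z_1999 + rng.normal(scale=0.4, size=500)

# Correlation of the standardized cost differences across the two periods.
r, p_value = pearsonr(z_1999, z_2001)

# Rank each period into quintiles (0-4) and measure agreement with a weighted kappa.
q_1999 = np.digitize(z_1999, np.quantile(z_1999, [0.2, 0.4, 0.6, 0.8]))
q_2001 = np.digitize(z_2001, np.quantile(z_2001, [0.2, 0.4, 0.6, 0.8]))
kappa = cohen_kappa_score(q_1999, q_2001, weights="linear")

print(f"Pearson r = {r:.2f} (p = {p_value:.2g}), weighted kappa = {kappa:.2f}")
```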

RESULTS

Assignment Characteristics

APSCs of Medicare enrollees older than 65 were created for 4,366 hospitals within the United States. The linkage of a beneficiary to a physician who is in turn linked to a hospital defined a population-based cohort of Medicare enrollees served by a hospital and its EMS. The APSCs represent 96 percent (N = 4.1 million) of eligible enrollees older than 65 who had an ambulatory visit in 1998. The 4 percent of enrollees excluded from analysis had a predominant ambulatory physician who could not be assigned to a hospital or who was assigned to an ineligible hospital. The predominant ambulatory physician was a generalist for 77 percent of enrollees, a medical specialist for 21 percent, and another type of physician for the remaining 2 percent.

Ninety-four percent of physicians with a known specialty who billed Medicare during the study period were assigned to a specific hospital's EMS. We were able to assign 58 percent of eligible physicians through their hospital billing and 14 percent through the admissions of their patient panel. The remaining 34 percent of physicians had very few patients assigned to them (1 percent of the total enrollee sample) and had no direct hospital billing. By tracking the admissions of any patient for whom these physicians billed, we were able to assign all but 6 percent of physicians to a hospital. Overall, through the linkage of patient to physician and to hospital, we find that 99 percent of cohort patients are assigned to only 66 percent of physicians, because the other 34 percent of practicing physicians either have small Medicare practices with no admissions or are in a specialty that makes it unlikely for them to be the predominant provider (see Appendices B, C, and D for details of assignment and sample).

Face Validity Results

At the hospital level, the characteristics of the ambulatory cohorts were similar across different-sized hospitals (Table 1). Although a higher proportion of patients served by large hospitals were nonwhite, age, gender, and the distributions of chronic conditions were similar. In contrast, the type and number of affiliated physicians differed substantially by hospital size, as expected. There was a "dose–response" relationship between hospital size and both the overall number of physicians and the number and types of specialists assigned to the hospital (Table 1). The face validity of the designation of affiliated physicians is further strengthened by the strong correlation between the number of physicians and the number of beds (r = 0.81) and between the numbers of surgeons and anesthesiologists (r = 0.79).

Table 1.

Characteristics of Hospitals, Their Extended Medical Staffs, and Assigned Medicare Enrollees, Stratified by Hospital Size*

Large (>5,000 admits) Medium (750–5,000 admits) Small (<750 admits)
Number of hospitals 570 2,507 1,262
Number of patients in sample 1.63 million 2.27 million 282,817
Number of patients assigned per hospital 2,783 (1,975, 3,300) 890 (471, 1,221) 188 (85, 274)
Hospital characteristics from AHA
 Member of COTH (%) 28.8 4.3 0.2
 Number of staffed beds 463 (310, 563) 170 (88, 221) 66 (30, 76)
 Nurses per staffed bed 1.3 (1.0, 1.6) 1.1 (0.8, 1.3) 0.7 (0.4, 0.9)
 Hospitals with residents (%) 55.3 28.8 11.5
 Residents per 100 beds (only hospitals with any residents) 24.1 (6.1, 30.6) 11.9 (1.9, 10.5) 8.7 (1.5, 5.0)
Location (% of hospitals)
 Major metropolitan area 94 49 15
 Suburban area 0.5 6 8
 Large town 5 23 8
 Small town or rural 0.5 22 69
Affiliated medical staff
 Number of MD/DO assigned 354 (217, 437) 101 (40, 137) 15 (6, 19)
MD/DO assigned per 100 beds
 Total 79 (57, 94) 62 (38, 79) 31 (12, 43)
 Generalists 23.6 (16.4, 28.4) 24.3 (14.9, 30.6) 18.3 (7, 25)
 General practice 2.2 (0.8, 2.9) 2.7 (0.8, 3.7) 2.3 (0, 3.3)
 Family practice 8.5 (4.3, 11.2) 11.5 (5.2, 15) 11.1 (3.4, 15.6)
 Internal medicine 12.7 (7.2, 16.1) 10.0 (5.1, 12.9) 4.8 (0, 7.1)
 Medical specialists 19.1 (13.2, 23.5) 11.7 (5.3, 16.1) 4.1 (0, 5.7)
 Cardiology 3.5 (2.2, 4.4) 1.7 (0, 2.6) 0.3 (0, 0)
 Gastroenterology 1.6 (1.0, 2.2) 0.9 (0, 1.4) 0.1 (0, 0)
 Pulmonary 1.2 (0.7, 1.6) 0.6 (0, 1.0) <0.1 (0, 0)
 Rheumatology 0.6 (0.2, 0.8) 0.2 (0, 0.3) <0.1 (0, 0)
 Oncology 1.3 (0.7, 1.6) 0.6 (0, 0.8) 0.1 (0, 0)
 Surgical specialists 20.6 (14.6, 25) 14.9 (8.3, 19.6) 5.7 (0, 7.1)
 General surgery 3.3 (2.2, 4.0) 3.3 (1.9, 4.2) 1.9 (0, 2.9)
 Thoracic surgery 0.6 (0.2, 0.8) 0.2 (0, 0.2) <0.1 (0, 0)
 Neurosurgery 0.8 (0.5, 1.1) 0.3 (0, 0.5) <0.1 (0, 0)
Other
 Radiology 3.9 (2.7, 4.6) 2.7 (1.4, 3.8) 0.8 (0, 0.9)
 Pathology 1.8 (1.1, 2.2) 1.2 (0, 1.8) 0.2 (0, 0)
 Anesthesia 4.8 (3.2, 5.9) 3.1 (1.2, 4.4) 0.7 (0, 0)
Patient characteristics
Demographics
 Age (mean) 75.4 (74.9, 75.9) 75.8 (75.2, 76.4) 76.4 (75.5, 77.3)
 Female (%) 61 (60, 63) 62 (60, 65) 62 (58, 66)
 Nonwhite race (%) 8 (4, 16) 5 (2, 17) 2 (0, 12)
Presence of specific conditions
 Diabetes mellitus (%) 21 (19, 24) 22 (19, 25) 20 (17, 25)
 Congestive heart failure (%) 15 (13, 17) 16 (14, 19) 17 (14, 22)
 Coronary artery disease (%) 14 (12, 17) 13 (11, 17) 11 (8, 14)
 Chronic obstructive pulmonary disease (%) 16 (15, 19) 18 (15, 21) 17 (13, 21)
 Peripheral vascular disease (%) 8 (7, 9) 8 (6, 10) 7 (4, 9)
 Dementia (%) 8 (7, 9) 9 (7, 11) 10 (6, 12)
 Cancer, solid tumors (%) 17 (15, 19) 14 (12, 16) 11 (9, 14)
Number of chronic conditions
 None (%) 41 (39, 44) 41 (37, 45) 44 (39, 49)
 1 only (%) 31 (30, 32) 30 (28, 32) 28 (27, 32)
 2 only (%) 15 (14, 16) 15 (14, 16) 15 (12, 17)
 3 or more (%) 13 (11, 15) 13 (10, 16) 12 (8, 15)

Mean and 25th and 75th percentile unless otherwise specified.

*

Hospital size defined by the annual number of Medicare admissions.

Based on RUCA codes.

COTH, Council of Teaching Hospitals.

Approximately a third of physicians who have hospital billing do so at more than one hospital. For medical and surgical subspecialists the rates are higher than for generalist physicians (43 versus 31 percent). Among the physicians who bill at two or more hospitals, 72 percent of their hospital work is at their primary hospital and 95 percent at the primary or secondary hospital. On average among all physicians who have any hospital billing, 90 percent of their inpatient billing occurs at their primary hospital.

Assigned patients receive the major fraction of their care, both inpatient and physician services, from their assigned hospital and its EMS, especially at large and medium hospitals (Table 2). On average, two-thirds of medical admissions and two-thirds of physician billing for an assigned cohort are generated by the assigned hospital or its EMS. This fraction of care increases to greater than 75 percent if we limit the analysis to assigned people who also live within the hospital's geographic service area. The fraction of care delivered at the assigned hospital, which we call loyalty, is much lower for surgical admissions at small hospitals than at large and medium hospitals, in all likelihood because of the narrower spectrum of services offered at small hospitals. In the years after the assignment period (3 and 4 years after the index visit), medical admission loyalty remains high but declines on average by 10 percent per year, with less decline for surgical admission and physician billing loyalty. The analyses of subsequent years include all initially assigned patients, and thus do not account for patient relocation or choice of a different physician.

Table 2.

Measures of Patient Loyalty to Assigned Hospital

Large (>5,000 admits)* Medium (750–5,000 admits)* Small (<750 admits)*
Loyalty to primary hospital
 Percent of medical admissions at primary hospital 67 (59, 75) 69 (60, 79) 66 (58, 78)
 Percent of surgical admissions at primary hospital 64 (56, 73) 49 (39, 60) 31 (19, 42)
 Percent of MD E&M claims from affiliated staff of primary hospital 71 (64, 78) 67 (60, 74) 57 (50, 66)

Loyalty is defined as the percent of all admissions occurring at the hospital to which the patient was assigned or the percent of E&M claims provided by that hospital's extended medical staff. Analysis restricted to hospitals with at least three admissions or 10 people assigned. Mean and 25th and 75th percentile unless otherwise specified.

*

Hospital size defined by the annual number of Medicare admissions.

Discriminant Validity Results

As a further test of validity, we compared the populations and physician staffs of four hospitals in Boston: a tertiary care hospital (MGH), a public hospital (BMC), a community teaching hospital (Faulkner Hospital), and a community hospital with an orthopedic focus (NEBH) (Table 3). MGH had the largest number of physicians, spanning 41 specialties, 25 percent more specialties than the next largest hospital we studied. Its affiliated physicians included surgical subspecialists such as thoracic surgeons, and, likely because of this breadth of services, the cohort had high surgical loyalty. The types of affiliated physicians at BMC were similar to those of a community hospital, but there were many more physicians. Also as expected, patients assigned to BMC were more likely to be poor, as shown by the high percentage of Medicaid patients (37 versus 12–16 percent). Of note, the cohort's secondary hospital is the closely affiliated Boston University Hospital, which officially merged with BMC shortly after our study period. Faulkner Hospital is smaller, with fewer total physicians but a greater proportion from primary care specialties. This is in contrast to NEBH, which is the same size as Faulkner but has a much greater concentration of orthopedic surgeons and anesthesiologists.

Table 3.

Characteristics of Four Boston Hospitals as Defined by the APSC Method Unless Otherwise Noted

Hospital Name

MGH BMC Faulkner N.E. Baptist
Hospital characteristics
 Number of staffed beds (from AHA) 848 432 127 98
 Number of Medicare admissions per year 13,580 5,947 2,464 2,430
 Number of patients assigned 3,339 1,227 1,065 641
Characteristics of assigned patient cohort
 Two or more chronic conditions (%) 31 32 34 35
 Medicaid eligible (%) 12 37 16 15
 Lives in HSA of hospital (%) 43 70 77 38
Affiliated physician characteristics
 Total number MDs assigned 1,268 611 169 172
 Number of different specialties 41 33 29 28
Number of physicians assigned by specialty (% of staff)
 Primary care 329 (26) 253 (41) 69 (41) 37 (22)
 Family practice 12 (<1) 24 (4) 4 (2) 4 (2)
 Internal medicine 308 (24) 214 (35) 63 (37) 32 (19)
 Medical specialists 290 (23) 149 (24) 38 (22) 29 (17)
 Cardiology 52 (4) 26 (4) 7 (4) 8 (5)
 Surgical specialists 249 (20) 103 (17) 35 (21) 65 (38)
 General 42 (3) 20 (3) 8 (5) 6 (3)
 Thoracic 10 (<1) 4 (<1) 1 (<1) 1 (<1)
 Orthopedic 53 (4) 11 (2) 7 (4) 42 (24)
 Anesthesiology 98 (8) 27 (4) 4 (2) 18 (10)
Loyalty measures
 Percent medical admits at primary hospital 66 62 72 58
 Secondary hospital for medical and surgical admissions Beth Israel Boston University Hospital Brig & Women's Beth Israel
 Percent medical admits at secondary hospital 3 10 5 6
 Percent surgical admits at primary hospital 79 56 52 55
 Percent surgical admits at secondary hospital 2 7 19 16
 Percent physician (E&M) claims at primary hospital 62 61 61 56

MGH, Massachusetts General Hospital; BMC, Boston Medical Center; N.E. Baptist, New England Baptist Hospital; Brig & Women's, Brigham and Women's Hospital.

Predictive Ability Results

The standardized cost difference, a measure of whether a provider group as defined by the APSCs uses more or fewer resources than expected to care for the population it serves, is highly correlated with future cost differences for its patients. Standardized cost differences ranged from −3.5 to +4.8 in 1999 and from −2.2 to +4.1 in 2001. The standardized cost differences among APSCs were highly correlated between the assignment period and the follow-up period (r = 0.87, p < .0001). This is the most conservative analysis, as it uses different patients as well as different time periods. APSCs were ranked according to their standardized cost difference and grouped into quintiles; APSCs in Quintile 1 had costs most below expected, and those in Quintile 5 had costs most above expected. There is substantial agreement between quintile designation in 1999 and 2001, as shown by a weighted κ of 0.65 (p < .0001) (Landis and Koch 1977). Figure 1 shows the distribution of costs for an average patient in 2001 within the APSCs classified by their 1999 cost quintile and shows the stepwise increase in 2001 costs across these quintiles, ranging from $4,046 to $5,910.

Figure 1.

Predicted Directly Adjusted Standardized Costs* in 2001 across Ambulatory Provider Specific Cohorts (APSC) Grouped into Quintiles of Increasing 1999 Standardized Costs.

All APSCs with more than 50 people assigned were included, using separate models and a different half of the cohort in each period to calculate estimates. (Boxes represent the 25th, 50th, and 75th percentiles; vertical whiskers extend to the 1st and 99th percentiles; circles are individual outliers.) *Standardized costs measured by total DRG + RVU inputs and adjusted for age, gender, race, county-level income, county-level education, and comorbidities identified in previous years' claims.

DISCUSSION

We used Medicare claims data to assign Medicare enrollees to physicians and to assign both enrollees and physicians to hospitals. Because earlier work had suggested that most physicians concentrate their inpatient work at a single hospital (Miller et al. 1996), we hoped to define relatively stable populations of ambulatory patients that would allow fair comparisons of performance on multiple domains across individual institutions and their providers. We were able to assign over 94 percent of physicians to a hospital and found that the resulting EMS assignments appeared to make sense: the size and characteristics of the medical staffs affiliated with hospitals corresponded to our expectations based on the size and type of hospital. We also found that virtually all Medicare enrollees could be assigned to a predominant care hospital, and that most of the medical care provided to the patient cohorts, whether defined on the basis of medical admissions or physician visits, was in fact delivered by the hospital and medical staff to which the patients were assigned. Finally, we found that the costs for a provider-specific cohort were highly predictive of future costs for the same provider group managing different patients in a later time period.

We developed the APSC approach to address the limitations of traditional geographic small area analysis and of the two previously available hospital-specific approaches. Traditional small area analysis using distinct geographic markets for health care has the advantage of comparing resource inputs, utilization, and outcomes across the entire resident populations of those markets but does not identify responsible provider groups. Current hospital-specific approaches, in which patients are assigned to a given hospital based upon their hospital utilization, either for a specific incident hospitalized illness or for seriously ill patients followed back from death, resolve the accountability problem but have other limitations (Fisher et al. 2003a; Wennberg et al. 2004). First, because the cohorts include only individuals who experience a hospitalization, variations in the underlying propensity to hospitalize patients could introduce a bias against highly efficient systems in which some patients may never be hospitalized. Second, although one can still compare utilization rates of preventive services or elective surgery across hospitals using these cohorts (Skinner et al. 2003), whether findings based upon hospitalized cohorts can be generalized to the outpatient populations served by these institutions is uncertain. The APSC method addresses some of these limitations. By grouping the physicians who use a specific hospital, we have defined a physician group that could potentially be held accountable for both quality and resource use while maintaining the important link with the hospital, the major resource input in any local care system. Our use of ambulatory rather than inpatient data allows us to assign 96 percent of Medicare fee-for-service enrollees to a predominant care physician and over 99 percent of these patients to a hospital, creating cohorts that closely approximate the general Medicare population.

Using the APSC cohorts, we can measure population-based utilization rates and address important, previously unresolved questions. For instance, we can now evaluate the end-of-life experience for the entire population served by a physician group rather than only for those who use the hospital. Similarly, we can compare rates of surgical procedures such as CABG and elective joint replacement as a function of the total population served. In both examples, including the nonhospitalized subjects allows a comparison of the propensity to use hospital-based services in addition to an evaluation of outcomes for people who ultimately receive the service. Another potential use of the APSC method is the ability to study the impact of fragmentation of care across hospital systems on outcomes and, conversely, continuity of care as an indicator of performance.

Several limitations of this approach must be considered. First, most of the analyses and assignments were based upon only a 20 percent sample of Medicare fee-for-service enrollees. Our findings, and subsequent studies comparing performance using these cohorts, therefore apply only to fee-for-service Medicare. The 20 percent sample also limits the number of patients assigned, particularly to smaller hospitals, and the small numbers at small hospitals make assignment less reliable. In addition, studies of infrequent events, such as uncommon surgical procedures, would have limited power to detect differences across hospitals. This problem could be addressed by using 100 percent of Part B claims or by combining multiple years of data, as rates of surgery tend to be relatively stable over time (Weinstein et al. 2004).

Second, loyalty of the cohort to its providers is high but not perfect, and it declines as the number of years from the initial assignment period increases. Our measure of loyalty is a highly conservative estimate because we do not use patients' hospital experience directly to determine assignment, and we have not excluded from the measure patients and physicians who moved or left the Medicare fee-for-service system. In a sensitivity analysis, we found that several factors increase measured loyalty, including averaging at the individual rather than the provider level and accounting for patient residence in the area of the hospital. We chose to report the most conservative (lowest) measure: provider-averaged loyalty without regard to place of residence. The decline over time is likely due largely to relocation of patients and physicians. To improve the accountability of the provider group in future studies, we would reassign Medicare enrollees and physician providers annually. This would allow analysis of the provider group at the hospital level with some minor changes in patient and physician composition resulting from patient relocation, patients entering or leaving fee-for-service Medicare, physician relocation, or physician entry into or exit from practice.

Our decision to assign physicians to a single hospital, and to assign physicians who do no inpatient work to a specific hospital, also deserves comment. The major reason for assigning physicians uniquely to their predominant hospital is to allow us to assign their patients to the hospital where they would most likely be admitted. Because, even among physicians who work at multiple hospitals, over 70 percent of their work on average occurs at their primary hospital, this is likely to be the best approach for patient assignment. For determining the physician staff of a hospital, a reasonable alternative would be to assign an appropriate fraction of each physician to each hospital. Requiring physicians to have an exclusive affiliation with a single hospital, however, allows us to define unique physician groups that could plausibly be held accountable for the resource use of their patients. Moreover, although some of these physicians may have no formal affiliation, most are likely to be members of the medical staff of their assigned hospital (given their inpatient billing). This medical staff organization would be a logical locus of accountability not only for inpatient costs, but also for the longitudinal costs and quality experienced by the assigned patient population (Welch and Miller 1994).

In summary, we have described a method for using individuals' Medicare billing data to assign them to the hospital and its affiliated physicians from whom they receive care. These population-based cohorts have adequate loyalty to their institutions to allow analysis of performance measures and studies of continuity and costs.

Acknowledgments

None

Disclosures: No conflicts of interest

Disclaimer: Funding support was provided by the NIA P01 AG19783 and Pfizer/American Geriatric Society Foundation for Health in Aging Junior Faculty Scholars Program for Research on Health Outcomes in Geriatrics (Bynum). Funding agencies had no role in the design, method, analysis, or preparation of this manuscript.

Supplementary Material

The following supplementary material for this article is available online:

APPENDIX A: Data Files Used in Analysis (hesr0042-0045-s1.pdf).

APPENDIX B: Summary of Sample Sizes (hesr0042-0045-s2.pdf).

APPENDIX C: Flow Diagram for Development of Patient-Level Cohort with Assignment to Predominant Ambulatory Physician (hesr0042-0045-s3.pdf).

APPENDIX D: Flow Diagram for Development of Ambulatory Hospital-Specific Cohort through Assignment of Physicians to Hospitals (hesr0042-0045-s4.pdf).

REFERENCES

1. Baicker K, Chandra A, Skinner JS, Wennberg JE. Who You Are and Where You Live: How Race and Geography Affect the Treatment of Medicare Beneficiaries. Health Affairs. 2004;(Suppl Web Exclusive):VAR 33–44. doi: 10.1377/hlthaff.var.33.
2. Berwick D, James B, Coye MJ. Connections between Quality Measurement and Improvement. Medical Care. 2003;41(1 suppl):I-30–8. doi: 10.1097/00005650-200301001-00004.
3. Boston Medical Center. [accessed on February 15]. Available at http://www.bmc.org.
4. Dartmouth Medical School, Center for the Evaluative Clinical Sciences. The Dartmouth Atlas of Health Care, 1998. Chicago: American Hospital Publishing; 1998.
5. Faulkner Hospital. [accessed on February 15, 2005]. Available at http://www.faulknerhospital.org/about_fh.html.
6. Fisher ES, Wennberg DE, Stukel TA, Gottlieb DJ. Variations in the Longitudinal Efficiency of Academic Medical Centers. Health Affairs. 2004;(Suppl Web Exclusive):VAR 19–32. doi: 10.1377/hlthaff.var.19.
7. Fisher ES, Wennberg DE, Stukel TA, Gottlieb DJ, Lucas FL, Pinder EL. The Implications of Regional Variation in Medicare Spending. Part 1: The Content, Quality, and Accessibility of Care. Annals of Internal Medicine. 2003a;138(4):273–87. doi: 10.7326/0003-4819-138-4-200302180-00006.
8. Fisher ES, Wennberg DE, Stukel TA, Gottlieb DJ, Lucas FL, Pinder EL. The Implications of Regional Variation in Medicare Spending. Part 2: Health Outcomes and Satisfaction with Care. Annals of Internal Medicine. 2003b;138(4):288–98. doi: 10.7326/0003-4819-138-4-200302180-00007.
9. Fisher ES, Wennberg JE, Stukel TA, Sharp SM. Hospital Readmission Rates for Cohorts of Medicare Beneficiaries in Boston and New Haven. New England Journal of Medicine. 1994;331(15):989–95. doi: 10.1056/NEJM199410133311506.
10. Harris R, Lohr KN. Screening for Prostate Cancer: An Update of the Evidence for the U.S. Preventive Services Task Force. Annals of Internal Medicine. 2002;137(11):917–29. doi: 10.7326/0003-4819-137-11-200212030-00014.
11. Landis JR, Koch GG. The Measurement of Observer Agreement for Categorical Data. Biometrics. 1977;33(1):159–74.
12. Massachusetts General Hospital. [accessed on February 15, 2005]. Available at http://www.massgeneral.org/news/for_reporters/overview.htm.
13. Miller ME, Welch WP, Welch HG. The Impact of Practicing at Multiple Hospitals on Physician Profiles. Medical Care. 1996;34(5):455–62. doi: 10.1097/00005650-199605000-00007.
14. New England Baptist Hospital. [accessed on February 15, 2005]. Available at http://www.nebh.caregroup.org/default.asp?node_id=3598.
15. Roos NP. Linking Patients to Hospitals: Defining Urban Hospital Service Populations. Medical Care. 1993;31(5 suppl):YS6–15. doi: 10.1097/00005650-199305001-00003.
16. Rosenthal MB, Fernandopulle R, Song HR, Landon B. Paying for Quality: Providers' Incentives for Quality Improvement. Health Affairs. 2004;23(2):127–41. doi: 10.1377/hlthaff.23.2.127.
17. Skinner J, Weinstein JN, Sporer SM, Wennberg JE. Racial, Ethnic, and Geographic Disparities in Rates of Knee Arthroplasty among Medicare Patients. New England Journal of Medicine. 2003;349(14):1350–9. doi: 10.1056/NEJMsa021569.
18. Thomas JW, Grazier KL, Ward K. Economic Profiling of Primary Care Physicians: Consistency among Risk-Adjusted Measures. Health Services Research. 2004;39(4, part 1):985–1004. doi: 10.1111/j.1475-6773.2004.00268.x.
19. U.S. Department of Health and Human Services. "Hospital Compare." 2005. [accessed on April 26, 2005]. Available at http://www.hospitalcompare.hhs.gov/.
20. Weinstein JN, Bronner KK, Morgan TS, Wennberg JE. Trends and Geographic Variations in Major Surgery for Degenerative Diseases of the Hip, Knee, and Spine. Health Affairs. 2004;(Suppl Web Exclusive):VAR 81–9. doi: 10.1377/hlthaff.var.81.
21. Welch WP, Miller ME. Proposals to Control High-Cost Hospital Medical Staffs. Health Affairs. 1994;13(4):42–57. doi: 10.1377/hlthaff.13.4.42.
22. Wennberg JE, Fisher ES, Stukel TA, Skinner JS, Sharp SM, Bronner KK. Use of Hospitals, Physician Visits, and Hospice Care during Last Six Months of Life among Cohorts Loyal to Highly Respected Hospitals in the United States. British Medical Journal. 2004;328(7440):607. doi: 10.1136/bmj.328.7440.607.
