Teaching hospitals are widely reputed to provide high-quality care, eliciting very positive public opinions in surveys across the United States (Boscarino 1992). The U.S. News and World Report's listing of “America's Best Hospitals” (2000), based in part on the opinions of academic and community physicians, highly ranks many major teaching hospitals. These public and professional views may reflect features of teaching hospitals that are perceived to foster a higher quality of care, including the treatment of rare diseases and complex patients, the provision of specialized services and advanced technology, and the conduct of biomedical research (Neely and McInturff 1998). Some services, such as specialized surgery and bone marrow transplants, are provided predominantly at teaching hospitals (Levin, Moy, and Griner 2000). Other distinctive missions of teaching hospitals include medical education and training, innovations in clinical care, and treatment of indigent patients, particularly at public teaching hospitals (Blumenthal, Weissman, and Campbell 1997).
Despite their reputation for highly specialized care and for treating rare diseases and severely ill patients, teaching hospitals in fact rely heavily on income from more routine services, such as the care of heart disease, pneumonia, and stroke (Association of American Medical Colleges 1998). They therefore may justify their comparatively higher charges for these clinical services by claiming that they provide better care than other hospitals do. It is possible, however, that for common conditions, teaching hospitals may offer a lower quality of care than do nonteaching hospitals, particularly if the substantial involvement of inexperienced trainees and the attenuated role of senior physicians in teaching hospitals results in more fragmented and less appropriate care. Both purchasers and patients have an interest in knowing whether teaching hospitals provide added value through a higher quality of care or whether services of comparable quality could be obtained at a lower cost in nonteaching hospitals.
Most studies have shown that care costs more at teaching hospitals than at nonteaching hospitals (Iezzoni et al. 1990; Mechanic, Coleman, and Dobson 1998; Taylor, Whellan, and Sloan 1999; Whittle et al. 1998; Zimmerman et al. 1993). Historically, teaching hospitals have offset some of the costs of their research and teaching programs by charging more for care. Private payers have paid higher prices, and since 1983 the Medicare program has supported the extra costs associated with medical education and other academic missions through supplemental payments to teaching hospitals.
During the past decade, however, because competition and managed care have limited the prices they can charge, major teaching hospitals have had more difficulty recovering their extra costs from private payers (Blumenthal and Meyer 1996; Freburger and Hurley 1999; Reuter and Gaskin 1997). Even when insurers have included major teaching hospitals in their networks to treat both complex and routine problems (Blumenthal, Weissman, and Griner 1999; Kowalczyk 2000), they have applied intense pressure for prices comparable to those of other providers of similar services (Blumenthal and Weissman 2000). Aggravating this situation, the federal government reduced the supplemental Medicare payments to teaching hospitals in the 1997 Balanced Budget Act (Guterman 1998; Iglehart 1999).
These changes have created stresses on the multiple special missions of major teaching hospitals. If teaching hospitals do provide better-quality care, this may justify their higher charges. If not, more direct means may need to be found to subsidize their special missions. Otherwise, those missions may suffer, with negative consequences for society. Consequently, the question of the comparative quality of care in teaching and nonteaching hospitals is of considerable importance.
If managed care and competition continue to maintain pressure for containing costs in the U.S. health care system, assessments of whether patients derive added benefits from treatment in teaching hospitals may become linked to the payments that hospitals receive. Some insurers, for example, have recently proposed charging higher premiums and copayments to purchasers and patients who want access to major teaching hospitals (Kowalczyk 2001). To guide policymakers, purchasers, providers, and researchers, we reviewed studies that compare the quality of care in teaching and nonteaching hospitals. Based on this review, we suggest future directions for research on the quality of care in teaching and nonteaching hospitals to inform policy decisions in the public and private sectors.
Methodological Issues in Comparing Quality of Care by Hospital Teaching Status
In evaluating the relative quality of care in teaching and nonteaching hospitals, we considered several methodological issues. The first is the definition of a hospital's “teaching” status. Research studies usually define a major teaching hospital as (1) belonging to the Council of Teaching Hospitals (COTH) of the Association of American Medical Colleges; (2) exceeding a specified ratio of interns and residents to beds, with thresholds ranging from 0.10 to 0.27; or (3) being designated as a flagship hospital or major affiliate of a medical school (i.e., academic health or medical center). Other teaching hospitals are then defined as those teaching hospitals that do not meet the criteria for major teaching status. Some studies do not make this distinction, however, instead defining “teaching hospitals” by the presence of any residency program or affiliation with a medical school. In 1997, 115 U.S. hospitals were part of academic health centers; 222 were major teaching hospitals; 606 were minor teaching hospitals; and 3,968 were nonteaching hospitals (Commonwealth Fund Task Force on Academic Health Centers 2000). Some studies further characterize teaching and nonteaching hospitals by their status as private or public hospitals and for-profit or not-for-profit hospitals. Studies of quality of care that distinguish major teaching hospitals from other teaching hospitals are more valuable to policymakers and health care purchasers because of the much greater role of major teaching hospitals in education and research. Studies that assess large numbers of hospitals in multiple states or regions are the most useful for guiding policy.
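For concreteness, the sketch below encodes these common definitions as a simple classification rule. It is purely illustrative: the default intern-and-resident-to-bed cutoff mirrors one published threshold, and treating COTH membership as an alternative criterion is an assumption of this sketch rather than a standard taken from any single study reviewed here.

```python
# Illustrative sketch: classifying hospital teaching status from an
# interns-and-residents-to-beds (IRB) ratio and COTH membership.
# The 0.27 default mirrors one published cutoff; studies vary (0.10 to 0.27).
def classify_teaching_status(irb_ratio: float,
                             coth_member: bool = False,
                             major_cutoff: float = 0.27) -> str:
    """Return 'major teaching', 'other teaching', or 'nonteaching'."""
    if coth_member or irb_ratio >= major_cutoff:
        return "major teaching"
    if irb_ratio > 0:
        return "other teaching"
    return "nonteaching"

print(classify_teaching_status(0.30))                     # major teaching
print(classify_teaching_status(0.15))                     # other teaching
print(classify_teaching_status(0.0))                      # nonteaching
print(classify_teaching_status(0.05, coth_member=True))   # major teaching
```

Because the thresholds differ across studies, a hospital classified as a major teaching hospital in one analysis may fall into the "other teaching" group in another, which complicates comparisons across the literature.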
A second methodological issue is the source of the study data. All studies comparing the quality of care in teaching and nonteaching hospitals rely on observational data, in which the processes and outcomes of care are studied in actual clinical practice rather than through randomized trials. Observational data are usually derived from either medical records or administrative data (typically hospital discharge abstracts or Medicare claims). Medical records provide more clinically detailed data about the severity of patients’ illnesses and about their care than do administrative databases. Because medical records are costly and time-consuming to review, studies using them usually have smaller samples and thus may be less generalizable than are studies using administrative data. In addition, research based on either of these sources of observational data may be subject to selection bias or unrecognized confounding if unmeasured factors, such as patients’ preferences or adherence to treatment recommendations, are different in teaching and nonteaching hospitals.
A third methodological issue is the process or outcome measure used to judge the quality of care and the links between these two measures. Process measures are particularly useful when they have been shown in randomized clinical trials to affect outcomes, such as the use of specific drugs to reduce the mortality rates of patients with myocardial infarction or congestive heart failure (Mant and Hicks 1995). In rigorous observational studies, researchers may also examine those process measures that have been associated with improved outcomes (Kahn et al. 1990; Rubenstein et al. 1990), or they may use outcome measures, such as negligent adverse events, that are closely related to antecedent processes of care (Brennan et al. 1991). When mortality is the primary outcome of interest, it is important to know whether a lower mortality rate can be attributed to improved processes of care.
A fourth related issue is the validity of the risk-adjustment methods used to compare clinical outcomes such as mortality. In comparative studies of teaching and nonteaching hospitals, accurately controlling for patients’ severity of illness is particularly important because sicker, more complex patients may be concentrated in teaching hospitals. Data from medical records have greater clinical validity as predictors of mortality than do data from administrative records, because medical records are better able to distinguish conditions present on admission from subsequent complications that might be caused by poor care (Iezzoni et al. 1995; Iezzoni et al. 1996). The best-designed studies—based on either medical records or administrative data—use risk-adjustment methods that have been appropriately validated to predict outcomes independent of the hospital's teaching status.
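To make the risk-adjustment logic concrete, here is a minimal sketch of the generic patient-level model underlying many of these comparisons: a logistic regression of death on teaching status plus case-mix variables. The column names (died, major_teaching, age, severity) are hypothetical placeholders, and no study reviewed here used this exact specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def adjusted_teaching_odds_ratio(df: pd.DataFrame) -> float:
    """Severity-adjusted odds ratio of death for major teaching versus
    nonteaching hospitals; an OR below 1 favors major teaching hospitals.

    Assumes hypothetical columns: died (0/1), major_teaching (0/1),
    age (years), and severity (a validated illness-severity score,
    ideally abstracted from medical records rather than claims).
    """
    model = smf.logit("died ~ major_teaching + age + severity",
                      data=df).fit(disp=False)
    return float(np.exp(model.params["major_teaching"]))
```

The validity of the resulting odds ratio rests entirely on the severity measure, which is the point of the paragraph above: a score built from detailed clinical data can separate conditions present on admission from later complications, whereas one built from discharge abstracts often cannot.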
Together, these issues highlight those features of studies that provide the most rigorous assessments of quality of care in teaching and nonteaching hospitals: (1) specific definitions of hospital teaching status, preferably distinguishing major teaching hospitals from other teaching hospitals and nonteaching hospitals, with broadly representative samples from large numbers of hospitals; (2) clinically detailed data from medical records; (3) process measures that have been shown to improve outcomes or outcome measures that are clearly related to underlying processes of care; and (4) validated risk-adjustment methods that account for patients’ severity of illness.
Methods of Reviewing and Assessing Literature
From computerized literature searches using Medline and Ovid, we gathered those potentially relevant research articles published in peer-reviewed academic journals from 1985 through 2001 that contained at least one key word from each of the following two groups: (1) academic medical centers; hospitals, teaching; hospitals, university; hospital characteristics; and (2) quality of health care; quality indicators, health care; outcome and process assessment (health care). From the references in these articles, we also found other relevant articles in peer-reviewed journals that provided comparative data on the quality of care in U.S. teaching and nonteaching hospitals.
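In Boolean terms, the search required at least one term from each group; a schematic rendering follows (the exact Medline/Ovid syntax and index-term forms may have differed):

```
(academic medical centers OR hospitals, teaching OR hospitals, university
 OR hospital characteristics)
AND
(quality of health care OR quality indicators, health care
 OR outcome and process assessment (health care))
```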
We focused on those research articles that studied quality of care in relation to hospital characteristics, with teaching status as either the primary or a major variable of interest. We included studies that reported process measures such as the appropriate use of drugs that have been shown to improve outcomes, or outcome measures such as risk-adjusted mortality rates or preventable adverse events that could be related to processes of care (Donabedian 1966). We excluded those studies that examined only structural measures such as hospital staffing and those that compared resource use (e.g., costs or length of stay) unless they also investigated quality of care using process or outcome measures.
Twenty studies met our inclusion criteria for this review. By assessing the various definitions of teaching status, clinical conditions, quality measures, and statistical methods in these studies, we offer here a review of the studies’ key findings, strengths, and limitations rather than a quantitative synthesis, such as a meta-analysis. Because of the fundamental differences between them, we separated the studies based on medical records from those relying on administrative data. Within each category we summarized and evaluated the studies’ findings based on their definition of teaching status, the generalizability of their study population, and the strength of their quality measures and risk-adjustment methods.
Studies Using Data from Medical Records
Medical and Surgical Care of Adults
One of the earliest and most rigorous studies comparing the quality of care in teaching and nonteaching hospitals evaluated the clinical records of 14,008 Medicare patients admitted to 297 hospitals in five states during the 1980s for congestive heart failure (CHF), acute myocardial infarction (AMI), pneumonia, stroke, or hip fracture (Keeler et al. 1992). This study compared the patients in major teaching hospitals (ratio of interns and residents to number of beds [IRB] ≥ 0.27) with those in other teaching hospitals (IRB < 0.27) and in nonteaching hospitals (IRB = 0) (see table 1). To assess the quality of care, this study used explicit process measures (adherence to specified criteria), implicit reviews (structured subjective assessments of process by physicians), and outcomes (including mortality at 30 days and 180 days after admission). Nurse reviewers collected extensive clinical data from medical records that could be used to adjust for patients’ severity of illness. Both the risk-adjustment models and the explicit and implicit measures of quality of care had been validated as predictors of 30-day and 180-day mortality.
TABLE 1.
Studies Comparing Quality of Care in Teaching and Nonteaching Hospitals Using Medical Records as Data Source
Study | Population | Key Findings |
---|---|---|
Brennan et al. 1991 | 31,429 patients with all diagnoses in 51 New York State hospitals during 1984. | More frequent adverse events in major teaching hospitals than in nonteaching hospitals but less likely due to negligence. |
Keeler et al. 1992 | 14,008 Medicare patients with congestive heart failure, acute myocardial infarction, pneumonia, stroke, or hip fracture in 297 hospitals from 5 states from 1981 to 1982 and 1985 to 1986. | Better overall process measures of quality and lower 30-day mortality in major teaching hospitals than in nonteaching hospitals. |
Zimmerman et al. 1993 | 15,297 patients with all diagnoses in intensive care units of 35 U.S. hospitals from 1988 to 1990. | Lower in-hospital mortality in major teaching hospitals than in other hospitals. |
Pollack et al. 1994 | 5,415 admissions of patients with all diagnoses in national sample of pediatric intensive care units in 16 hospitals from 1989 to 1992. | Adjusted in-hospital mortality rates higher in teaching hospitals than in nonteaching hospitals. |
Horbar et al. 1997 | 7,672 low birth weight infants in neonatal intensive care units of 62 hospitals in Vermont Oxford Network Database during 1991–92. | Similar risk of mortality within 28 days of birth in teaching and nonteaching hospitals. |
Rosenthal et al. 1997 | 89,851 patients with myocardial infarction, congestive heart failure, pneumonia, stroke, obstructive airway disease, or gastrointestinal hemorrhage in 30 hospitals in northeast Ohio from 1991 to 1993. | Lower in-hospital mortality rates in major teaching hospitals for all study diagnoses as a group and for individual diagnoses of congestive heart failure and obstructive airway disease; similar but nonsignificant trend for acute myocardial infarction. |
Ayanian et al. 1998 | 1,767 Medicare patients with congestive heart failure or pneumonia in 571 hospitals in Illinois, Massachusetts, New York, and Pennsylvania during 1991–92. | Better overall quality of care in major teaching hospitals than in nonteaching hospitals by process measures, particularly physicians’ cognitive care and testing; similar quality of therapeutic care; worse quality of nursing care in major teaching hospitals. |
Allison et al. 2000 | 114,411 elderly Medicare patients with acute myocardial infarction in 4,361 U.S. hospitals during 1994–95. | Greater use of aspirin, beta blockers, and ACE inhibitors and lower 30-day mortality rates in major and other teaching hospitals than in nonteaching hospitals; no difference in reperfusion therapy for ideal candidates. |
Thomas, Orav, and Brennan 2000 | 14,700 records of patients with all diagnoses in 28 hospitals in Utah and Colorado during 1992. | Lower rates of preventable adverse drug events in government-owned major teaching hospitals than in other hospitals; similar rates of preventable adverse events in general and related to procedures or diagnoses. |
For all five conditions combined, the overall quality of care based on explicit process measures was moderately and significantly better (effect size = 0.37; i.e., 37% of one standard deviation in quality) in major teaching hospitals than in nonteaching hospitals, and the difference was substantially larger when measured by implicit review (effect size = 0.84). When adjusted for severity of illness using detailed clinical data from medical records, these differences in quality were associated with a statistically significant 3.2 percent absolute reduction in 30-day mortality rates for patients in major teaching hospitals compared with those in nonteaching hospitals (p < 0.001). Other teaching hospitals showed smaller improvements in explicit and implicit measures of quality (effect sizes of 0.22 and 0.39, respectively) and reductions in adjusted mortality rates (1.5%) compared with those in nonteaching hospitals. This study's major strengths were its representative sample for five clinical conditions from many hospitals, its use of well-validated process measures, and its analysis of risk-adjusted mortality rates using detailed clinical data. A minor limitation was the lack of stratified analyses to determine whether the effect of teaching status on quality of care and mortality rates varied by clinical condition.
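The effect sizes quoted here and below are standardized mean differences, that is, the gap in mean quality scores expressed as a fraction of a standard deviation. A minimal sketch of one conventional formulation follows (a pooled-SD standardized difference; the original studies' exact standardization may differ):

```python
import numpy as np

def standardized_effect_size(teaching: np.ndarray,
                             nonteaching: np.ndarray) -> float:
    """Difference in mean quality scores divided by the pooled standard
    deviation, so a value of 0.37 means 37% of one SD in quality."""
    n1, n2 = len(teaching), len(nonteaching)
    pooled_var = ((n1 - 1) * teaching.var(ddof=1) +
                  (n2 - 1) * nonteaching.var(ddof=1)) / (n1 + n2 - 2)
    return (teaching.mean() - nonteaching.mean()) / np.sqrt(pooled_var)
```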
Similar explicit and implicit process measures were used to analyze the care of 1,767 Medicare patients hospitalized for CHF or pneumonia in Illinois, Massachusetts, New York, and Pennsylvania during 1991 and 1992 (Ayanian et al. 1998). This study examined 71 major teaching hospitals (IRB ≥ 0.25), 172 other teaching hospitals (IRB < 0.25), and 328 nonteaching hospitals (IRB = 0). The adjusted overall quality of care was significantly better (p ≤ 0.01) in major teaching hospitals than in nonteaching hospitals based on both explicit process measures and implicit reviews of CHF (effect sizes of 0.36 and 0.82, respectively) and pneumonia (effect sizes of 0.27 and 0.60). Other teaching hospitals also provided better care than did nonteaching hospitals for CHF (effect sizes of 0.27 and 0.22) and pneumonia (effect sizes of 0.35 and 0.40). In secondary analyses of specific components of care, these quality differences were related to more thorough patient histories, relevant physical examinations, and appropriate diagnostic tests by physicians in teaching hospitals. But therapeutic quality, such as the use of angiotensin-converting enzyme inhibitors for CHF, was similar in teaching and nonteaching hospitals, and the quality of nursing care was lower in major teaching hospitals than in nonteaching hospitals for each condition (effect sizes of −0.26 and −0.34, respectively). This study's limitations were its exclusion of patients who died in the hospital and its lack of outcome measures.
A study of 51 New York State hospitals during 1984 evaluated negligent or preventable adverse events as measures of quality of care. Major teaching hospitals were defined as “flagship” hospitals of medical schools, and other teaching hospitals were identified as medical school affiliates with residency programs (Brennan et al. 1991). Abstracters screened the medical records of 31,429 admissions for adverse events, defined as “injuries caused by medical intervention as distinct from the disease process.” Two physician reviewers then performed implicit reviews to determine whether an adverse event had occurred and whether it was caused by negligence or substandard care. Adjusting for the patients’ age and severity of principal diagnosis and the hospitals’ location, ownership, proportion of minority patients, and number of discharges, the study found that patients in major teaching hospitals were more likely to experience adverse events than were those in nonteaching hospitals (odds ratio [OR]: 2.29; p = 0.02). The authors attributed this finding to the presence of patients with more complex illnesses or receiving more complicated treatments at the major teaching hospitals. Adverse events in major teaching hospitals were less often due to negligence than they were in nonteaching hospitals (OR: 0.26; p = 0.02). There were no differences in adverse events or negligent adverse events between other teaching hospitals and nonteaching hospitals. The strengths of this study were its diverse sample of hospitals and its adjustment for multiple hospital characteristics. Its limitations were the merely fair reliability of the physicians’ implicit reviews and the restriction of risk adjustment to the patients’ principal diagnosis without more detailed measures of severity.
Building on this study of the quality of care in New York State hospitals, a study of 14,700 admissions to 28 hospitals in Utah and Colorado during 1992 used similar methods to assess preventable adverse events (Thomas, Orav, and Brennan 2000). Adjusting for demographic factors and comorbidity from medical records, the study found that patients in two major teaching hospitals (members of COTH) had lower rates of preventable drug events, such as known allergic reactions or drug toxicities, than did patients in nonprofit nonteaching hospitals (OR: 0.37; 95% confidence interval [CI]: 0.16, 0.89). Other teaching hospitals were grouped with government-owned nonteaching hospitals, so their rates of preventable adverse events were not separately reported. This study found no significant difference between major teaching hospitals and nonprofit nonteaching hospitals in overall rates of preventable adverse events or those specifically due to procedures or delayed or incorrect diagnoses or therapies. However, the wide confidence intervals around these estimates suggest that the analyses had limited statistical power.
A prospective study of risk-adjusted resource use and mortality rates studied 15,297 adults in the intensive care units of a national sample of 35 hospitals from 1988 through 1990 (Zimmerman et al. 1993). The intensity of treatment was significantly greater in the 18 teaching hospitals (those with a major medical school affiliation and at least five residency programs) than in the 17 nonteaching hospitals, with more frequent use of invasive hemodynamic monitoring, drug infusions, mechanical ventilation, and multiple antibiotics (all p < 0.001). The adjusted in-hospital mortality rates of these two groups of hospitals did not differ when the original definition of teaching status was used. But the mortality rate was significantly higher in the non-COTH hospitals than in the 15 COTH hospitals (OR: 1.21; 95% CI: 1.06, 1.38; p = 0.004). A major strength of this study was its use of very detailed clinical data, including vital signs and results of laboratory tests, to adjust for severity of illness, although it did not directly link processes of care to mortality.
A recent study has provided strong evidence of better-quality care for myocardial infarction in major teaching hospitals, based on both process and outcome measures for a national sample of 114,411 Medicare patients hospitalized during 1994 or 1995 (Allison et al. 2000). Using detailed clinical data from medical records, this study compared patients treated at 439 major teaching hospitals (IRB > 0.10), 455 other teaching hospitals (IRB ≤ 0.10), and 3,467 nonteaching hospitals (IRB = 0). In three-way comparisons of patients who were ideal candidates for drugs that had been shown in randomized trials to reduce mortality, patients in major and other teaching hospitals were significantly more likely than patients in nonteaching hospitals to receive aspirin (91.2%, 86.4%, 81.4%, respectively), beta blockers (48.8%, 40.3%, 36.4%), and angiotensin-converting-enzyme inhibitors (63.7%, 60.0%, 58.0%) (all p < 0.001). This study found no significant difference by teaching status, however, in the use of reperfusion therapy with thrombolytic agents or primary coronary angioplasty. The adjusted mortality rate was significantly lower in major teaching hospitals (OR: 0.80; 95% CI: 0.77, 0.84) and other teaching hospitals (OR: 0.91; 95% CI: 0.84, 0.95) than in nonteaching hospitals. About half the lower 30-day mortality in major and other teaching hospitals could be attributed to observed differences in treatment. This study's strengths were its broad national sample, detailed clinical data for risk adjustment, and assessment of both process and outcome measures.
In a cohort of 89,851 patients hospitalized in northeastern Ohio from 1991 through 1993, researchers used detailed clinical data obtained from medical records to analyze in-hospital mortality rates for myocardial infarction, congestive heart failure, pneumonia, stroke, obstructive airway disease, and gastrointestinal hemorrhage (Rosenthal et al. 1997). The study sites were five major teaching hospitals defined by COTH membership, six other teaching hospitals, and 19 nonteaching hospitals. For all six conditions combined, the risk-adjusted in-hospital mortality rate was significantly lower in major teaching hospitals than in nonteaching hospitals (OR: 0.81; 95% CI: 0.66, 0.98). For individual conditions, the adjusted in-hospital mortality rate for patients with obstructive airway disease (OR: 0.56; 95% CI: 0.42, 0.74) and congestive heart failure (OR: 0.71; 95% CI: 0.54, 0.96) was significantly lower in major teaching hospitals. In analyses of patients’ characteristics, the mortality rates of men, patients admitted from home (versus nursing homes), patients with do-not-resuscitate orders, and those with a moderately high predicted risk of in-hospital death (25–50%) were lower in major teaching hospitals. No significant differences in in-hospital mortality rates by hospital teaching status were detected for patients with myocardial infarction (OR: 0.78; 95% CI: 0.54, 1.14), gastrointestinal hemorrhage (OR: 0.95; 95% CI: 0.67, 1.34), pneumonia (OR: 0.93; 95% CI: 0.73, 1.20), or stroke (OR: 1.02; 95% CI: 0.71, 1.48). In addition, this study found no differences in the adjusted mortality rates of other teaching hospitals and nonteaching hospitals in the combined analysis of all conditions (OR: 1.09; 95% CI: 0.93, 1.28) or in the analyses of the six individual conditions. This study's strengths were its broad sample of hospitalized adults, analyses of individual conditions, and use of detailed clinical data in validated risk-adjustment models. Its principal limitation was the lack of process measures to explain the differences in mortality rates.
Pediatric and Neonatal Intensive Care
Two studies assessed the outcomes of pediatric and neonatal intensive care by hospital teaching status. A study of 5,415 admissions to 16 pediatric intensive care units from 1989 through 1992 found that adjusted in-hospital mortality rates were higher in eight teaching hospitals (OR: 1.79; 95% CI: 1.23, 2.61), defined as primary sites for the pediatric clerkship of an affiliated medical school (Pollack et al. 1994). In secondary analyses, this adverse effect was traced to the presence of less experienced residents. This study's strengths were its use of detailed physiological data for clinical risk adjustment and the high reliability of the data collection.
Another study that assessed neonatal outcomes by hospital teaching status found no difference in 28-day mortality rates of very low birth weight infants (Horbar et al. 1997). This study considered 7,672 infants with birth weights of 501 to 1,500 grams who were admitted during 1991 and 1992 to neonatal intensive care units at 62 hospitals participating in a national research network. The 24 teaching hospitals were defined as those with a pediatric residency program. Adjusting for Apgar scores, birth weight, prenatal care, antenatal steroid use, gender, race, and hospital volume, the study found that mortality rates did not differ statistically between teaching and nonteaching hospitals (OR: 1.18; 95% CI: 0.94, 1.47; p = 0.15). Although this study controlled for clinical aspects of prenatal care and delivery, it did not adjust for other physiological measures that may be important predictors of the outcomes of intensive care.
Studies Using Administrative Data
Five national or multistate studies based on administrative data compared the mortality rates of patients treated in teaching hospitals with those of patients in nonteaching hospitals (see table 2). Four of these studies limited risk adjustment to data from hospital discharge abstracts, and none linked mortality rates to specific clinical processes. The first study, of 3,100 private U.S. hospitals caring for Medicare patients during 1986, found that the 30-day mortality rates for patients in private teaching hospitals (COTH members) were significantly lower than those for patients in private nonteaching hospitals (108 vs. 116 deaths per 1,000 patients, p < 0.001) but that mortality rates did not differ by teaching status among the public hospitals (Hartz et al. 1989).
TABLE 2.
Studies Comparing Quality of Care in Teaching and Nonteaching Hospitals Using Administrative Records as Data Source
Study | Population | Key Findings |
---|---|---|
Hartz et al. 1989 | Medicare patients with all diagnoses in 3,100 U.S. hospitals during 1986. | Lower 30-day mortality rates in private teaching hospitals than in private nonteaching hospitals; no difference in mortality rates by teaching status in public hospitals. |
Fleming et al. 1991 | Medicare patients with all diagnoses in 657 U.S. hospitals during 1985. | Lower mortality rates in nonteaching hospitals than in teaching hospitals. |
Kuhn et al. 1991 | 793,146 records of Medicare patients with all diagnoses in 1,219 hospitals in California, New York, Pennsylvania, Ohio, Illinois, and Texas reviewed during 1987–88. | Fewer problems detected by peer review organizations in teaching hospitals than in nonteaching hospitals for all six states combined. |
Kuhn et al. 1994 | Medicare patients with all diagnoses in 3,782 U.S. hospitals during 1988. | Lower mortality rates at 30 and 180 days in private, nonprofit teaching hospitals than in other hospitals. |
Finkelstein et al. 1998 | 16,051 women in 18 Ohio hospitals for obstetrical care between 1992 and 1994. | Patients’ assessments of hospital quality similar for teaching and nonteaching hospitals. |
Whittle et al. 1998 | 22,294 Medicare patients in Pennsylvania with pneumonia during 1990. | Similar 30-day mortality rates for teaching and nonteaching hospitals; higher 90-day mortality rates for teaching hospitals. |
Cunningham et al. 1999 | 7,901 adult patients with HIV/AIDS-related diagnoses in acute care hospitals in California during 1994. | Similar in-hospital mortality rate for teaching and nonteaching hospitals. |
Pearce et al. 1999 | 90,331 patients with carotid endarterectomy, lower extremity bypass grafting, or abdominal aortic aneurysm repair in 835 Florida hospitals between 1992 and 1996. | Similar outcomes of hospital death, myocardial infarction, and cerebrovascular accident in teaching and nonteaching hospitals. |
Schultz et al. 1999 | Patients with acute myocardial infarction in 373 medical-surgical hospitals in California during 1992. | Mortality lowest in limited teaching hospitals, followed by major teaching hospitals, then nonteaching hospitals. |
Taylor, Whellan, and Sloan 1999 | 3,206 Medicare patients with hip fracture, coronary heart disease, stroke, or congestive heart failure in 1,378 U.S. hospitals between 1984 and 1994. | Lower long-term mortality rates overall and for hip fracture in major teaching hospitals than in nonteaching hospitals; similar mortality rates for stroke and congestive heart failure. |
Sloan, Conover, and Provenzale 2000 | 32,593 patients with open or laparoscopic cholecystectomy, stomach operations, intestinal operations, hysterectomy, or hip replacement in 85 North Carolina hospitals during 1995. | More frequent postoperative complications for stomach and intestinal operations, hysterectomy, and hip replacement in teaching hospitals than in nonteaching hospitals. |
A later study of 3,782 U.S. hospitals caring for Medicare patients in 1988 reported a similar difference in the adjusted 30-day mortality rates of patients in private teaching hospitals (COTH members) and those of patients in private nonteaching hospitals (85.4 vs. 91.7 deaths per 1,000 patients, p < 0.001) (Kuhn et al. 1994). This difference was maintained at 180 days (171.2 vs. 176.4 deaths per 1,000 patients, p < 0.05). This study adjusted for each hospital's predicted mortality rate derived from the Health Care Financing Administration, the proportion of its patients covered by Medicaid, and the ratio of emergency department visits to its census. The study was limited by the imprecise measures it used for risk adjustment.
A national study of 657 hospitals, using Medicare claims data from 1985, found that the ratio of expected to observed in-hospital deaths was more favorable (i.e., higher) in nonteaching hospitals than in other teaching hospitals (IRB < 0.25) or major teaching hospitals (IRB ≥ 0.25) (0.95, 0.89, and 0.91, respectively) (Fleming et al. 1991). But after length-of-stay outliers were excluded in order to compare more homogeneous groups of patients, the results were reversed (ratios of 0.99, 1.01, and 1.07, respectively), indicating better outcomes in major teaching hospitals.
In a study of 1,219 hospitals in California, New York, Pennsylvania, Ohio, Illinois, and Texas using administrative data to adjust for severity of illness, problems in the quality of care were identified from routine implicit reviews conducted during 1987 and 1988 by peer review organizations (Kuhn et al. 1991). Teaching hospitals (defined as COTH members) had significantly lower rates of problems than did nonteaching hospitals across all six states combined (2.63% vs. 3.04%, p < 0.01), but the rate of problems varied widely by state, from 0.89 percent in Ohio to 5.08 percent in Illinois—suggesting that the states’ standards of review may also have differed.
The most recent national study, using administrative data, looked at 2,674 frail elderly Medicare patients admitted for hip fracture, coronary heart disease, stroke, and congestive heart failure at 1,378 U.S. hospitals between 1984 and 1994 and followed them through 1995 (Taylor, Whellan, and Sloan 1999). Major teaching hospitals were defined as those with at least 0.097 residents per bed (the median value). Across all the conditions combined, the adjusted mortality rates were lower in major teaching hospitals than in for-profit nonteaching hospitals (hazard ratio: 0.75; 95% CI: 0.62, 0.91), and this difference was highly significant (p = 0.004). Analyses of individual conditions found a significant difference in adjusted long-term mortality rates of major teaching hospitals and for-profit nonteaching hospitals only for hip fracture (hazard ratio: 0.54; 95% CI: 0.37, 0.79). This study did not find significant differences in adjusted mortality rates for patients with stroke (hazard ratio: 0.89; 95% CI: 0.64, 1.24), CHF (hazard ratio: 0.95; 95% CI: 0.64, 1.41), or coronary heart disease (hazard ratio: 0.76; 95% CI: 0.55, 1.07). No significant differences in mortality rates were evident in other teaching hospitals (IRB < 0.097) and nonteaching hospitals in the combined or condition-specific analyses. This study's strengths were its use of a validated risk-adjustment tool with the administrative data and the availability of baseline patient surveys to control for the patients’ physical and cognitive functioning before being hospitalized.
Five single-state studies using administrative data reported no differences in short-term mortality rates by hospital teaching status for pneumonia, myocardial infarction, acquired immunodeficiency syndrome (AIDS), cholecystectomy, and vascular surgical procedures. Each of these studies analyzed teaching status as one of several hospital characteristics in multivariable models. By controlling for other factors—such as technological resources, characteristics of physicians, and volume of patients—that could be the cause of differences in outcomes in teaching and nonteaching hospitals, these studies may have obscured differences in outcomes by teaching status.
A study of Medicare claims data for 21,194 elderly patients hospitalized with community-acquired pneumonia in Pennsylvania during 1990 found no differences in adjusted 30-day mortality rates for teaching and nonteaching hospitals (OR: 1.06; 95% CI: 0.96, 1.18), although 90-day mortality rates were somewhat higher in teaching hospitals (OR: 1.12; 95% CI: 1.02, 1.22) (Whittle et al. 1998). In both unadjusted and adjusted analyses of 373 California hospitals during 1992, teaching status was not significantly associated with in-hospital mortality rates for patients with AMI, although these analyses did not adjust for patients’ severity of illness (Schultz et al. 1999). Similarly, a study of in-hospital mortality rates of 7,901 patients treated for AIDS in 333 California hospitals during 1994 found essentially identical adjusted mortality rates for COTH and other hospitals (OR: 1.0; 95% CI: 0.8, 1.2) (Cunningham et al. 1999).
A study of surgical care in North Carolina during 1995 used discharge abstracts to identify postoperative complications, including in-hospital deaths, for six types of surgical procedures (Sloan, Conover, and Provenzale 2000). The 81 nonteaching hospitals did not differ statistically from the three major teaching hospitals (primary affiliates of a medical school) in their postoperative complication rates after laparoscopic cholecystectomy (OR: 1.46; 95% CI: 0.76, 2.82) or open cholecystectomy (OR: 1.05; 95% CI: 0.60, 1.83), but they did have significantly higher adjusted rates of in-hospital complications following stomach operations (OR: 3.38; 95% CI: 1.19, 9.64), intestinal operations (OR: 2.73; 95% CI: 1.82, 4.08), hysterectomies (OR: 3.69; 95% CI: 2.54, 5.37), and hip replacements (OR: 4.58; 95% CI: 3.04, 6.63). This study relied on administrative data rather than medical records to identify postoperative complications, which may have biased the results in favor of hospitals with less complete reporting practices, as the study's authors acknowledged.
A similar study analyzed all hospital admissions in Florida from 1992 through 1996 for three vascular surgical procedures, including 31,172 lower-extremity bypass graft procedures, 45,744 carotid endarterectomies, and 13,415 abdominal aortic aneurysm repairs (Pearce et al. 1999). Adjusting for the patients’ demographic characteristics, presence of diabetes mellitus, and volume of hospitals and surgeons, this study found no significant adjusted differences between patients in teaching and nonteaching hospitals (defined by American Hospital Association data) in a composite outcome of in-hospital deaths, myocardial infarctions, or cerebrovascular accidents for any of the three vascular procedures. In this study of surgical outcomes in Florida and in the study discussed above in North Carolina, the volume of patients in hospitals or among surgeons was associated with significantly better outcomes, but the studies did not define the relation of teaching status to the volume of patients treated in hospitals or by surgeons.
Only one study has assessed quality of care as reported by patients. In a survey of 16,501 women after labor and delivery at 18 hospitals in northeastern Ohio from 1992 through 1994 (Finkelstein et al. 1998), women rated their experiences with physician care, nursing care, provision of information, and discharge preparation, as well as their overall hospital care. After administrative data were used to adjust for the patients’ demographic characteristics, health insurance, type of delivery, and clustering within hospitals, the ratings of teaching and nonteaching hospitals did not differ on any of the four dimensions of care or on the 100-point global assessment scale (difference of 0.3 points; 95% CI: −13.8, 14.4).
Limitations of Earlier Studies and Directions for Future Research
Our review of 20 studies of hospital teaching status and quality of care identified five limitations that provide direction for future research. First, only two studies included both validated measures of processes of care and clinical outcomes in unified analyses, with each demonstrating that major teaching hospitals offered much better care than did nonteaching hospitals (Allison et al. 2000; Keeler et al. 1992). Such studies are typically the most compelling evaluations of quality of care because they provide information about specific processes that can result in better outcomes.
Second, 11 of the 20 studies relied on administrative data such as hospital discharge abstracts or Medicare claims to adjust for severity of illness. Because diagnoses are recorded unevenly and relate only imprecisely to severity, administrative data are poorly suited to risk adjustment. More detailed clinical data from medical records, including vital signs, laboratory results, and physical findings, permit more accurate comparisons of quality. Studies that used such detailed data usually found more severe illness among patients in teaching hospitals (Allison et al. 2000; Ayanian et al. 1998; Keeler et al. 1992; Zimmerman et al. 1993), although not always (Rosenthal et al. 1997). Seven of the nine studies that adjusted for severity of illness using clinical data from medical records found that on some measures, teaching hospitals, particularly major teaching hospitals, offered significantly better care. The two studies that did not find better care in teaching hospitals focused exclusively on pediatric or neonatal intensive care.
Third, the relation of hospital teaching status to volume and other hospital characteristics has not been explored adequately. From a policy perspective, it may be useful to know whether better care and outcomes in teaching hospitals are due to a higher volume of cases, more advanced technology, the expanded role of specialists, or the greater availability of resident physicians for a more timely assessment of severely ill patients. Although two studies controlled for volume and other hospital characteristics (Pearce et al. 1999; Sloan, Conover, and Provenzale 2000), it would be helpful to know the results before and after adjusting for these variables to determine whether they may be the causes of observed differences in quality by teaching status. Understanding the relation of organizational factors to quality of care in teaching and nonteaching hospitals could help guide efforts to improve quality in both types of hospitals.
Fourth, because all the studies we reviewed were based on observational data, they may be subject to unrecognized confounding or selection bias. For example, teaching hospitals may appear to have worse outcomes if less educated or less affluent patients seek care there. Likewise, nonteaching hospitals may appear to have worse outcomes if patients who prefer less aggressive care seek treatment there. Teaching and nonteaching hospitals may also differ in the proportion of their patients admitted from nursing homes, although at least one study did not find such a difference (Ayanian et al. 1998). Future studies of outcomes according to hospital teaching status therefore might use more refined statistical methods, such as propensity scores or instrumental variables, to address confounding and selection bias in observational data (Landrum and Ayanian 2002); a sketch of one such method follows. Comparative studies of teaching and nonteaching hospitals in the Veterans Administration system (which were not analyzed in the studies we reviewed) might also minimize selection bias because eligible patients are usually treated for common conditions at the nearest hospital.
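As one illustration of these methods, the sketch below applies inverse-probability weighting, one common propensity-score technique, to an observational comparison. All column names are hypothetical, and this is a generic sketch rather than the approach of Landrum and Ayanian (2002), who compared propensity-score and instrumental-variable analyses in a different setting.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def ipw_mortality_difference(df: pd.DataFrame) -> float:
    """Propensity-weighted difference in mortality between major teaching
    and nonteaching hospitals, balancing measured covariates only.

    Assumes hypothetical columns: major_teaching (0/1), died (0/1), and
    covariates age, severity, and nursing_home_admit (0/1).
    """
    X = df[["age", "severity", "nursing_home_admit"]]
    t = df["major_teaching"].to_numpy()
    y = df["died"].to_numpy()
    # Propensity score: probability of admission to a major teaching
    # hospital given the measured covariates.
    ps = LogisticRegression(max_iter=1000).fit(X, t).predict_proba(X)[:, 1]
    # Weight each patient by the inverse probability of the hospital type
    # actually observed, creating comparable pseudo-populations.
    w = np.where(t == 1, 1.0 / ps, 1.0 / (1.0 - ps))
    return float(np.average(y[t == 1], weights=w[t == 1]) -
                 np.average(y[t == 0], weights=w[t == 0]))
```

Like any propensity-score method, this balances only measured covariates; unmeasured differences in patient preferences or referral patterns, as noted above, would still bias the comparison.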
Finally, the potential implications of recording bias should be considered when comparing the quality of care in teaching and nonteaching hospitals. Teaching hospitals may record clinical data more thoroughly in medical records and discharge abstracts because a greater number of physicians—including interns, residents, fellows, and attending physicians—evaluate patients and write clinical notes. Thus, their patients may appear sicker, and their risk-adjusted outcomes better, compared with patients in other hospitals that keep less complete records. Some process measures of quality based on history taking and physical exams also may reflect recording differences between hospitals, biasing these measures toward better quality in teaching hospitals. Conversely, more complete recording practices may detect a higher number of adverse events or complications, resulting in an impression of lower-quality care in teaching hospitals.
Conclusions
In summary, the largest and most rigorous studies that evaluated the quality of care in teaching and nonteaching hospitals found that for common conditions, particularly in elderly patients, major teaching hospitals generally offer better care than do nonteaching hospitals. Comparisons of other teaching hospitals with nonteaching hospitals showed smaller differences in quality of care or none at all. Several studies that analyzed process measures of care found differences in quality between major teaching hospitals and nonteaching hospitals (Allison et al. 2000; Ayanian et al. 1998; Keeler et al. 1992; Kuhn et al. 1994), as did studies that assessed risk-adjusted mortality rates using detailed clinical data (Allison et al. 2000; Keeler et al. 1992; Rosenthal et al. 1997; Zimmerman et al. 1993) or administrative data (Hartz et al. 1989; Kuhn et al. 1991; Taylor, Whellan, and Sloan 1999). These differences in mortality rates have been reported in combined analyses of multiple conditions but have not always been evident for individual conditions, perhaps due to insufficient statistical power (Rosenthal et al. 1997; Taylor, Whellan, and Sloan 1999). Several studies analyzed mortality rates for specific conditions based on administrative data in individual states and found no differences in short-term mortality rates in teaching and nonteaching hospitals (Cunningham et al. 1999; Schultz et al. 1999; Whittle et al. 1998). Only a few studies reported worse care in teaching than in nonteaching hospitals on some specific dimensions of care, such as nursing care, pediatric intensive care, or surgical complications (Ayanian et al. 1998; Pollack et al. 1994; Sloan, Conover, and Provenzale 2000).
Our review identified several gaps in the literature on the quality of care in teaching and nonteaching hospitals. Few studies examined obstetric, neonatal, or pediatric care or interpersonal aspects of care. We also found no relevant studies of ambulatory care in teaching and nonteaching hospitals, despite its growing importance, and we found no studies comparing functional outcomes or health-related quality of life of patients treated in teaching and nonteaching hospitals.
Based on our review, the balance of evidence from the most rigorous studies demonstrated a moderately to substantially better overall quality of care in major teaching hospitals than in nonteaching hospitals, but this finding varied with the particular condition. The reasons that major teaching hospitals provide better care and outcomes have not yet been determined. By comparing the costs and quality of care provided in major teaching hospitals with those in other hospitals, policymakers, health care purchasers, insurers, and patients can make better-informed decisions about the best hospitals for patients with a range of medical and surgical conditions. Providing financial support through higher clinical payments for better care will help ensure that the major teaching hospitals maintain and invest in their distinctive social missions of education, research, clinical innovation, and caring for disadvantaged patients.
Acknowledgments
This article was supported by the Commonwealth Fund Task Force on Academic Health Centers. The authors are grateful to Claire-Marie Bender for research assistance with this literature review.
References
- Allison JJ, Kiefe CI, Weissman NW, Person SD, Rousculp M, Canto JG, Bae S, Williams OD, Farmer R, Centor RM. Relationship of Hospital Teaching Status with Quality of Care and Mortality for Medicare Patients with Acute MI. Journal of the American Medical Association. 2000;284:1256–62. doi: 10.1001/jama.284.10.1256.
- Association of American Medical Colleges. Meeting the Needs of Communities: How Medical Schools and Teaching Hospitals Ensure Access to Clinical Services. Washington, D.C.: 1998.
- Ayanian JZ, Weissman JS, Chasan-Taber S, Epstein AM. Quality of Care for Two Common Illnesses in Teaching and Nonteaching Hospitals. Health Affairs. 1998;17:194–205. doi: 10.1377/hlthaff.17.6.194.
- Blumenthal D, Meyer GS. Academic Health Centers in a Changing Environment. Health Affairs. 1996;15:200–15. doi: 10.1377/hlthaff.15.2.200.
- Blumenthal D, Weissman JS. Selling Teaching Hospitals to Investor-Owned Hospital Chains: Three Case Studies. Health Affairs. 2000;19:158–66. doi: 10.1377/hlthaff.19.2.158.
- Blumenthal D, Weissman JS, Campbell EG. The Social Missions of Academic Health Centers. New England Journal of Medicine. 1997;337:1550–3. doi: 10.1056/NEJM199711203372113.
- Blumenthal D, Weissman JS, Griner PF. Academic Health Centers on the Front Lines: Survival Strategies in Highly Competitive Markets. Academic Medicine. 1999;74:1038–49. doi: 10.1097/00001888-199909000-00021.
- Boscarino JA. The Public's Perception of Quality Hospitals II: Implications for Patient Surveys. Hospital and Health Services Administration. 1992;37:13–35.
- Brennan TA, Hebert LE, Laird NM, Lawthers A, Thorpe KE, Leape LL, Localio AR, Lipsitz SR, Newhouse JP, Weiler PC, Hiatt HH. Hospital Characteristics Associated with Adverse Events and Substandard Care. Journal of the American Medical Association. 1991;265:3265–9.
- Commonwealth Fund Task Force on Academic Health Centers. Health Care at the Cutting Edge: The Role of Academic Health Centers in the Provision of Specialty Care. Boston: 2000.
- Cunningham WE, Tisnado DM, Lui HH, Nakazono TT, Carlisle DM. The Effect of Hospital Experience on Mortality among Patients Hospitalized with Acquired Immunodeficiency Syndrome in California. American Journal of Medicine. 1999;107:137–43. doi: 10.1016/s0002-9343(99)00195-3.
- Donabedian A. Evaluating the Quality of Medical Care. Milbank Quarterly. 1966;44:166–206.
- Finkelstein BS, Singh J, Silvers JB, Neuhauser D, Rosenthal GE. Patient and Hospital Characteristics Associated with Patient Assessments of Hospital Obstetrical Care. Medical Care. 1998;36:AS68–78. doi: 10.1097/00005650-199808001-00008.
- Fleming ST, McMahon LF Jr, DesHarnais SI, Chesney JD, Wroblewski RT. The Measurement of Mortality: A Risk-Adjusted Variable Time Window Approach. Medical Care. 1991;29:815–28. doi: 10.1097/00005650-199109000-00003.
- Freburger JK, Hurley RE. Academic Health Centers and the Changing Health Care Market. Medical Care Research and Review. 1999;56:277–306. doi: 10.1177/107755879905600302.
- Guterman S. The Balanced Budget Act of 1997: Will Hospitals Take a Hit on Their PPS Margins? Health Affairs. 1998;17:159–66. doi: 10.1377/hlthaff.17.1.159.
- Hartz AJ, Krakauer H, Kuhn EM, Young M, Jacobsen SJ, Gay G, Muenz L, Katzoff M, Bailey RC, Rimm AA. Hospital Characteristics and Mortality Rates. New England Journal of Medicine. 1989;321:1720–5. doi: 10.1056/NEJM198912213212506.
- Horbar JD, Badger GJ, Lewit EM, Rogowski J, Shiono PH. Hospital and Patient Characteristics Associated with Variation in 28-Day Mortality Rates for Very Low Birth Weight Infants. Pediatrics. 1997;99:149–56. doi: 10.1542/peds.99.2.149.
- Iezzoni LI, Shwartz M, Moskowitz MA, Ash AS, Sawitz E, Burnside S. Illness Severity and Costs of Admissions at Teaching and Nonteaching Hospitals. Journal of the American Medical Association. 1990;264:1426–31.
- Iezzoni LI, Ash AS, Shwartz M, Daley J, Hughes JS, Mackiernan YD. Predicting Who Dies Depends on How Severity Is Measured: Implications for Evaluating Patient Outcomes. Annals of Internal Medicine. 1995;123:763–70. doi: 10.7326/0003-4819-123-10-199511150-00004.
- Iezzoni LI, Shwartz M, Ash AS, Hughes JS, Daley J, Mackiernan YD. Severity Measurement Methods and Judging Hospital Death Rates for Pneumonia. Medical Care. 1996;34:11–28. doi: 10.1097/00005650-199601000-00002.
- Iglehart JK. Support for Academic Medical Centers—Revisiting the 1997 Balanced Budget Act. New England Journal of Medicine. 1999;341:299–304. doi: 10.1056/NEJM199907223410424.
- Kahn KL, Rogers WH, Rubenstein LV, Sherwood MJ, Reinisch EJ, Keeler EB, Draper D, Kosecoff J, Brook RH. Measuring Quality of Care with Explicit Process Criteria before and after Implementation of the DRG-Based Prospective Payment System. Journal of the American Medical Association. 1990;264:1969–73.
- Keeler EB, Rubenstein LV, Kahn KL, Draper D, Harrison ER, McGinty MJ, Rogers WH, Brook RH. Hospital Characteristics and Quality of Care. Journal of the American Medical Association. 1992;268:1709–14.
- Kowalczyk L. Teaching Hospitals: Preference, at a Cost. Managed Care Has Yet to Find Way to Redirect Patients. Boston Globe. July 18, 2000:C1–C6.
- Kowalczyk L. HMOs Eyeing Surcharge for High-End Care. Some ’02 Plans Affect Visits to Medical Centers. Boston Globe. August 28, 2001:A1.
- Kuhn EM, Hartz AJ, Gottlieb MS, Rimm AA. The Relationship of Hospital Characteristics and the Results of Peer Review in Six Large States. Medical Care. 1991;29:1028–38. doi: 10.1097/00005650-199110000-00008.
- Kuhn EM, Hartz AJ, Krakauer H, Bailey RC, Rimm AA. The Relationship of Hospital Ownership and Teaching Status to 30- and 180-Day Adjusted Mortality Rates. Medical Care. 1994;32:1098–1108. doi: 10.1097/00005650-199411000-00003.
- Landrum MB, Ayanian JZ. Causal Effect of Ambulatory Specialty Care on Mortality Following Myocardial Infarction: A Comparison of Propensity Score and Instrumental Variable Analyses. Health Services and Outcomes Research Methodology. 2002, in press.
- Levin R, Moy E, Griner PF. Trends in Specialized Surgical Procedures at Teaching and Nonteaching Hospitals. Health Affairs. 2000;19:230–8. doi: 10.1377/hlthaff.19.1.230.
- Mant J, Hicks N. Detecting Differences in Quality of Care: The Sensitivity of Measures of Process and Outcome in Treating Acute Myocardial Infarction. British Medical Journal. 1995;311:793–6. doi: 10.1136/bmj.311.7008.793.
- Mechanic R, Coleman K, Dobson A. Teaching Hospital Costs: Implications for Academic Missions in a Competitive Market. Journal of the American Medical Association. 1998;280:1015–9. doi: 10.1001/jama.280.11.1015.
- Neely SK, McInturff WD. What Americans Say about the Nation's Medical Schools and Teaching Hospitals. Report on Public Opinion Research, Part II. Washington, D.C.: Association of American Medical Colleges; 1998.
- Pearce WH, Parker MA, Feinglass J, Ujiki M, Manheim LM. The Importance of Surgeon Volume and Training in Outcomes for Vascular Surgical Procedures. Journal of Vascular Surgery. 1999;29:768–78. doi: 10.1016/s0741-5214(99)70202-8.
- Pollack MM, Cuerdon TT, Patel KM, Ruttimann UE, Getson PR, Levetown M. Impact of Quality-of-Care Factors on Pediatric Intensive Care Unit Mortality. Journal of the American Medical Association. 1994;272:941–6.
- Reuter J, Gaskin D. Academic Health Centers in Competitive Markets. Health Affairs. 1997;16:242–52. doi: 10.1377/hlthaff.16.4.242.
- Rosenthal GE, Harper DL, Quinn LM, Cooper GS. Severity-Adjusted Mortality and Length of Stay in Teaching and Nonteaching Hospitals: Results of a Regional Study. Journal of the American Medical Association. 1997;278:485–90.
- Rubenstein LV, Kahn KL, Reinisch EJ, Sherwood MJ, Rogers WH, Kamberg C, Draper D, Brook RH. Changes in Quality of Care for Five Diseases Measured by Implicit Review, 1981 to 1986. Journal of the American Medical Association. 1990;264:1974–9.
- Schultz MA, van Servellen G, Litwin MS, McLaughlin EJ, Uman GC. Can Hospital Structural and Financial Characteristics Explain Variations in Mortality Caused by Acute Myocardial Infarction? Applied Nursing Research. 1999;12:210–4. doi: 10.1016/s0897-1897(99)80285-7.
- Sloan FA, Conover CJ, Provenzale D. Hospital Credentialing and Quality of Care. Social Science and Medicine. 2000;50:77–88. doi: 10.1016/s0277-9536(99)00269-5.
- Taylor DH, Whellan DJ, Sloan FA. Effects of Admission to a Teaching Hospital on the Cost and Quality of Care for Medicare Beneficiaries. New England Journal of Medicine. 1999;340:293–9. doi: 10.1056/NEJM199901283400408.
- Thomas EJ, Orav EJ, Brennan TA. Hospital Ownership and Preventable Adverse Events. Journal of General Internal Medicine. 2000;15:211–9. doi: 10.1111/j.1525-1497.2000.07003.x.
- U.S. News & World Report. America's Best Hospitals. July 15, 2000:75–107.
- Whittle J, Lin CJ, Lave JR, Fine MJ, Delaney KM, Joyce DZ, Young WW, Kapoor WN. Relationship of Provider Characteristics to Outcomes, Process, and Costs of Care for Community-Acquired Pneumonia. Medical Care. 1998;36:977–87. doi: 10.1097/00005650-199807000-00005.
- Zimmerman JE, Shortell SM, Knaus WA, Rousseau DM, Wagner DP, Gillies RR, Draper EA, Devers K. Value and Cost of Teaching Hospitals: A Prospective, Multicenter, Inception Cohort Study. Critical Care Medicine. 1993;21:1432–42. doi: 10.1097/00003246-199310000-00009.