Published in final edited form as: Curr Opin Pediatr. 2009 Dec;21(6). doi: 10.1097/MOP.0b013e3283329937

The Unintended Consequences of Quality Improvement

Naomi S Bardach 1, Michael D Cabana 1,2,3

Abstract

Purpose of Review

The focus on quality improvement has led to several types of initiatives in pediatric care; however, these programs may have unintended consequences.

Recent Findings

We review the unintended consequences of quality improvement programs that have been described in the literature. Unintended effects on resource utilization include effects on costs, as well as uneven applicability of programs across different populations, which can widen disparities in care. Unintended effects on provider behavior include measurement fixation, as well as ‘crowding out,’ in which gains in quality in one area occur at the expense of quality of care in another. Patient preferences may not always match specific quality improvement measures. Unintended effects on patients may include decreased satisfaction, trust, or confidence in their provider.

Summary

Recognition and anticipation of the possible unintended consequences of guideline implementation is a critical step to harnessing all the benefits of quality improvement in practice.

Introduction

Over the last two decades, the quality improvement movement has led to several types of initiatives in pediatric care: internal quality improvement programs; public reporting programs, in which comparative performance information is made publicly available, such as the Leapfrog Group initiative1 and Medicare's Hospital Compare program in the US;2 and “pay for performance” initiatives, in which an external payor rewards providers for quality achievements,3 such as the United Kingdom's pay for performance program run by the National Health Service.4 Quality improvement programs require defining and choosing a measure of quality, which can be a process measure (e.g., vaccination rates in a population) or an outcome measure (e.g., mortality rates or rates of inpatient central line infections). Many of these programs have been successful in improving clinical practice and patient outcomes.5

However, alongside the positive effects of quality improvement programs, there may be unintended consequences: increased health disparities, poor management or outcomes of diseases outside a specific quality improvement focus, or unnecessarily increased costs. This article reviews the unintended consequences that can occur in the different types of quality improvement programs. An understanding of these potential pitfalls can lead to more efficient and effective implementation of these programs in healthcare systems.

Types of unintended consequences

The basic premise behind quality improvement programs is that measuring performance and providing information about provider performance will change provider behavior, either through individual behavior change or through system changes that facilitate individual behavior changes.6 Unintended consequences may occur as a direct result of improved measure performance or as an indirect result of changes in provider behavior or in the system. We have grouped the unintended consequences below according to direct and indirect effects on resource utilization, provider behavior, and patients.

Unintended effects of changes in resource use

Increased costs

Several studies have documented decreased costs of care following the implementation of clinical practice guidelines for surgical or preventive procedures.7-9 However, standardization of care through clinical practice guidelines may lead to more predictable, but not necessarily lower, costs of care. For example, additional costs of guideline implementation may stem from increased use of medical resources such as medications or provider time, or from the costs of training, coordination, and human resource and information management in a quality improvement program.10 Performance measurement can be quite costly if it requires careful chart reviews; measurement can be made more efficient with comprehensive electronic medical records and computerized physician order entry programs, which are themselves costly to implement and maintain.11-12 A cost-effectiveness analysis of the simulated implementation of six different clinical practice guidelines suggested that although guidelines can maximize cost-effectiveness for individual patients, this process does not always maximize cost-effectiveness from a societal perspective.13
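To make the individual-versus-societal tension concrete, the following minimal sketch (with all costs and QALY values invented for illustration, not drawn from the cited analysis13) computes an incremental cost-effectiveness ratio for one patient and contrasts it with a fixed-budget, population-level view:

```python
# Illustrative only: every cost and QALY figure below is invented.

def icer(cost_new: float, qaly_new: float, cost_old: float, qaly_old: float) -> float:
    """Incremental cost-effectiveness ratio: extra cost per extra QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Hypothetical per-patient figures (cost in USD, effectiveness in QALYs)
usual_cost, usual_qaly = 1_000, 1.00
guideline_cost, guideline_qaly = 3_000, 1.10  # best choice for an individual

print(f"ICER: ${icer(guideline_cost, guideline_qaly, usual_cost, usual_qaly):,.0f}/QALY")
# ICER: $20,000/QALY

# Societal view: a fixed budget may buy more total QALYs from a cheaper,
# slightly less effective option applied to more patients.
budget = 300_000
qalys_a = (budget // guideline_cost) * (guideline_qaly - usual_qaly)  # 100 patients * 0.10
cheaper_cost, cheaper_gain = 1_500, 0.06                              # invented alternative
qalys_b = (budget // cheaper_cost) * cheaper_gain                     # 200 patients * 0.06
print(f"Total QALYs gained: option A {qalys_a:.1f}, option B {qalys_b:.1f}")
# option A 10.0, option B 12.0: individually optimal is not societally optimal
```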

Increased health disparities

Although quality improvement programs implemented at a population level are thought to be a method of decreasing disparities in care, improvements in care may exacerbate health care inequalities rather than close such gaps.14 If the benefits of a quality improvement program depend on individual patient response and access to care, persons with greater resources, social networks, or social capital may have greater access to the benefits of the program. Equal distribution of quality improvement resources throughout the system may therefore produce unequal performance across the population, depending on individual patients' ability to access care. Until the effects of a quality improvement program are distributed throughout the entire population, disparities in health care may initially increase. For example, Sequist and colleagues analyzed the effect of quality improvement efforts on racial disparities in diabetes care within a multispecialty group practice. Although the efforts improved some aspects of diabetes care, disparities persisted in the use of statins and in the achievement of glycemic control.15

Additional resources may not be equally available throughout the system, leading to even greater disparities. In the inpatient setting, hospitals with limited resources for internal quality improvement initiatives have been shown to perform differently from those with greater resources. A comparison of performance on quality measures between US hospitals with a high percentage of low socio-economic status patients and hospitals with a low percentage of such patients suggests that providers with greater resources to invest in quality improvement were better able to improve their performance, thereby exacerbating health care disparities.16

Unintended effects on provider behavior

A potential effect of quality improvement programs on provider behavior arises when programs use process measures (e.g., the percentage of asthma patients who received an asthma action plan) rather than an associated outcome measure (e.g., improved patient understanding of their management). In this situation, providers may perceive the measurement itself as defining what is “important.”17 As a result, the actions measured in a quality improvement program become more important than what they are supposed to represent. The lack of a valid measure can lead to “measurement fixation,” in which physicians are unintentionally encouraged to improve the measure rather than the intended goal of the measure.18

Another potential unintended consequence is the stifling of provider innovation.19 The implementation of clinical practice guidelines should decrease inappropriate variation,20 but practice variation may have some utility. Medical practice is dynamic: as procedures or protocols are introduced, physicians try out new methods of care, and new efficiencies or innovations may be discovered. As a result, quality improvement programs that measure processes of care rather than outcomes may inhibit process-level innovation.

In a system with a “fixed” amount of time and capacity to accomplish tasks during the physician visit, gains in quality in one area may simply come at the expense of quality of care in another area. Over the last several decades, the scope of services expected from primary care physicians has increased,21 while the amount of time for the typical office visit has not increased substantially.22 As a result, the time and resources used to improve one aspect of practice performance are taken away from guideline adherence in other areas. For example, efforts to improve physician counseling on injury prevention may simply ‘crowd out’ time in the visit to address other counseling topics such as diet, nutrition, exercise, or smoking cessation.

Lastly, there is evidence that pressures from quality improvement programs may lead providers to care only for patient populations amenable to high performance on quality metrics. Implementation of public reporting of coronary artery bypass graft surgery mortality rates in New York State led to decreased mortality rates in New York compared with other states over the same period. However, New York State surgeons were found to be operating on lower-risk patients during that period and referring higher-risk patients elsewhere at a higher rate.23 In another example, McDonald and Roland described negative provider attitudes towards patients who might adversely affect the providers' performance on ‘pay for performance’ measures.24

Unintended effects on patients

Individual patient preferences about clinical care may not directly match quality improvement performance measures. For example, a US Veterans Affairs hospital with low performance on colorectal cancer screening undertook a chart review to determine why individual patients were unscreened; 47% of cases were unscreened because of patient preference.25 Pursuing a performance measure that does not take patient preferences into account may decrease patient satisfaction, as well as provider satisfaction.

In addition, public reporting initiatives provide information to patients that may affect patient-provider relationships,26-27 particularly when a patient has limited ability to change providers due to geographic or insurance constraints. There are examples of positive outcomes when patients and providers collaborate to improve care;28 however, there may be negative consequences for patients who lose trust in their hospitals or practitioners.

Lastly, quality information is not always accurate or reliable. Statistical uncertainty (e.g., when only small numbers of patients are available for measurement at a given institution) decreases the ability to distinguish providers delivering high-quality care from those delivering low-quality care.29-30 In addition, administrative data, which are often used to determine performance rates, have well-described data quality limitations.31-33 If patients act on inaccurate or unreliable information, they may not optimize their health outcomes.
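The small-numbers problem can be illustrated with a short calculation. The sketch below, using invented figures and a simple normal-approximation confidence interval, shows how two hospitals with an apparent two-fold difference in complication rates may be statistically indistinguishable:

```python
import math

def rate_ci(events: int, n: int, z: float = 1.96):
    """Normal-approximation 95% CI for a proportion (illustration only)."""
    p = events / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half), min(1.0, p + half)

# Invented example: two hospitals compared on a complication rate
for name, events, n in [("Hospital A", 2, 25), ("Hospital B", 4, 25)]:
    p, lo, hi = rate_ci(events, n)
    print(f"{name}: {p:.0%} (95% CI {lo:.0%}-{hi:.0%})")

# Hospital A: 8% (95% CI 0%-19%); Hospital B: 16% (95% CI 2%-30%).
# The intervals overlap heavily, so the apparent two-fold difference
# may be statistical noise rather than a real quality difference.
```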

Monitoring and Preventing Unintended Consequences

Anticipating unintended consequences during the planning stages of a quality improvement program, and then monitoring for them, will help prevent poor patient outcomes. Using an asthma quality improvement example, we illustrate potential unintended consequences below (see Table 1 for a summary).

Table 1. Potential Strategies to Anticipate and Address Different Types of Unintended Consequences

Resource Use

General
Example: Increased costs to the medical system due to direct patient care costs, or due to costs of data collection and information management.
Strategy: Anticipate and monitor potential increased costs of guideline implementation; assess cost-effectiveness prior to implementation. Consider subsidies to providers for electronic medical records, which can facilitate data collection and management.

Asthma QI example
Example: Increased cost of collecting and reporting data on prescribing patterns and patient compliance.
Strategy: Balance the increased cost and provider burden of data collection and information management against savings from fewer emergency room visits and hospitalizations. Consider pay for performance programs to provide resources to providers who will not benefit financially from savings generated by decreased use of health system resources.

Provider Behavior

General
Example: Decreased attention to areas not subject to measurement after guideline implementation; the measurement indicator defines what is “important,” and the actions that are “measurable” become more important than what they are supposed to represent.
Strategy: Monitor provider adherence to other guidelines that may be affected by increased attention to quality measure performance. Ensure that the measurement indicator closely matches the outcome or action that is desired.

Asthma QI example
Example: Increased focus on prescribing asthma medications may decrease attention to non-asthma issues or other asthma topics; aggressive prescription of daily inhaled corticosteroids may increase the number of children with intermittent asthma who are prescribed such medicines unnecessarily.
Strategy: Monitor provider ability to address other aspects of asthma care, as well as adherence to other guidelines. Monitor actual changes in patient symptoms and health care utilization.

Patient Effects

General
Example: Access to imperfect information, or information that is difficult for a lay person to use for decision-making; patient preferences may not directly match recommendations from a clinical practice guideline or quality improvement measure.
Strategy: Ongoing validation of measures, and optimized accessibility of publicly reported information, with adequate explanation of the methods used and how to interpret the information provided. Monitor patient satisfaction, trust, and perceived completeness of the visit. Consider ‘exception reporting’ when measuring provider performance.

Asthma QI example
Example: Misinterpretation of performance measures for asthma care; decreased satisfaction with asthma care or overall care.
Strategy: Monitor patient satisfaction, trust, and perceived completeness of asthma care or of the visit.

Asthma is a common focus for pediatric quality of care: it is highly prevalent, there is a good evidence base for preventive measures, and much of its morbidity is preventable with good control. A commonly used performance measure is the percentage of patients with persistent asthma who are prescribed a daily controller medication.
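As a minimal sketch of how such a process measure might be computed, assuming hypothetical record fields (`severity`, `on_controller`) rather than any particular registry's schema:

```python
# Hypothetical patient records; field names and values are assumptions.
patients = [
    {"id": 1, "severity": "persistent",   "on_controller": True},
    {"id": 2, "severity": "persistent",   "on_controller": False},
    {"id": 3, "severity": "intermittent", "on_controller": False},
    {"id": 4, "severity": "persistent",   "on_controller": True},
]

# Denominator: patients with persistent asthma.
# Numerator: those in the denominator prescribed a daily controller medication.
denominator = [p for p in patients if p["severity"] == "persistent"]
numerator = [p for p in denominator if p["on_controller"]]
rate = len(numerator) / len(denominator)
print(f"Controller-medication rate: {rate:.0%}")  # 67% (2 of 3)
```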

Effects on resources

Increased prescription of daily controller medications can lead to increased costs: the cost of the medications themselves, the cost of provider time to ensure that patients have adequate follow-up and refills, the cost of treating patients who do not administer the medication correctly (e.g., without a spacer or without rinsing the mouth after use) and develop oral thrush, and the cost of gathering and managing the data on measure performance. Anticipated decreased costs to the medical system include decreased emergency department (ED) visits and decreased hospital admissions. Improving rates of inhaled corticosteroid use is a cost-effective method for improving pediatric asthma outcomes; however, given the relative infrequency of hospitalizations and ED visits, any cost savings may not be apparent or immediate.
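A rough sketch of this budget arithmetic, with every figure invented for illustration: medication and monitoring costs accrue for every treated child, while the offsetting savings depend on averting relatively infrequent ED visits and admissions.

```python
# All figures are hypothetical, chosen only to illustrate the cost structure.
n_children = 1_000
controller_cost = 600            # annual medication + follow-up cost per child
ed_visit_cost, admit_cost = 1_000, 8_000

# Baseline vs. post-QI utilization rates (invented)
ed_rate_before, ed_rate_after = 0.15, 0.10
admit_rate_before, admit_rate_after = 0.05, 0.03

added_cost = n_children * controller_cost
savings = n_children * ((ed_rate_before - ed_rate_after) * ed_visit_cost
                        + (admit_rate_before - admit_rate_after) * admit_cost)
print(f"Added cost: ${added_cost:,}; savings: ${savings:,.0f}")
# Added cost: $600,000; savings: $210,000 in the first year. Because severe
# events are infrequent, savings may lag costs or accrue to a different payor.
```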

Effects on provider behavior

Counseling on other topics during the patient visit, such as poison and injury prevention, diet and nutrition, or obesity prevention, may be “crowded out.” Even when an intervention focuses on a specific topic for one disease (e.g., asthma), other asthma topics that are not the focus of the intervention may be neglected. To address this issue, a quality improvement intervention should be sensitive to other aspects of asthma care, as well as to adherence to other guidelines, such as immunizations or the use of asthma action plans.

The effect of “measure fixation,” in which the goal becomes improving rates on the measure rather than improving patient outcomes, may lead to aggressive prescription of daily inhaled corticosteroids and an increased number of children with intermittent asthma who are prescribed such medicines unnecessarily. To address “measure fixation,” validation of process measures could also include assessment of outcomes such as daily patient asthma symptoms, emergency department visits, and hospitalizations. Alternatively, “whole system measures” can be applied, which are designed to assess how well the health care system is functioning as a whole to maximize patient outcomes.34 This conceptual approach avoids the problems of “measure fixation”; however, these measures remain to be validated as representative of whole health care system function.

Disparities could be exacerbated in asthma management either by individual providers turning away families who are perceived as less likely to be compliant, or by lower measured performance at clinics serving a high proportion of patients with more social stressors and fewer resources. To detect this, comparisons of inhaled corticosteroid prescription performance could be stratified by factors such as patient socio-economic status. To prevent a pay for performance program from widening such gaps, rewards could be adjusted for provider resource availability.
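A minimal sketch of such stratification, with invented strata and records; the point is that an aggregate rate can hide a gap that stratified reporting exposes:

```python
from collections import defaultdict

# Hypothetical records: (SES stratum, prescribed a daily controller?)
records = [("low", True), ("low", False), ("low", False),
           ("high", True), ("high", True), ("high", False)]

by_stratum = defaultdict(lambda: [0, 0])  # stratum -> [numerator, denominator]
for stratum, on_controller in records:
    by_stratum[stratum][1] += 1
    if on_controller:
        by_stratum[stratum][0] += 1

for stratum, (num, den) in sorted(by_stratum.items()):
    print(f"SES {stratum}: {num}/{den} = {num/den:.0%}")
# SES high: 2/3 = 67%; SES low: 1/3 = 33%.
# The unstratified overall rate (3/6 = 50%) would hide this gap entirely.
```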

Effects on patients

Physician interactions with patients may be affected if physicians are aware that they are being evaluated on the prescription of daily inhaled corticosteroids. National asthma guidelines suggest the development of a partnership between the parent, child, and physician. Patients perceived as less likely to be compliant may be turned away from care, or parents may feel more ‘pressured’ by physicians regarding the importance of daily inhaled corticosteroids, which may decrease trust in and satisfaction with the parent-physician visit. To address this issue, a quality improvement program could assess effects on patient satisfaction, trust, and perceived completeness of the visit, or could allow physicians to exclude patients from the measured group, a process known as exception reporting. Exception criteria could include: patients in whom inhaled corticosteroids are contraindicated; patients who decline their use; or, in the case of influenza vaccination for asthma patients, a provider having made multiple attempts to bring the patient in for the vaccine. These criteria must be well defined using reliable data sources and be as objective as possible. Exception reporting has been used in the United Kingdom's pay for performance program, where researchers found that most providers had a low rate of exception reporting (6% of patients excluded from measurement), with little variation between providers.35
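The following sketch illustrates the mechanics of exception reporting with invented patients and exception codes (not the UK program's actual codes): excluded patients leave the denominator, raising the measured rate, which is why the exception rate itself is worth monitoring.

```python
# Hypothetical patients; the "exception" values are illustrative criteria only.
patients = [
    {"on_controller": True,  "exception": None},
    {"on_controller": False, "exception": "contraindicated"},
    {"on_controller": False, "exception": "patient_declined"},
    {"on_controller": False, "exception": None},
    {"on_controller": True,  "exception": None},
]

eligible = [p for p in patients if p["exception"] is None]
rate = sum(p["on_controller"] for p in eligible) / len(eligible)
exception_rate = (len(patients) - len(eligible)) / len(patients)
print(f"Measured rate: {rate:.0%}")        # 67%, versus 40% with no exceptions
print(f"Exception rate: {exception_rate:.0%}")  # 40% in this toy sample
# The UK program observed much lower rates (around 6%);35 unusually high
# exception rates can signal gaming of the measure rather than valid exclusions.
```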

Summary

In summary, unintended consequences of quality improvement programs can be categorized according to their effects on resource utilization, provider behavior, and patients. This framework provides a lens through which to consider potential negative, unintended consequences when planning a quality improvement initiative, and it suggests methods of monitoring for each category: careful anticipation of changes in the flow of resources, with cost-effectiveness analyses of the program at the outset and periodically thereafter; close attention to provider attitudes and behavior, including monitoring of other important health care interventions at risk of being “crowded out”; and careful choice of measures with strong supporting evidence, with monitoring of important patient outcomes, including patient satisfaction, across all populations, stratified to assess for disparities in vulnerable populations. Recognition and anticipation of the possible unintended consequences of guideline implementation is a critical step to harnessing the full benefits of quality improvement in practice.

Acknowledgement

Funded by the National Heart, Lung, and Blood Institute (HL70771) and the NICHD (HD044331).


References

1. The Leapfrog Group for Patient Safety. http://www.leapfroggroup.org/ [Accessed August 2009].
2. Hospital Compare: a quality tool provided by Medicare. www.hospitalcompare.hhs.gov/ [Accessed August 2009].
3. Sisk JE. How are health care organizations using clinical guidelines? Health Aff (Millwood). 1998 Sep-Oct;17(5):91–109. doi: 10.1377/hlthaff.17.5.91.
4. Doran T, Fullwood C, Gravelle H, et al. Pay-for-performance programs in family practices in the United Kingdom. N Engl J Med. 2006;355(4):375–384. doi: 10.1056/NEJMsa055505.
5. Grimshaw JM, Russell IT. Effect of clinical guidelines on medical practice: a systematic review of rigorous evaluations. Lancet. 1993 Nov 27;342(8883):1317–1322. doi: 10.1016/0140-6736(93)92244-n.
6. Institute of Medicine (U.S.) Committee on Quality of Health Care in America. Crossing the quality chasm: a new health system for the 21st century. Washington, DC: National Academy Press; 2001.
7. Park JK, Frim DM, Schwartz MS, et al. The use of clinical practice guidelines (CPGs) to evaluate practice and control costs in ventriculoperitoneal shunt management. Surg Neurol. 1997 Dec;48(6):536–541. doi: 10.1016/s0090-3019(97)00364-9.
8. O'Brien JA Jr, Jacobs LM, Pierce D. Clinical practice guidelines and the cost of care. A growing alliance. Int J Technol Assess Health Care. 2000 Autumn;16(4):1077–1091. doi: 10.1017/s0266462300103137.
9. Pitimana-aree S, Forrest D, Brown G, Anis A, Wang XH, Dodek P. Implementation of a clinical practice guideline for stress ulcer prophylaxis increases appropriateness and decreases cost of care. Intensive Care Med. 1998 Mar;24(3):217–223. doi: 10.1007/s001340050553.
10. Schneider JE, Peterson NA, Vaughn TE, Mooss EN, Doebbeling BN. Clinical practice guidelines and organizational adaptation: a framework for analyzing economic effects. Int J Technol Assess Health Care. 2006 Winter;22(1):58–66. doi: 10.1017/s0266462306050847.
11. Miller RH, West C, Brown TM, Sim I, Ganchoff C. The value of electronic health records in solo or small group practices. Health Aff. 2005;24(5):1127–1137. doi: 10.1377/hlthaff.24.5.1127.
12. Miller RH, West CE. The value of electronic health records in community health centers: policy implications. Health Aff. 2007;26(1):206–214. doi: 10.1377/hlthaff.26.1.206.
13. Granata AV, Hillman AL. Competing practice guidelines: using cost-effectiveness analysis to make optimal decisions. Ann Intern Med. 1998 Jan 1;128(1):56–63. doi: 10.7326/0003-4819-128-1-199801010-00009.
14. Phelan JC, Link BG, Diez-Roux A, Kawachi I, Levin B. “Fundamental causes” of social inequalities in mortality: a test of the theory. J Health Soc Behav. 2004 Sep;45(3):265–285. doi: 10.1177/002214650404500303.
15. Sequist TD, Adams A, Zhang F, Ross-Degnan D, Ayanian JZ. Effect of quality improvement on racial disparities in diabetes care. Arch Intern Med. 2006 Mar 27;166(6):675–681. doi: 10.1001/archinte.166.6.675.
16**. Werner RM, Goldman LE, Dudley RA. Comparison of change in quality of care between safety-net and non-safety-net hospitals. JAMA. 2008 May 14;299(18):2180–2187. doi: 10.1001/jama.299.18.2180. [Disparities: a stark illustration of the potential for increased disparities with quality improvement programs; this study demonstrates that United States hospitals with a high proportion of low-income patients have lower baseline quality performance and a slower rate of improvement over time, compared with hospitals with a low proportion of low-income patients.]
17. Casalino LP. The unintended consequences of measuring quality on the quality of medical care. N Engl J Med. 1999 Oct 7;341(15):1147–1150. doi: 10.1056/NEJM199910073411511.
18. Smith RB, Cheung R, Owens P, Wilson RM, Simpson L. Medicaid markets and pediatric patient safety in hospitals. Health Serv Res. 2007 Oct;42(5):1981–1998. doi: 10.1111/j.1475-6773.2007.00698.x.
19. Linton AL, Peachey DK. Guidelines for medical practice: 1. The reasons why. CMAJ. 1990 Sep 15;143(6):485–490.
20. Field MJ, Lohr KN, eds; Institute of Medicine, Committee to Advise the Public Health Service on Clinical Practice Guidelines. Clinical practice guidelines: directions for a new program. Washington, DC: National Academy Press; 1990.
21. St Peter RF, Reed MC, Kemper P, Blumenthal D. Changes in the scope of care provided by primary care physicians. N Engl J Med. 1999 Dec 23;341(26):1980–1985. doi: 10.1056/NEJM199912233412606.
22. Mechanic D, McAlpine DD, Rosenthal M. Are patients' office visits with physicians getting shorter? N Engl J Med. 2001 Jan 18;344(3):198–204. doi: 10.1056/NEJM200101183440307.
23. Omoigui NA, Miller DP, Brown KJ, et al. Outmigration for coronary bypass surgery in an era of public dissemination of clinical outcomes. Circulation. 1996 Jan 1;93(1):27–33. doi: 10.1161/01.cir.93.1.27.
24*. McDonald R, Roland M. Pay for performance in primary care in England and California: comparison of unintended consequences. Ann Fam Med. 2009 Mar-Apr;7(2):121–127. doi: 10.1370/afm.946. [Provider behavior: a comparison of differences in provider attitudes towards patients in two pay for performance programs with different incentive structures and methods of measurement; limited by small sample size and exploratory methods.]
25. Walter LC, Davidowitz NP, Heineken PA, Covinsky KE. Pitfalls of converting practice guidelines into quality measures: lessons learned from a VA performance measure. JAMA. 2004 May 26;291(20):2466–2470. doi: 10.1001/jama.291.20.2466.
26. Britto MT, DeVellis RF, Hornung RW, DeFriese GH, Atherton HD, Slap GB. Health care preferences and priorities of adolescents with chronic illnesses. Pediatrics. 2004 Nov;114(5):1272–1280. doi: 10.1542/peds.2003-1134-L.
27**. Faber M, Bosch M, Wollersheim H, Leatherman S, Grol R. Public reporting in health care: how do consumers use quality-of-care information? A systematic review. Med Care. 2009 Jan;47(1):1–8. doi: 10.1097/MLR.0b013e3181808bb5. [Patient effects: a systematic review demonstrating that consumers' exposure to publicly reported comparison information is associated with a change in behavior; limited by a small number of studies.]
28. Gawande A. The bell curve: what happens when patients find out how good their doctors really are? The New Yorker. 2004 Dec 6.
29. Dimick JB, Welch HG, Birkmeyer JD. Surgical mortality as an indicator of hospital quality: the problem with small sample size. JAMA. 2004 Aug 18;292(7):847–851. doi: 10.1001/jama.292.7.847.
30. Hofer TP, Hayward RA. Identifying poor-quality hospitals. Can hospital mortality rates detect quality problems for medical diagnoses? Med Care. 1996 Aug;34(8):737–753. doi: 10.1097/00005650-199608000-00002.
31. Marshall MN, Shekelle PG, Leatherman S, Brook RH. The public release of performance data: what do we expect to gain? A review of the evidence. JAMA. 2000 Apr 12;283(14):1866–1874. doi: 10.1001/jama.283.14.1866.
32. Welke KF, Diggs BS, Karamlou T, Ungerleider RM. Comparison of pediatric cardiac surgical mortality rates from national administrative data to contemporary clinical standards. Ann Thorac Surg. 2009 Jan;87(1):216–222; discussion 222–223. doi: 10.1016/j.athoracsur.2008.10.032.
33. Scholle SH, Roski J, Dunn DL, et al. Availability of data for measuring physician quality performance. Am J Manag Care. 2009 Jan;15(1):67–72.
34. Martin L, Nelson E, Lloyd R, Nolan T. Whole System Measures. IHI Innovation Series white paper. Cambridge, MA: Institute for Healthcare Improvement; 2007. http://www.ihi.org/IHI/Results/WhitePapers/WholeSystemMeasuresWhitePaper.htm
35*. Doran T, Fullwood C, Reeves D, Gravelle H, Roland M. Exclusion of patients from pay-for-performance targets by English physicians. N Engl J Med. 2008 Jul 17;359(3):274–284. doi: 10.1056/NEJMsa0800310. [Provider behavior: suggests that most providers do not over-use exception reporting to improve their performance measurement in a national pay for performance program, but that some providers exclude more patients than others and achieve higher performance.]
