Abstract
Objective
To examine the effect of public reporting (PR) and financial incentives tied to quality performance on the use of care management practices (CMPs) among small- and medium-sized physician groups.
Data
Survey data from The National Study of Small and Medium-sized Physician Practices were used. Primary data collection was also conducted to assess community-level PR activities. The final sample included 643 practices engaged in quality reporting; about half of these practices were subject to PR.
Study Design
We used a treatment effects model. The instrumental variables were the community-level variables that capture the level of PR activity in each community in which the practices operate.
Findings
(1) PR is associated with increased use of CMPs, but the estimate is not statistically significant; (2) financial incentives are associated with greater use of CMPs; (3) practices' awareness/sensitivity to quality reports is positively related to their use of CMPs; and (4) combined PR and financial incentives jointly affect CMP use to a greater degree than either of these factors alone.
Conclusion
Small- to medium-sized practices appear to respond to PR and financial incentives by greater use of CMPs. Future research needs to investigate the appropriate mix and type of incentive arrangements and quality reporting.
Keywords: Quality improvement, public reporting, physician groups, financial incentives
The need to improve both the quality and the safety of health care in the United States is well documented. Reporting of quality data and pay for performance for physicians have emerged as two of the most widely advocated strategies for accelerating quality improvement (Rhoads, Konety, and Dudley 2009; Rodriguez et al. 2009; Gavagan et al. 2010; Hibbard, Stockard, and Tusler 2003). Quality performance reporting is argued to improve physician quality of care by appealing to physicians' professional ethos and increasing transparency of quality information to relevant peers or external stakeholders (Marshall et al. 2000; Lindenauer et al. 2007). Pay for performance programs are intended to reward quality performance and reverse financial incentives that can deter physician practices from investing in quality improvement efforts (Beaulieu and Horrigan 2005; Cutler et al. 2007; Lindenauer et al. 2007; Young et al. 2007b; Glickman et al. 2009). However, despite the intuitive appeal of quality reporting and pay for performance, and the increasing body of evidence in the hospital sector, little is known about how such efforts affect quality improvement in physician practices (e.g., Marshall et al. 2000; Peterson et al. 2006; Rosenthal and Frank 2006; Greene and Nash 2009; Robinson et al. 2009a,b). Although there is broad agreement that providing performance data and feedback to physicians is likely to motivate improvements in care, there is debate about whether such improvements are motivated at the margin by physician performance data that are publicly reported, relative to data reported privately to the physician group and its members (Wagner et al. 2001; Tsai et al. 2005; Werner and Asch 2005; Casalino et al. 2007).
A second point of contention is whether physician quality performance data alone are sufficient to motivate changes in care processes or, alternatively, whether specific financial incentives must be linked to performance data to have an effect on quality improvement efforts (Shaller et al. 2003; Rosenthal et al. 2004; Fisher 2006; Casalino et al. 2007; Young et al. 2007a).
Thus, two dimensions of quality reporting in physician practices warrant further examination: (1) whether quality information is publicly reported or, alternatively, reported only to the practice and the physicians working in that practice and (2) whether reimbursement and/or financial incentives are tied to the quality metrics reported. In principle, quality reporting may vary independently on these two dimensions such that, for example, a practice may receive reports publicly, but they may not be linked to reimbursement or incentive payments. Alternatively, quality reports may be promulgated only to physician practices, and these reports may or may not be linked to financial payment.
This study examines the question of whether physician practices subject to quality reporting show marginal differences in the use of care management practices (CMPs) if quality reports are made public and if financial incentives are tied to quality performance. CMPs are defined as organized processes implemented by physician groups to systematically improve the quality of care for the patients they serve (Casalino et al. 2003; Shortell et al. 2009). CMPs such as use of patient registries, electronic medical records, physician performance feedback, provider education, and reminders have been associated with significant improvements in provider adherence to guidelines and with significant improvements in patient disease control (Weingarten et al. 2002; Welch et al. 2011). Similarly, organizational interventions that facilitate the structured and regular review of patients showed a positive effect on a variety of quality process measures (Renders et al. 2001; Gilbody et al. 2003).
Because the majority of the existing studies examining these issues were conducted with large physician practice organizations (Casalino et al. 2003; Li et al. 2004; Mehrotra et al. 2007; de Brantes and D'Andrea 2009), it is difficult to generalize the findings from those studies to all physicians and physician practices. Our study extends this line of inquiry by examining the association of public reporting (PR) of physician quality and financial incentives for quality performance in small- and medium-sized physician practices.
Conceptual Framework
Assessing and providing feedback on the performance of physicians is widely regarded as a key factor for improving the quality and efficiency of health care delivery (Marshall et al. 2000; Campion et al. 2011; O'Brien et al. 2012). Although such performance data may be provided by multiple sponsors and in multiple forms, the underlying principle is similar—by making physicians aware of their performance and encouraging or incentivizing improvement in areas that are lacking, the quality of health care delivered will improve. At the physician group level, this means that performance data on physicians will play a key role in motivating the use of practices associated with high-quality care.
However, two related challenges are to determine how best to disseminate performance information and what, if any, incentives must be linked to performance data so that physicians will be likely to use it (Gagliardi et al. 2011; Hafner et al. 2011). Two principal forms of dissemination of physician quality information currently dominate. First, physician quality information may be collected, but disseminated only among the providers and staff within a practice. Alternatively, physician quality data may be collected and published in publicly available reports. Relative to private reporting of quality data, PR of physician quality information assumes that as rational actors in a market, physician practices will be motivated to improve quality because patients will “vote with their feet” and take their business to practices that report higher quality of care, and that the force of public opinion will act as a catalyst for physician practices to improve their care (Hartig and Allison 2007; Barr et al. 2008). However, several behavioral issues may undercut these arguments. Patients have been shown to both underutilize and misunderstand PR on physician quality, public opinion to date has not unified sufficiently to galvanize physician behavior change, and switching behavior may be constrained by provider availability or health plan restrictions (Pierce, Bozic, and Bradford 2007; Parker et al. 2012). These concerns raise the question of whether PR is marginally more effective in motivating CMP use than quality performance reports provided only to practices and their affiliated physicians.
The second challenge focuses on whether reporting of physician performance alone—internally or publicly—is a necessary but insufficient condition for physician practices to change their behavior. It may be the case that, to realize the benefits of reporting physician quality information, positive performance on such reports needs to be explicitly linked to greater financial rewards. Combining PR and financial rewards may provide a greater incentive to improve quality processes given that practices may obtain the “best of both worlds” by receiving direct financial benefits for improving care quality and the potential for increased market share and revenue if the public uses the reports to select their providers.
Finally, differences among physician practices also imply that there is likely to be a systematic selection of certain physician practice types into the different quality reporting and financial arrangements. For instance, health plan incentive arrangements are not typically applied uniformly to all physician practices in a network, and those practices that already have CMPs may be more inclined to aggressively pursue incentive arrangements. Alternatively, health plans may selectively impose incentives on lower-performing practices that have not yet adopted CMPs in order to improve their performance. As such, CMP adoption and incentive arrangements are potentially endogenous, and the bias can either overstate or understate the effect of the incentives.
Following from the discussion above, we test the following hypotheses: Among those physician practices that are subject to either public or private quality reporting, (1) those practices whose quality performance is publicly reported show greater use of CMPs than those that receive only private reporting; (2) those practices in which financial incentives are tied to quality reporting show greater use of CMPs than those that do not; (3) those practices that are aware of and sensitive to public or private quality reports in their operational routines show greater use of CMPs than those that are not; and (4) practices that are subject to both PR and financial incentives for quality performance show greater use of CMPs relative to practices subject to only one or the other.
Methods
Data
The primary data source for the study was The National Study of Small and Medium-sized Physician Practices (NSSMPP). This population is of particular interest in that most physicians in the United States do not practice in integrated health systems or large medical groups. One third (35.1 percent) of visits to office-based physicians in the United States are to solo practitioners, and 88 percent are to practices with nine or fewer physicians. Widespread CMP implementation in these practices may be particularly challenging relative to larger practices because they have fewer staff, more time pressures, and fewer resources to support implementation (Rittenhouse et al. 2010, 2011).
The survey was designed to provide cross-sectional information about small- and medium-sized physician practices (defined as those with fewer than 20 practicing physicians) providing care for the chronically ill and located throughout the United States but focused on 14 communities designated as the Aligning Forces for Quality (AF4Q) grantees (AF4Q communities are listed in the Appendix). Specifically, we used the IMS Healthcare Organization Services database to construct a sampling frame consisting of the following practice types: primary care (family physicians, general internists, and general practitioners); single specialty cardiology, endocrinology, or pulmonology; or multispecialty practices with at least 60 percent physicians in these specialties. We then created a stratified random sample, with stratification based on practice size (1–2, 3–8, 9–12, and 13–19 physicians), specialty type, and location (AF4Q community or remainder of the United States). Appropriate weighting enabled construction of a nationally representative sample of the population of interest.
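Because the sample is stratified, nationally representative estimates require design weights equal to the inverse of each stratum's sampling fraction. The weighting logic can be sketched as follows; all stratum labels and counts below are hypothetical, not the study's actual frame:

```python
# Sketch of design weighting for a stratified random sample: each respondent's
# weight is the inverse of the sampling fraction in his or her stratum.
# All stratum labels and counts below are hypothetical.

def stratum_weights(population_counts, sample_counts):
    """Return the design weight (N_h / n_h) for each stratum h."""
    return {
        stratum: n_pop / sample_counts[stratum]
        for stratum, n_pop in population_counts.items()
    }

# Hypothetical strata defined by (practice-size band, location).
pop = {("1-2", "AF4Q"): 4000, ("3-8", "AF4Q"): 2500, ("1-2", "non-AF4Q"): 40000}
samp = {("1-2", "AF4Q"): 200, ("3-8", "AF4Q"): 250, ("1-2", "non-AF4Q"): 100}
weights = stratum_weights(pop, samp)
```

A practice in a stratum where 200 of 4,000 practices were sampled would thus represent 20 practices in population-level estimates.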
The NSSMPP survey was conducted via telephone by a professional survey firm from July 2007 through March 2009. The firm interviewed the lead physician or lead administrator of the practice; when this was not possible, the firm interviewed another knowledgeable physician in the practice. The response rate was 67 percent. Given that more than two thirds of the potential respondents completed the survey, we believe nonresponse bias, if any, is unlikely to be a major concern.
Respondents from the 14 AF4Q communities totaled 1,201 practices. However, because the primary aim of the study was to examine the effect of PR relative to nonpublic reporting on CMP use, the final analytic sample included only the 643 practices in AF4Q communities that were subject to either form of quality reporting. In addition, a random sample of physician practices located outside of the AF4Q communities (203 practices) was used to examine whether the AF4Q practices systematically differ from non-AF4Q practices.
Finally, the AF4Q evaluation team conducted primary data collection to assess community-level PR activities in each of the 14 AF4Q communities (Christianson et al. 2010); these included the number, years of operation, and contents of available public reports that provide comparative quality information on physicians or physician practices during the period in which the physician survey was fielded. The collected information was current as of 2009. PR is not the only source of physician quality information. To capture the prevalence of physician quality information that is not truly public in nature—that is, reports issued by health plans and made available only to their enrollees—we estimated the percentage of the area population who were enrolled in health plans that issue such reports to their members. This was estimated from the InterStudy data on health plan enrollment and the Census estimate of population size in each community. We used these data to construct the instrumental variables (IVs) used for the treatment effects regression model analysis, as described below.
Variables
The main outcome variable is an index reflecting the level of CMP use by a physician practice. This measure is based on the Physician Organization Care Management Index (POCMI) developed by Casalino et al. (2003). The POCMI contains many of the same or similar indicators used by NCQA or Patient-Centered Medical Home certification programs. POCMI is a summary measure that consists of the following five "domains": case management (e.g., reminders for preventive care, use of nurse care managers), physician feedback, disease registry, clinical practice guidelines, and self-management skills (e.g., use of nonphysician staff educators). Under each domain, physician practices get a "point" for using a CMP for patients with the following chronic conditions: asthma, congestive heart failure, depression, and diabetes. For example, if a physician practice used a disease registry only for its diabetes patients, it would receive only one point. For this analysis, we count those CMPs that are provided by larger physician organizations, such as Independent Physician Associations (IPAs) and Physician Hospital Organizations (PHOs), toward the practices' POCMI measures, even if they are not directly operated by the practices themselves. This is because our sample is limited to small- and medium-sized physician practices that typically lack the resources to implement all CMPs on their own. For these practices, it is likely that the only feasible way of implementing some CMPs—for example, use of nurse care managers for patients with chronic illnesses—for their patients is through the support of larger external entities with which they are affiliated. The maximum possible POCMI value is 24: eight points under the "case management" domain and four points for each of the other four domains.
The POCMI measure makes no assumptions about the relationship among these processes, the order in which they are adopted and implemented, or the relative importance (weight) of each process to quality of care. Because it is designed to capture only the number of care management processes in use in a practice, not a latent construct, there is no reason to expect that the constituent items in the index will cohere in a manner that would result, for example, in factors corresponding to the five designated domains of care management. Currently, no literature suggests that this approach is invalid, and to our knowledge no alternative measure exists.
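The scoring rule just described can be sketched as a simple count. This is a minimal illustration of the rule as stated in the text, in which the case management domain comprises two component practices (hence its maximum of eight points); the data structure and example practice are hypothetical:

```python
# Sketch of the POCMI scoring rule described above: within each domain, a
# practice earns one point per chronic condition for which it uses that CMP.
# Domain and condition labels follow the text; the data structure is hypothetical.

CONDITIONS = ["asthma", "congestive_heart_failure", "depression", "diabetes"]

# Case management comprises two component practices (up to 8 points);
# each remaining domain contributes up to 4 points, for a maximum of 24.
DOMAINS = [
    "case_management_reminders",
    "case_management_nurse_care_managers",
    "physician_feedback",
    "disease_registry",
    "clinical_practice_guidelines",
    "self_management_support",
]

def pocmi_score(practice):
    """practice: dict mapping a domain to the set of conditions it covers."""
    return sum(
        sum(1 for c in CONDITIONS if c in practice.get(domain, set()))
        for domain in DOMAINS
    )

# The paper's example: a disease registry used only for diabetes earns 1 point.
registry_only = {"disease_registry": {"diabetes"}}
```

A practice covering every condition in every domain would score the maximum of 24.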
Our main explanatory variables, obtained from the NSSMPP survey, include the following: (1) a binary indicator variable for whether the quality performance of the practice is publicly reported; (2) a binary indicator variable for whether the practice received additional revenue based on its quality performance during the past year; and (3) a binary indicator variable to capture whether the practice is aware of and/or sensitive to quality reports. We consider a practice to be aware of and sensitive to quality reports if the respondent reported that such reports are discussed at its physician meetings or that its patients discuss PR on quality and/or patient satisfaction with its physicians.
Because the survey relies on self-report and personal recollection of the respondent to capture specific information about the past and current financial arrangements, asking for exact numbers (e.g., percent of practice revenue from bonus income in the previous year) is likely to be subject to inaccuracies and recall error. Use of binary variables, on the other hand, presumably reduces such noise in the data.
To capture the level of community-level PR activity in each of the 14 AF4Q communities, we use the following variables obtained via the AF4Q evaluation team's primary data collection efforts: (1) whether the publicly available physician quality reports contain comparative information on individual physicians (as compared with the communities that have either no PR or have reports that do not contain comparative information at individual-physician level); (2) length of PR (i.e., the number of years the reports have been available in each AF4Q community) entered as a binary variable that equals 1 if the community has been reporting for more than 4 years and zero otherwise; and (3) the percent of population in the community who are enrolled in private health plans that issue their own physician quality reports. While not truly “public” in nature, these health plan–issued physician quality reports presumably have the same function as the community-level PR in incentivizing the physician practices within networks to improve quality.
Other covariates include practice size, ownership type, percentage of physicians who are primary care physicians, as well as the percentage of the practice's patients who are minority, payer mix, and an indicator variable for whether the practice is a specialist-only practice (i.e., practices that are solely comprised of cardiologists, endocrinologists, or pulmonologists).
Analysis
As discussed above, the most serious source of potential bias stems from the possibility that those practices that are subject to PR and financial incentives may be systematically different from those that are not. To reduce this endogeneity bias, we use a treatment effects model (Maddala 1983) estimated via maximum likelihood. The treatment effects model considers the effect of an endogenous binary explanatory variable on a continuous dependent variable, conditional on two sets of independent variables. The model is identified by IVs. The IVs are the community-level variables that capture the level of PR activity in each community in which the practices operate. The rationale is that a physician practice's probability of being subject to PR and to financial incentives tied to quality performance is determined by these market-level factors, but these factors do not affect the practices' use of CMPs directly.
The validity of these community-level PR variables as our IVs depends on two assumptions. First, physician practices do not directly influence the market-level PR activities in each market. Second, there is no unobserved factor that affects both the market-level PR activities and PR of an individual practice's performance. The first assumption is reasonable in this context because physician practices typically have no incentive to voluntarily reveal their quality metrics to the public; in most cases, they are simply subjected to PR requirements as mandated either by public policy (e.g., California's Department of Managed Health Care's requirement to report quality metrics) or by payers. Moreover, small- to medium-sized physician practices typically lack the resources and the initiative to coordinate with one another to collectively influence the market-level PR activities in their communities.
The second assumption may potentially be violated if there is strong consumer demand for physician quality information in certain communities that in turn influences both the market-level PR activities and the practices' PR of their quality metrics. Existing literature suggests that consumers are generally interested in information on the quality of their health care providers (Harris and Buntin 2008). At the same time, the most trusted sources of provider quality information for consumers are their own health care providers, friends, and relatives (Alexander et al. 2011). Thus, consumers are more likely to turn to these trusted informal sources than to seek out and demand public reports. Therefore, it is unlikely that an unobserved consumer demand is confounding the relationship between the market-level PR activities and individual practices' PR of their quality metrics.
In addition, we estimate a separate model to test whether there is a stronger effect of using both PR and financial incentives jointly on practices' use of CMPs. The key independent variable for this model is a binary variable that equals 1 if the practice is subject to both PR and financial incentives and zero if it is subject to only one of the two. Those practices that are not subject to either are removed from the sample (about 19.5 percent of the analytic sample). Thus, in this subsample analysis, we hypothesize that those practices that are subject to both PR and the financial incentives have higher POCMI scores than those that are subject to only one or the other.
Results
Table 1 shows the descriptive statistics of the variables used in this analysis. The average POCMI in our final sample is about 8.6 of the maximum value of 24. The availability of PR on physician quality was still limited at the time of the survey: 9 of the 14 communities had no report, and only 2 had such reports for more than 4 years. Exposure to health plan–issued physician quality reports averaged 28 percent of each community's population. Table 1 also shows that, consistent with prior expectation, practices that are subject to financial incentives and/or PR, and those that are aware of/sensitive to quality reporting, have higher POCMI scores.
Table 1.
Practice-Level Variables (N = 643) | | |
---|---|---|---
Physician organization care management index | 8.58 (6.32) | |
Practice incentives tied to quality reports | % Yes | Yes* | No*
Practice quality is publicly reported | 49.10% | 9.7, 9 (4–14) | 7.5, 6 (3–11)
Receives income from performance | 65.10% | 9.0, 8 (4–14) | 7.5, 6 (3–11)
Aware of/sensitive to quality reports | 18.20% | 12.2, 12 (8–17) | 7.7, 7 (3–12)
Joint effect of PR and income incentives | | |
Subject to neither PR nor income | 19.50% | |
Subject to PR incentive only | 15.40% | |
Subject to income incentive only | 31.00% | |
Subject to both PR and income | 34.10% | |
Physician practice characteristics | | |
No. of physicians in the practice | 3.6 (3.45) | |
Specialist-only practice | 8.60% | |
Practice ownership (omitted: owned by physicians) | | |
Owned by larger medical group | 5.00% | |
Owned by hospital | 14.60% | |
Owned by nonphysician managers | 0.30% | |
Owned by other entities | 3.30% | |
Patient characteristics | | |
% Patients with limited English | 6.17 (11.53) | |
% Patients who are black | 13.27 (18.15) | |
% Patients who are Hispanic | 5.88 (8.30) | |
Revenue source (omitted: % revenue from commercial) | | |
% Revenue: Medicare | 31.3 (16.81) | |
% Revenue: Medicaid | 10.54 (12.89) | |
% Revenue: Other insurance | 4.07 (5.41) | |
% Revenue: Self-pay low Inc. | 3.99 (4.83) | |
% Revenue: Self-pay high Inc. | 2.52 (3.7) | |
% Revenue: Other | 1.17 (7.18) | |

Community-Level Variables (N = 14) |
---|---
Community-level PR activity |
% Pop in private HP with quality reports | 28.12 (21.33)
Length of physician PR >4 years | 14.3% (2 of 14)
PR content (omitted: no physician PR available) |
PR contains info at individual-physician level | 21.4% (3 of 14)
PR does not contain info at individual-physician level | 14.3% (2 of 14)
Note: Mean and standard deviation shown in parentheses for continuous variables. PR, public reporting.
*Mean, median, and interquartile range (in parentheses) of corresponding POCMI values.
As described above, we compared our final sample (i.e., practices located in the AF4Q communities) to the national comparison sample consisting of non-AF4Q practices and found that the AF4Q practices were less likely to be specialist-only practices (9 percent vs 41 percent); had fewer Hispanic patients (6 percent vs 13 percent); and were more likely to receive income based on performance (65 percent vs 40 percent). However, there was only a negligible difference in terms of POCMI measures (8.6 vs 7.6). See Appendix for the full comparison between the national comparison sample and the AF4Q communities.
Tables 2 and 3 show the results of the treatment effects model estimations. Table 2 suggests that, controlling for a variety of practice and patient cohort characteristics, those practices that receive additional income from quality performance have POCMI scores that are, on average, about 7 points higher than those that do not receive such additional income (column 2). It also suggests that, on average, those practices that are aware of and/or sensitive to quality reports have POCMI scores that are about 17 points higher than those that are not (column 3). On the other hand, PR appears to have no significant marginal effect on practices' use of CMPs (column 1). It is important to note, however, that the point estimate for PR suggests a relatively large effect size (about 8 points higher compared with practices whose quality data are not publicly reported).
Table 2.
POCMI | (1) | (2) | (3) |
---|---|---|---|
Practice incentives tied to quality reports | |||
Practice quality is publicly reported | 8.039 (12.341) | ||
Receives income from performance | 6.747 (2.049)*** | ||
Aware of/sensitive to quality reports | 16.595 (1.453)*** | ||
Physician practice characteristics | |||
No. of physicians in the practice | −0.021 (0.103) | −0.023 (0.108) | −0.052 (0.103) |
Specialist-only practice | −3.393 (1.539)** | −1.57 (1.227) | −2.043 (1.412) |
Practice ownership† | |||
Owned by larger medical group | −0.057 (1.373) | −0.06 (1.484) | 0.267 (1.403) |
Owned by hospital | −1.075 (1.806) | −0.033 (0.745) | −0.994 (0.822) |
Owned by nonphysician managers | −6.105 (1.356)*** | −5.025 (1.127)*** | −9.565 (2.143)*** |
Owned by other entities | −0.605 (2.9) | −0.494 (1.372) | 1.384 (1.667) |
Patient characteristics | |||
% Patients with limited English | −0.011 (0.057) | 0.036 (0.033) | −0.055 (0.021)*** |
% Patients who are black | 0.031 (0.01)*** | 0.028 (0.015)* | 0.028 (0.014)** |
% Patients who are Hispanic | 0.062 (0.055) | 0.074 (0.047) | 0.035 (0.03) |
Revenue source‡ | |||
% Revenue: Medicare | 0.032 (0.031) | 0.025 (0.018) | |
% Revenue: Medicaid | 0.037 (0.029) | 0.035 (0.021)* | 0.001 (0.027) |
% Revenue: Other insurance | 0.152 (0.069)** | 0.136 (0.05)*** | 0.12 (0.053)** |
% Revenue: Self-pay low Inc. | −0.024 (0.11) | −0.127 (0.036)*** | 0.069 (0.052) |
% Revenue: Self-pay high Inc. | −0.066 (0.109) | −0.033 (0.055) | −0.145 (0.078)* |
% Revenue: Other | 0.032 (0.033) | 0.013 (0.047) | −0.007 (0.041) |
Constant | 2.545 (6.642) | 2.486 (1.244)** | 4.445 (0.942)*** |
†Omitted: Owned by physicians in practice.
‡Omitted: % Revenue from commercial payers.
*p < .05, **p < .01, ***p < .001.
Table 3 indicates that practices whose communities make publicly available physician quality reports at the individual-physician level are more likely to be subject to financial incentives tied to quality performance (column 1), more likely to receive additional income from performance (column 2), and more likely to be aware of and sensitive to quality reports (column 3). Table 3 also indicates that a longer history of physician quality PR at the community level is consistently associated with a lower likelihood of practices being subject to incentives tied to quality reporting. One possible explanation is that reports that have been in existence for a longer period may be less relevant and less meaningful than more recently issued reports.
Table 3.
Practice Incentives Tied to Quality Reports | (1) | (2) | (3) |
---|---|---|---|
Public reporting (PR) content† | |||
PR contains quality info at individual-physician level | 0.579 (0.15)*** | 0.605 (0.247)** | 0.639 (0.091)*** |
PR does not contain quality info at individual-physician level | 0.018 (0.141) | −0.057 (0.274) | 0.066 (0.074) |
Community-level PR activity | |||
% Pop in private HP with quality reports | 0.006 (0.009) | 0.006 (0.009) | 0.001 (0.002) |
Length of physician PR >4 years | −0.426 (0.201)** | 0.284 (0.224) | −0.631 (0.129)*** |
Physician characteristics | |||
No. of physicians in the practice | 0.011 (0.017) | 0.03 (0.012)** | 0.017 (0.018) |
Specialist-only practice | −0.046 (0.333) | −0.784 (0.191)*** | −0.397 (0.23)* |
Practice ownership‡ | |||
Owned by larger medical group | 0.079 (0.278) | 0.169 (0.268) | 0.021 (0.307) |
Owned by hospital | 0.247 (0.105)** | −0.124 (0.177) | 0.156 (0.115) |
Owned by nonphysician managers | 0.091 (0.778) | −0.315 (0.689) | 0.624 (0.455) |
Owned by other entities | 0.348 (0.301) | 0.205 (0.267) | −0.273 (0.338) |
Patient characteristics | |||
% Patients with limited English | 0.016 (0.009)* | −0.004 (0.006) | 0.014 (0.003)*** |
% Patients who are black | 0.002 (0.002) | 0.003 (0.006) | 0.003 (0.003) |
% Patients who are Hispanic | −0.007 (0.007) | −0.01 (0.008) | 0.008 (0.004)* |
Revenue source§ | |||
% Revenue: Medicare | −0.005 (0.003) | 0.004 (0.005) | −0.002 (0.003) |
% Revenue: Medicaid | −0.003 (0.005) | 0 (0.006) | 0 (0.004) |
% Revenue: Other insurance | −0.006 (0.009) | −0.006 (0.008) | −0.007 (0.006) |
% Revenue: Self-pay low Inc. | −0.022 (0.006)*** | 0.018 (0.015) | −0.04 (0.012)*** |
% Revenue: Self-pay high Inc. | 0.017 (0.014) | 0.018 (0.022) | 0.042 (0.017)** |
% Revenue: Other | −0.004 (0.009) | 0.004 (0.009) | 0.009 (0.006) |
Constant | −0.15 (0.357) | −0.134 (0.396) | −1.019 (0.201)*** |
†Omitted: No physician PR available.
‡Omitted: Owned by physicians in practice.
§Omitted: % Revenue from commercial payers.
*p < .05, **p < .01, ***p < .001.
Table 4 suggests that the POCMI scores of those practices that are subject to both PR and financial incentives tied to quality performance are, on average, about 10 points higher than those of practices that are subject to only one or the other. In other words, there is a significant joint effect of having both PR and financial incentives above and beyond having just one of them. As stated earlier, the presence of community-level public reports that contain individual-physician-level information significantly influences the likelihood of a practice being subject to both incentives.
Table 4.
 | Second Stage | First Stage |
---|---|---|
 | POCMI | Both PR and Income Incentives |
Practice incentives tied to quality reports | ||
Both PR and financial incentive | 10.12 (2.456)*** | |
Community-level PR activity | ||
% Pop in private HP with quality reports | 0.003 (0.004) | |
Length of physician PR >4 years | −0.579 (0.088)*** | |
PR contents (omitted: no physician PR available) | ||
PR contain quality info at individual-physician level | 0.67 (0.173)*** |
PR do not contain info at individual-physician level | −0.034 (0.173) |
Practice characteristics | |
No. of physicians in the practice | −0.062 (0.118) | 0.005 (0.019) |
Specialist-only practice | −2.482 (2.376) | −0.195 (0.276) |
Practice ownership† | ||
Owned by larger medical group | 0.556 (1.463) | −0.272 (0.261) |
Owned by hospital | −1.192 (0.959) | 0.022 (0.194) |
Owned by nonphysician managers | −9.231 (2.372)*** | 4.467 (0.86)*** |
Owned by other entities | −2.747 (2.31) | 0.527 (0.357) |
Patient characteristics | ||
% Patients with limited English | 0.018 (0.045) | −0.004 (0.007) |
% Patients who are black | 0.021 (0.012)* | 0.004 (0.003) |
% Patients who are Hispanic | 0.055 (0.056) | 0.012 (0.01) |
Revenue source‡ | ||
% Revenue: Medicare | 0.007 (0.023) | 0.001 (0.005) |
% Revenue: Medicaid | 0.043 (0.031) | −0.002 (0.005) |
% Revenue: Other insurance | 0.164 (0.067)** | −0.005 (0.01) |
% Revenue: Self-pay low Inc. | −0.041 (0.068) | −0.013 (0.011) |
% Revenue: Self-pay high Inc. | 0.002 (0.057) | −0.011 (0.012) |
% Revenue: Other | 0.034 (0.042) | −0.008 (0.01) |
Constant | 3.334 (1.382)** | −0.332 (0.386) |
Note: Sample is restricted to those practices that publicly report physician quality, receive additional income based on quality performance, or both (N = 489).
†Omitted: Owned by physicians in practice.
‡Omitted: % Revenue from commercial payers.
*p < .05, **p < .01, ***p < .001.
To test how our estimates may change under alternative models, we estimated three additional models: ordinary least squares (OLS), two-stage least squares (2SLS), and a treatment effects model with no instruments. In general, the estimated incentive effects increase as the model moves from the naïve OLS specification to the more complex ones, suggesting that our treatment effects model reduces the endogeneity that biases the naïve estimates downward. Selected results are shown in the Appendix.
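The intuition behind this comparison can be illustrated with a small simulation. The sketch below is hypothetical (simulated data and illustrative coefficients; our actual estimator is a nonlinear treatment effects model in the spirit of Maddala 1983, not plain 2SLS): when unobserved practice traits make exposure to incentives endogenous, the naïve OLS estimate of the incentive effect is biased downward, while instrumenting exposure with a community-level variable recovers the true effect.

```python
import numpy as np

# Hypothetical simulation: variable names and magnitudes are illustrative only.
rng = np.random.default_rng(42)
n = 20_000

u = rng.normal(size=n)          # unobserved practice traits (e.g., QI culture)
z = rng.normal(size=n)          # instrument: community-level PR activity
v = rng.normal(size=n)
d = (0.8 * z - 0.8 * u + v > 0).astype(float)   # exposed to incentives?
y = 2.0 * d + 1.5 * u + rng.normal(size=n)      # CMP index; true effect = 2.0


def slope(x, y):
    """OLS slope of y on x (with an intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]


naive = slope(d, y)             # biased downward here: Cov(d, u) < 0

# Two-stage least squares: first stage predicts exposure from the instrument,
# second stage regresses the outcome on the first-stage fitted values.
Z = np.column_stack([np.ones(n), z])
d_hat = Z @ np.linalg.lstsq(Z, d, rcond=None)[0]
iv = slope(d_hat, y)

print(f"true effect 2.0 | OLS estimate {naive:.2f} | 2SLS estimate {iv:.2f}")
```

This mirrors the pattern described above: the estimated incentive effect rises as the model moves from the naïve OLS specification toward instrument-based specifications.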
Discussion
Several findings from our study merit particular discussion. First, the incremental effect of PR among practices engaged in some form of quality reporting is not statistically significant: although the estimated impact is large, so is the standard error around the estimate. This indicates wide variation among physician practices in CMP use in response to PR; some practices respond strongly and positively, while others do not. Only when PR is coupled with financial incentives tied to improved quality performance is there a significant and consistent association between PR and use of CMPs. These findings suggest that, as a market-wide strategy, PR alone may not yet be sufficiently robust to elicit consistent changes in physician behavior across all practices. One explanation might be that such reporting has not had sufficient time to take effect. Compared with hospital quality reporting, reports on physician quality are much less common and typically cover a more limited set of quality measures (McNamara 2006). Other research indicates that awareness of PR remains relatively limited and that considerable work remains in engaging consumers; increasing the number and enhancing the content of public reports alone may not be enough to facilitate better-informed decision making on the part of health care consumers (Schneider and Epstein 1998; Marshall et al. 2000; Werner and Asch 2005).
Our results also indicate that, regardless of whether the quality information is publicly reported, those small- to medium-sized practices that receive additional income based on performance are more likely to use CMPs. This may indicate either that physician practices that participate in quality reporting require financial rewards for investing the additional time and effort to change clinical practices, or that practices that have care management processes in place are more likely to produce quality outcomes/measures consistent with pay for performance standards or other incentive programs. These results also give credence to the argument that changes in physician practice behavior require a concomitant change in financial incentives, especially given the inertia that must be overcome in transitioning from long-standing behaviors and the many other pressures that primary care physicians face. It is encouraging, therefore, that the group-level incentives measured in the current study seem to be linked to use of CMPs. It is also interesting to note that the size of financial incentives was, on average, quite small in our sample of physician practices, suggesting that even a small increase in financial incentives for practices may result in greater use of CMPs.
Perhaps the most interesting result in our study was the positive association between our measure of awareness/sensitivity to quality reports and the use of CMPs. This finding may indicate that physician practices need to “internalize” or use quality reporting results in their decision making for it to have an impact on practice behavior and quality improvement. Put another way, reporting data will not in and of itself produce the intended effects of improving quality and motivating physicians to take action to correct quality problems. Results of such reporting must actually be seen as a strategic and/or decision-making tool for it to have such effects. These results also indicate that a better understanding of what goes on inside the “black box” of physician practices in terms of change, decision making, and information processing may be as important to improving quality as the external policy approaches, such as PR of physician quality information or pay for performance. The fact that our results suggest that quality reporting is not being internalized by all physician groups also highlights the potential tension between various organizational goals in physician groups. To the extent that quality reporting and financial incentives matter to physician groups, and the use of such incentives is specifically targeted at increasing physician use of quality improvement techniques and quality outcomes, questions must be raised about the effects of such approaches on other priorities, such as improving value to the consumer, increasing revenue, or increasing organizational adaptability. Clearly, policy makers and physician group leaders must be cognizant of such multiple goals and competing priorities of physicians to avoid unanticipated and undesirable outcomes.
Finally, it is important to keep in mind that our sample consisted of small- to medium-sized practices, and that these practices have frequently been seen as lacking the resources, time, or experience to fully use and sustain the types of CMPs that are the focus of this study (Holmboe et al. 2005; Wolfson et al. 2009). This is borne out in the fact that the average POCMI score for sample practices was only 8.6 out of a possible 24, compared with a mean score of 11.1 for a sample of large practices examined in a previous study (Rittenhouse et al. 2010). Thus, we might speculate that the link between PR and practice behavior is more tenuous for small- to medium-sized practices and that these organizations may require additional hands-on support and resources for quality improvement, relative to their larger counterparts. Additional research is needed that examines the change process in these practices, including identification of challenges and strategies for overcoming barriers to change.
Several limitations of our study may temper the interpretations made above. First, and perhaps most important, the cross-sectional study design does not allow causal inferences about the relationships among PR, financial incentives, and CMP use. Although our theory suggests that PR and financial incentives influence the use of CMPs, technically the reverse may be true. While our empirical method attempts to address this reverse-causality bias, its validity critically depends on how well the assumptions of the two-stage, IV model were met. Because the treatment effects model is nonlinear, identification depends on the functional form as well as on the IVs chosen. While we believe that the IVs used in our model are theoretically justified and reasonable, there is no clear way of testing the validity of these assumptions. In addition, including market fixed effects in our model is not feasible because the market-level instruments would be perfectly collinear with the market dummy variables.
Second, our analysis considers only general measures of both PR and financial incentives. Data limitations precluded analysis of the specific provisions of PR, the quality of the data used in these reports, or whether individual-physician financial incentives were used in conjunction with group-level incentives for quality. Future investigations should examine, in more granular fashion, the extent to which quality improvement varies as a function of the nature and scope of PR or financial incentives. However, given that variation in CMPs exists, and that our results were obtained using more general measures of financial incentives, PR, and physician practices' use of quality reports, our tests might be considered conservative. Third, we focused primarily on the role of information/quality reporting and financial incentives as motivation for quality improvement. While such factors may be important determinants of quality improvement, we make no claim that they are the only factors involved. Especially important to consider are nonfinancial motivators such as professional pride, peer competition, self-image, and organizational identification (Dukerich, Golden, and Shortell 2002).
Finally, even though we utilized observations from a nationally representative sample of small- to medium-sized practices that provide care to chronically ill patients, our analytic sample itself was not randomly drawn from this larger sample. Our results, therefore, may not generalize to the original population.
Conclusion
As CMS and other national payers move forward on the PR path, the value of developing or maintaining community-based PR of physician performance is likely to be questioned. In particular, health plans may find fewer reasons to continue their financial support for these local efforts, and some physicians may view the submission of data for development of local PR as an unnecessary expense if they see these reports as duplicative of CMS reporting efforts. The case for community-level reports then could rest, to a much greater extent than it does now, on the ability of the sponsoring organizations to innovate in performance measurement and report dissemination, and to engage local stakeholders, both consumers and physicians, in using physician quality data to inform decisions and motivate quality improvement. Future research clearly needs to focus on investigating the appropriate mix and type of incentive arrangements and quality reporting in physician groups.
Acknowledgments
Joint Acknowledgment/Disclosure Statement: Support for this study was provided by the Robert Wood Johnson Foundation and the Aligning Forces for Quality Initiative.
Disclosures: None.
Disclaimers: None.
SUPPORTING INFORMATION
Additional supporting information may be found in the online version of this article:
Appendix SA1: Author Matrix.
Appendix SA2: Sample Characteristic by AF4Q Community.
Appendix SA3: Estimates of Incentive Effects by Model Specification.
Please note: Wiley-Blackwell is not responsible for the content or functionality of any supporting materials supplied by the authors. Any queries (other than missing material) should be directed to the corresponding author for the article.
References
- Alexander JA, Hearld L, Hasnain-Wynia R, Christenson J, Martsolf G. “Consumer Trust in Sources of Provider Quality Information”. Medical Care Research and Review. 2011;68(4):421–40. doi: 10.1177/1077558710394199.
- Barr JK, Bernard SL, Sofaer S, Giannotti TE, Lenfestey NF, Miranda DJ. “Physicians' Views on Public Reporting of Hospital Quality Data”. Medical Care Research and Review. 2008;65(6):655–73. doi: 10.1177/1077558708319734.
- Beaulieu ND, Horrigan DR. “Putting Smart Money to Work for Quality Improvement”. Health Services Research. 2005;40(5 Pt 1):1318–34. doi: 10.1111/j.1475-6773.2005.00414.x.
- de Brantes FS, D'Andrea BG. “Physicians Respond to Pay-for-Performance Incentives: Larger Incentives Yield Greater Participation”. American Journal of Managed Care. 2009;15(5):305–10.
- Campion FX, Larson LR, Kadlubek PJ, Earle CC, Neuss MN. “Advancing Performance Measurement in Oncology: Quality Oncology Practice Initiative Participation and Quality Outcomes”. Journal of Oncology Practice. 2011;7(3 suppl):31s–5s. doi: 10.1200/JOP.2011.000313.
- Casalino L, Gillies RR, Shortell SM, Schmittdiel JA, Bodenheimer T, Robinson JC, Rundall T, Oswald N, Schauffler H, Wang MC. “External Incentives, Information Technology, and Organized Processes to Improve Health Care Quality for Patients with Chronic Disease”. Journal of the American Medical Association. 2003;289(4):434–41. doi: 10.1001/jama.289.4.434.
- Casalino L, Alexander GC, Jin L, Konetzka RT. “General Internists' Views on Pay-for-Performance and Public Reporting of Quality Scores: A National Survey”. Health Affairs. 2007;26(2):492–9. doi: 10.1377/hlthaff.26.2.492.
- Christianson JB, Volmar KM, Alexander J, Scanlon DP. “A Report Card on Provider Report Cards: Current Status of the Health Care Transparency Movement”. Journal of General Internal Medicine. 2010;25(11):1235–41. doi: 10.1007/s11606-010-1438-2.
- Cutler TW, Palmieri J, Khalsa M, Stebbins M. “Evaluation of the Relationship between a Chronic Disease Care Management Program and California Pay-for-Performance Diabetes Care Cholesterol Measures in One Medical Group”. Journal of Managed Care Pharmacy. 2007;13(7):578–88. doi: 10.18553/jmcp.2007.13.7.578.
- Dukerich JM, Golden BR, Shortell SM. “Beauty Is in the Eye of the Beholder: The Impact of Organizational Identification, Identity, and Image on the Cooperative Behaviors of Physicians”. Administrative Science Quarterly. 2002;47(3):507–33.
- Fisher ES. “Paying for Performance—Risks and Recommendations”. New England Journal of Medicine. 2006;355(18):1845–7. doi: 10.1056/NEJMp068221.
- Gagliardi AR, Brouwers MC, Finelli A, Campbell CE, Marlow BA, Silver IL. “Physician Self-Audit: A Scoping Review”. The Journal of Continuing Education in the Health Professions. 2011;31(4):258–64. doi: 10.1002/chp.20138.
- Gavagan TF, Du H, Saver BG, Adams GJ, Graham DM, McCray R, Goodrick GK. “Effect of Financial Incentives on Improvement in Medical Quality Indicators for Primary Care”. Journal of the American Board of Family Medicine. 2010;23(5):622–31. doi: 10.3122/jabfm.2010.05.070187.
- Gilbody S, Whitty P, Grimshaw J, Thomas R. “Educational and Organizational Interventions to Improve the Management of Depression in Primary Care: A Systematic Review”. Journal of the American Medical Association. 2003;289(23):3145–51. doi: 10.1001/jama.289.23.3145.
- Glickman SW, Boulding W, Roos JMT, Staelin R, Peterson ED, Schulman KA. “Alternative Pay-for-Performance Scoring Methods: Implications for Quality Improvement and Patient Outcomes”. Medical Care. 2009;47(10):1062–8. doi: 10.1097/MLR.0b013e3181a7e54c.
- Greene S, Nash B. “Pay for Performance: An Overview of the Literature”. American Journal of Medical Quality. 2009;24:140. doi: 10.1177/1062860608326517.
- Hafner JM, Williams SC, Koss RG, Tschurtz BA, Schmaltz SP, Loeb JM. “The Perceived Impact of Public Reporting Hospital Performance Data: Interviews with Hospital Staff”. International Journal for Quality in Health Care. 2011;23(6):697–704. doi: 10.1093/intqhc/mzr056.
- Harris KM, Buntin MB. “Choosing a Health Care Provider: The Role of Quality Information”. Princeton, NJ: Robert Wood Johnson Foundation; 2008. Research Synthesis Report No. 14.
- Hartig JR, Allison J. “Physician Performance Improvement: An Overview of Methodologies”. Clinical and Experimental Rheumatology. 2007;25(6 suppl 47):50–4.
- Hibbard JH, Stockard J, Tusler M. “Does Publicizing Hospital Performance Stimulate Quality Improvement Efforts?”. Health Affairs. 2003;22(2):84–94. doi: 10.1377/hlthaff.22.2.84.
- Holmboe E, Kim N, Cohen S, Curry M, Elwell A, Petrillo M, Meehan T. “Primary Care Physicians, Office-Based Practice and the Meaning of Quality Improvement: A Qualitative Study”. American Journal of Medicine. 2005;118:917–22. doi: 10.1016/j.amjmed.2005.05.015.
- Li R, Casalino L, Simon J, Schmittdiel J, Bodenheimer T, Shortell SM, Gillies RR. “Organizational Factors Affecting the Adoption of Diabetes Care Management Processes in Physician Organizations”. Diabetes Care. 2004;27(10):2312–6. doi: 10.2337/diacare.27.10.2312.
- Lindenauer PK, Remus D, Roman S, Rothberg MB, Benjamin EM, Ma A, Bratzler DW. “Public Reporting and Pay for Performance in Hospital Quality Improvement”. New England Journal of Medicine. 2007;356(5):486–96. doi: 10.1056/NEJMsa064964.
- Maddala GS. Limited-Dependent and Qualitative Variables in Econometrics. Cambridge, England: Cambridge University Press; 1983.
- Marshall MN, Shekelle PG, Leatherman S, Brook RH. “The Public Release of Performance Data: What Do We Expect to Gain?”. Journal of the American Medical Association. 2000;283(14):1866–74. doi: 10.1001/jama.283.14.1866.
- McNamara P. “Provider-Specific Report Cards: A Tool for Health Sector Accountability in Developing Countries”. Health Policy and Planning. 2006;21(2):101–9. doi: 10.1093/heapol/czj009.
- Mehrotra A, Pearson SD, Coltin KL, Kleinman KP, Singer JA, Rabson B, Schneider EC. “The Response of Physician Groups to P4P Incentives”. American Journal of Managed Care. 2007;13(5):249–55.
- O'Brien JM, Corrigan J, Reitzner JB, Moores LK, Metersky M, Hyzy RC, Baumann MH, Tooker J, Rosof BM, Burstin H, Pace K. “Will Performance Measurement Lead to Better Patient Outcomes? What Are the Roles of the National Quality Forum and Medical Specialty Societies?”. Chest. 2012;141(2):300–7. doi: 10.1378/chest.11-1942.
- Parker C, Schwamm LH, Fonarow GC, Smith EE, Reeves MJ. “Stroke Quality Metrics: Systematic Reviews of the Relationships to Patient-Centered Outcomes and Impact of Public Reporting”. Stroke. 2012;43(1):155–62. doi: 10.1161/STROKEAHA.111.635011.
- Peterson LA, Woodard LD, Urech T, Daw C, Sookanan S. “Does Pay-for-Performance Improve the Quality of Health Care?”. Annals of Internal Medicine. 2006;145(4):265–72. doi: 10.7326/0003-4819-145-4-200608150-00006.
- Pierce RG, Bozic KJ, Bradford DS. “Pay for Performance in Orthopaedic Surgery”. Clinical Orthopaedics and Related Research. 2007;457:87–95. doi: 10.1097/BLO.0b013e3180399418.
- Renders CM, Valk GD, Griffin SJ, Wagner EH, Eijk Van JT, Assendelft WJ. “Interventions to Improve the Management of Diabetes in Primary Care, Outpatient, and Community Settings: A Systematic Review”. Diabetes Care. 2001;24:1821–33. doi: 10.2337/diacare.24.10.1821.
- Rhoads KF, Konety BM, Dudley RA. “Performance Measurement, Public Reporting, and Pay-for-Performance”. Urologic Clinics of North America. 2009;36(1):37–48. doi: 10.1016/j.ucl.2008.08.003.
- Rittenhouse DR, Shortell SM, Gillies RR, Casalino LP, Robinson JC, McCurdy RK, Siddique J. “Improving Chronic Illness Care: Findings from a National Study of Care Management Processes in Large Physician Practices”. Medical Care Research and Review. 2010;67(3):301–20. doi: 10.1177/1077558709353324.
- Rittenhouse D, Casalino L, Shortell S, Alexander JA. “Small and Medium-Size Physician Practices Use Few Patient-Centered Medical Home Processes”. Health Affairs. 2011;30(8):1575–85. doi: 10.1377/hlthaff.2010.1210.
- Robinson JC, Casalino LP, Gillies RR, Rittenhouse DR, Shortell SM, Fernandes-Taylor S. “Financial Incentives, Quality Improvement Programs, and the Adoption of Clinical Information Technology”. Medical Care. 2009a;47(4):411–7. doi: 10.1097/MLR.0b013e31818d7746.
- Robinson JC, Shortell SM, Rittenhouse DR, Fernandes-Taylor S, Gillies R, Casalino LP. “Quality-Based Payment for Medical Groups and Individual Physicians”. Inquiry. 2009b;46:172–81. doi: 10.5034/inquiryjrnl_46.02.172.
- Rodriguez HP, von Glahn T, Elliott MN, Rogers WH, Safran DG. “The Effect of Performance-Based Financial Incentives on Improving Patient Care Experiences: A Statewide Evaluation”. Journal of General Internal Medicine. 2009;24(12):1281–8. doi: 10.1007/s11606-009-1122-6.
- Rosenthal MB, Frank RG. “What Is the Empirical Basis for Paying for Quality in Health Care?”. Medical Care Research and Review. 2006;62(2):135–57. doi: 10.1177/1077558705285291.
- Rosenthal MB, Fernandopulle R, Song HR, Landon B. “Paying for Quality: Providers' Incentives for Quality Improvement”. Health Affairs. 2004;23(2):127–41. doi: 10.1377/hlthaff.23.2.127.
- Schneider EC, Epstein AM. “Use of Public Performance Reports: A Survey of Patients Undergoing Cardiac Surgery”. Journal of the American Medical Association. 1998;279(20):1638–42. doi: 10.1001/jama.279.20.1638.
- Shaller D, Sofaer S, Findlay SD, Hibbard JH, Lansky D, Delbanco S. “Consumers and Quality-Driven Health Care: A Call to Action”. Health Affairs. 2003;22(2):95–101. doi: 10.1377/hlthaff.22.2.95.
- Shortell SM, Gillies R, Siddique J, Casalino LP, Rittenhouse D, Robinson JC, McCurdy RK. “Improving Chronic Illness Care: A Longitudinal Cohort Analysis of Large Physician Organizations”. Medical Care. 2009;47(9):932–9. doi: 10.1097/MLR.0b013e31819a621a.
- Tsai AC, Morton SC, Mangione CM, Keeler EB. “A Meta-Analysis of Interventions to Improve Care for Chronic Illnesses”. American Journal of Managed Care. 2005;11(8):478–88.
- Wagner EH, Austin BT, Davis C, Hindmarsh M, Schaefer J, Bonomi A. “Improving Chronic Illness Care: Translating Evidence into Action”. Health Affairs. 2001;20(6):64–78. doi: 10.1377/hlthaff.20.6.64.
- Weingarten S, Henning J, Badamgarav E, Knight K, Hasselblad V, Gano A, Ofman JJ. “Interventions Used in Disease Management Programmes for Patients with Chronic Illness—Which Ones Work? Analysis of Published Reports”. British Medical Journal. 2002;325(7370):925–42. doi: 10.1136/bmj.325.7370.925.
- Welch G, Allen NA, Zagarins SE, Stamp KD, Bursell S-E, Kedziora RJ. “Comprehensive Diabetes Management Program for Poorly Controlled Hispanic Type 2 Patients at a Community Health Center”. The Diabetes Educator. 2011;37(5):680–8. doi: 10.1177/0145721711416257.
- Werner RM, Asch DA. “The Unintended Consequences of Publicly Reporting Quality Information”. Journal of the American Medical Association. 2005;293(10):1239–44. doi: 10.1001/jama.293.10.1239.
- Wolfson D, Bernabeo E, Leas B, Sofaer S, Pawlson G, Pillittere D. “Quality Improvement in Small Office Settings: An Examination of Successful Practices”. BMC Family Practice. 2009;10:1. doi: 10.1186/1471-2296-10-14.
- Young G, Meterko M, White B, Bokhour B, Sautter K, Berlowitz D, Burgess J., Jr “Physician Attitudes toward Pay-for-Quality Programs: Perspectives from the Front Line”. Medical Care Research and Review. 2007a;64:331–43. doi: 10.1177/1077558707300091.
- Young G, Meterko M, Beckman H, Baker E, White B, Sautter K, Greene R, Curtin K, Bokhour B, Berlowitz D, Burgess J., Jr “Effects of Paying Physicians Based on Their Relative Performance for Quality”. Journal of General Internal Medicine. 2007b;22:872–6. doi: 10.1007/s11606-007-0185-5.