Author manuscript; available in PMC: 2019 Jun 24.
Published in final edited form as: N Engl J Med. 2018 Dec 6;379(23):2193–2195. doi: 10.1056/NEJMp1806260

Toward Precision Policy — The Case of Cardiovascular Care

Rishi K. Wadhera, Deepak L. Bhatt
PMCID: PMC6589828; NIHMSID: NIHMS1036362; PMID: 30575456

The U.S. health care system is in the midst of a transition toward delivery of high-value rather than volume-based health care. As part of this shift, policies that offer incentives to physicians and hospitals to deliver better-quality care at lower cost are being implemented nationwide. Cardiovascular conditions and procedures, which are both common and expensive, have frequently been targeted by these efforts. Given that such initiatives were rolled out with little evidence to support their efficacy, it is not surprising that many have failed to improve the quality of care or patient outcomes. Unfortunately, some such efforts have also had unintended consequences.

Pay-for-performance initiatives, for example, were pushed by policymakers as a means to improve care for cardiovascular conditions, among others, despite minimal evidence that they were effective. These programs have not reduced the rates of death due to acute myocardial infarction or heart failure, even though these mortality measures are used to evaluate hospitals’ performance. More broadly, patient care experience and performance on other quality measures have also failed to improve. Instead, these programs have levied disproportionate financial penalties on hospitals and physicians who tend to care for vulnerable populations. This effect has prompted policymakers to ask whether we are really paying for performance or simply accentuating disparities in care.

Concerns have also recently arisen regarding the Hospital Readmissions Reduction Program (HRRP). The HRRP financially penalizes hospitals that have higher-than-expected rates of readmission within 30 days after discharge for certain conditions. Soon after the HRRP was implemented, readmission rates nationwide began to fall, and it was assumed that the reason was improved care. A recent study, however, suggested that implementation of the HRRP may have been associated with an increase in deaths after discharge for heart failure.1 Again, a policy intended to improve care may instead have resulted in harm.

The story is no different for public report cards on outcomes for procedures such as percutaneous coronary intervention (PCI). Public-reporting programs in various specialties are now proliferating nationally, driven by the rationale that public disclosure provides an incentive for physicians to improve quality and enables patients to make informed choices. The evidence to date, however, does not support these assumptions. Public report cards for PCI have not improved outcomes, and patients rarely use them. Rather, robust data suggest that report cards lead to risk aversion among physicians treating critically ill patients.

Our experience with pay-for-performance, the HRRP, and public report cards thus teaches us that even well-intentioned policy initiatives, when not carefully tested, can fall short of their objectives and have unintended repercussions. Given that the care of many patients is at stake, why are these policies treated as evolving large-scale experiments? We believe it’s imperative that before policies are implemented widely, rigorous studies be conducted to determine whether they achieve their goals.2

An empirical approach to policy might be achieved in a few ways. “Policy trials” could use a cluster randomization strategy — that is, randomly allocating similar hospitals or outpatient practices in a geographic area either to be exposed to a new policy initiative or to serve as a control group.3 Another, simpler approach would be to assess the impact of a policy initiative on a randomly selected, but representative, sample of hospitals around the United States. Before it was canceled, for example, the mandatory bundled-payment program for cardiac care was to be evaluated in this manner.
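
To make the allocation step concrete, the following is a minimal sketch in Python. The hospital names, regions, and simple two-arm split within each region are illustrative assumptions for exposition, not features of any actual program; a real policy trial would also match clusters on size, case mix, and patient population.

```python
import random

def cluster_randomize(hospitals, seed=2018):
    """Allocate whole hospitals (clusters) to a policy arm or a
    control arm, stratified by region so the arms stay comparable."""
    rng = random.Random(seed)
    by_region = {}
    for name, region in hospitals:
        by_region.setdefault(region, []).append(name)
    assignment = {}
    for members in by_region.values():
        rng.shuffle(members)
        half = len(members) // 2
        for hospital in members[:half]:
            assignment[hospital] = "policy"
        for hospital in members[half:]:
            assignment[hospital] = "control"
    return assignment

# Hypothetical clusters; a real trial would stratify on more than geography.
hospitals = [
    ("Hospital A", "Northeast"), ("Hospital B", "Northeast"),
    ("Hospital C", "Midwest"), ("Hospital D", "Midwest"),
]
print(cluster_randomize(hospitals))
```

Because the unit of randomization is the hospital rather than the patient, outcome analyses in such a trial would need to account for within-cluster correlation.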

In addition to taking an evidence-first approach to policy implementation, we can better target policies toward specific diseases, procedures, or provider groups. Adaptive trial designs could be leveraged to identify and focus on subgroups that benefit most from a policy intervention and simultaneously to identify subgroups that may be harmed.4 These data, in turn, could be used to encourage a more precise — rather than indiscriminate — rollout of policy initiatives.

For example, the HRRP might have been evaluated using a cluster randomized trial with pre-specified disease groups (e.g., acute myocardial infarction, heart failure, and pneumonia). When it was observed that readmission penalties seemed to improve care for patients hospitalized with myocardial infarction but were potentially associated with an increase in post-discharge mortality among patients with heart failure, an adaptive trial could have stopped enrolling patients with heart failure but continued evaluating the other disease groups.
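
A minimal sketch of that interim-look logic, in Python, appears below. The event counts, the pooled two-proportion z-test, and the fixed 1.96 harm boundary are illustrative assumptions chosen for clarity; an actual adaptive design would rely on prespecified interim analyses with adjusted stopping boundaries.

```python
from math import sqrt

def interim_check(groups, harm_z=1.96):
    """Drop any disease group whose policy arm shows significantly
    higher post-discharge mortality than its control arm."""
    continuing = []
    for g in groups:
        p_policy = g["deaths_policy"] / g["n_policy"]
        p_control = g["deaths_control"] / g["n_control"]
        # Pooled two-proportion z-test for excess mortality in the policy arm.
        pooled = (g["deaths_policy"] + g["deaths_control"]) / (g["n_policy"] + g["n_control"])
        se = sqrt(pooled * (1 - pooled) * (1 / g["n_policy"] + 1 / g["n_control"]))
        z = (p_policy - p_control) / se if se > 0 else 0.0
        if z > harm_z:
            print(f"Stop enrolling {g['name']} (z = {z:.2f}: possible harm)")
        else:
            continuing.append(g)
    return continuing

# Hypothetical interim data mirroring the scenario described above.
groups = [
    {"name": "acute MI", "deaths_policy": 80, "n_policy": 1000,
     "deaths_control": 90, "n_control": 1000},
    {"name": "heart failure", "deaths_policy": 130, "n_policy": 1000,
     "deaths_control": 95, "n_control": 1000},
    {"name": "pneumonia", "deaths_policy": 88, "n_policy": 1000,
     "deaths_control": 85, "n_control": 1000},
]
still_enrolled = interim_check(groups)
print("Continue:", [g["name"] for g in still_enrolled])
```

In this toy example, the heart-failure group crosses the harm boundary and stops enrolling, while the myocardial-infarction and pneumonia groups continue to the next interim look.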

Public reporting of PCI outcomes might have been studied in a similar fashion, with pre-specified clinical subgroups (e.g., non–ST-segment elevation myocardial infarction, ST-segment elevation myocardial infarction, and cardiogenic shock). Once it became evident that physicians exposed to reporting were less likely to perform PCI in critically ill patients with cardiogenic shock — who may stand to gain the most from a high-risk procedure — evaluation of this subgroup could have ceased. If this adaptive approach had been used before policy implementation, initial public reports would certainly have excluded patients with cardiogenic shock.

Why don’t we hold policies that affect the care of innumerable patients to the same evidence-based standards as medications, to ensure before widespread rollout that they are effective and do not cause harm?

The use of trials to evaluate policy is not unprecedented. The Oregon experiment, for example, one of the largest and best-conducted policy trials, examined the effects of expanded Medicaid coverage on measures of health status.5 Policies that aspire to improve care quality and outcomes need to be evaluated and tailored with similar rigor. Though “natural” experiments with health policy such as the Oregon experiment are rare, policymakers could use small-scale, adaptive trials as a pragmatic approach to gather data efficiently, improve and innovate, and rapidly scale up what works or quickly terminate what does not.

Who should lead the effort to study policies before broad rollout? The Center for Medicare and Medicaid Innovation (CMMI), which has broad authority to experiment with health care policies, might be best equipped to do so. Since conducting policy trials is not without challenges,3 particularly in a capricious political climate, CMMI would benefit from the full support of the Department of Health and Human Services. Participation in policy trials would need to be mandatory, given that CMMI’s efforts to generate meaningful data have thus far been limited by voluntary initiatives that may be prone to selection bias and thus not generalizable. Though hospitals and physicians may perceive mandatory participation in policy trials as unfair and cumbersome, modest bonus payments may alleviate some resistance.

Moreover, if we are serious about improving quality of care and patient outcomes — whether mortality, readmissions, or patient-centered outcomes such as freedom from angina or heart failure symptoms — we have a responsibility to ensure that policies are grounded in evidence.2 Surely hospitals and physicians would agree that thorough evaluation before widespread implementation is better than the status quo, in which insufficiently tested initiatives are rolled out nationally. Many of these programs also impose substantial administrative and financial burdens. U.S. physician practices spend more than $15 billion each year to report on quality measures, despite the fact that few of them are validated measures of performance.

In an era when the health care landscape is evolving from a fee-for-service to a value-based payment architecture, we believe that if we are to offer incentives for the delivery of high-value care, it is crucial to shift from idea-based to evidence-based health policy. Rather than using policy initiatives as a blunt tool to drive broad change, we must first ensure that they actually improve patient care and tailor them to maximize benefits and mitigate harm. As we move toward precision medicine and the delivery of diagnostic and therapeutic advances individualized to patients, we can also usher in an era of “precision policy” customized to specific disease processes, procedures, or provider groups and thereby enhance care and value within our health system.

Footnotes

Disclosure forms provided by the authors are available at NEJM.org.

Contributor Information

Rishi K. Wadhera, Brigham and Women’s Hospital Heart and Vascular Center and Harvard Medical School, Boston; Richard and Susan Smith Center for Outcomes Research in Cardiology, Division of Cardiology, Beth Israel Deaconess Medical Center and Harvard Medical School, Boston

Deepak L. Bhatt, Brigham and Women’s Hospital Heart and Vascular Center and Harvard Medical School, Boston

References

1. Gupta A, Allen LA, Bhatt DL, et al. Association of the Hospital Readmissions Reduction Program implementation with readmission and mortality outcomes in heart failure. JAMA Cardiol 2018;3:44–53.
2. Baicker K, Chandra A. Evidence-based health policy. N Engl J Med 2017;377:2413–5.
3. Newhouse JP, Normand S-LT. Health policy trials. N Engl J Med 2017;376:2160–7.
4. Bhatt DL, Mehta C. Adaptive designs for clinical trials. N Engl J Med 2016;375:65–74.
5. Baicker K, Taubman SL, Allen HL, et al. The Oregon experiment — effects of Medicaid on clinical outcomes. N Engl J Med 2013;368:1713–22.
