Journal of Oncology Practice. 2010 Mar;6(2):59–60. doi: 10.1200/JOP.091084

Commentary: Medicare's 2006 Oncology Demonstration Project: Lost in Translation?

Peter B. Bach
PMCID: PMC2835482  PMID: 20592776

In 2005 and 2006, the Centers for Medicare & Medicaid Services (CMS) conducted demonstration projects that relied on the use of the traditional claims system to gather additional information from physicians' offices about care processes and patient outcomes. In 2005, the demonstration project focused on symptom reporting by patients. In 2006, the demonstration project focused on care process and patient status reporting by treating physicians. In both cases, the demonstration projects were vanguards of what has become an increasing emphasis on data collection in physicians' offices in relation to quality measurement and payment rationalization.

Many policymakers believe that more data are needed from clinical encounters. Most prominently, an effort to measure care quality and, in concert, pay physicians differentially on the basis of the quality of care they provide will necessarily require more detail on what care is delivered to which patients. In oncology, basic clinical information, such as disease stage, is not captured in claim forms that are submitted for payment to Medicare and private payers (only the major disease groupings are provided). Neither are patient outcomes of any kind routinely captured and reported, whether the outcomes are clinical (such as disease recurrence or progression) or patient centered (such as symptoms of distress, pain, or other measures of quality of life).

Over the long term, most policymakers believe that this additional information will be captured through electronic health records (EHRs) populated during the course of routine care. These records would allow detailed clinical information, assignable to the responsible physician, to move freely without creating additional work for clinicians, and through these EHR platforms, quality measures, care processes, and outcomes could be determined seamlessly. But in the near term, EHRs are not widely in place, which is why the key innovation of the 2005 and 2006 demonstration projects was their piggybacking on the medical claims transmission process to capture additional clinical data.

In this issue, Doherty et al1 report on their evaluation of the 2006 demonstration project, providing readers an opportunity to consider whether the intended goals of the project were achieved or whether there were serious failings. In my view, much of what they found is worrisome. There are signs that the data submitted were not necessarily what was sought, which raises concerns about the viability of programs that depend on physicians entering additional clinical information into data repositories, information that must be valid and accurate for the many purposes envisioned for it.

For instance, Doherty et al1 found that various CMS code descriptors had been, through a back-office version of the telephone game, rewritten and abbreviated to the point at which their original meaning was not preserved. Likewise, physicians often reported that participation in the demonstration increased their workload and that of their office staff. In interviews about the guideline adherence queries, physicians expressed widely varying views on what guideline adherence means and described different ways of determining whether they were actually following guidelines.

I confess that these results surprised me on all dimensions. I did not expect the demonstration designed by CMS—with which I was involved from the point of its conceptualization through this evaluation—to add meaningful work to the practice of office-based oncology. Likewise, I did not anticipate that practicing oncologists would be passive regarding the accuracy of the staging and disease status information they submitted on claims, even though the accuracy of that information was not a criterion for payment. In addition, in policy circles, the notion of guideline adherence is well entrenched, whereas the analysis by Doherty et al1 suggests that this is not the case in practicing physicians' offices.

Perhaps in an ideal world, there would have been no gap between what CMS (and I) expected and what Doherty et al1 actually found. With EHRs, this may all be easier. But what I have learned is that policymakers should be extraordinarily cautious about adding more data collection tasks to a crowded clinical workflow and should also assume that the key objectives of that data collection endeavor will not be treasured as much by the practicing physicians who are asked to gather and submit data as they are by the individuals who create the programs and look forward to receiving those same data.

In this observation, I do not mean to impugn the motives or objectives of either well-meaning policymakers or clinically dedicated practitioners. Rather, I intend to remind myself and others that well-meaning steps toward improving our health care system can get lost in translation from a policymaker's desk to a patient's bedside.

Author's Disclosures of Potential Conflicts of Interest

The author indicated no potential conflicts of interest.

Reference

1. Doherty J, Tanamor M, Feigert J, et al. Oncologists' experience in reporting cancer staging and guideline adherence: Lessons from the 2006 Medicare Oncology Demonstration. J Oncol Pract. 2010;6:56–59. doi: 10.1200/JOP.091083.
