Since the passage of the Affordable Care Act in 2010, there has been an increasing push to link physician and other health care provider reimbursement to the quality and value of care delivered, rather than the pure fee-for-service model that dominated medical payments over the prior 2 decades.1 The Centers for Medicare and Medicaid Services (CMS), which provides nearly 40% of all health care reimbursements in the United States,2 has to date focused primarily on hospital-based quality measures. However, since 2017, CMS has outlined measures specific to various diseases in the ambulatory care setting as well.3 Many of these are derived from disease-specific quality measure sets published by medical societies such as the American Academy of Neurology (AAN).
The concept of improving quality of care through quality measures makes intuitive sense: standardizing care to ensure that physicians apply best practices to all patients. And yet, questions remain about the definition of high-quality, value-based care and the optimal methods to operationalize its assessment. Most quality measure sets are created by groups of topic experts. Although they carry the support of medical societies, there has been limited assessment of the effect of measure adherence on clinical practice or patient outcomes, raising questions about their validity and, ultimately, whether they should be linked to assessments of value and quality.
In this issue of Neurology: Clinical Practice, Martello et al.4 assess the relationship between quality measure adherence and a variety of clinical outcome measures in patients with Parkinson disease (PD). The authors were interested in physician adherence to the 2010 AAN PD quality measures, which outlined 10 measures specific to the care of patients with PD.5 They conducted a retrospective study of all patients with PD seen at the University of Maryland Movement Disorders Center in the first 3 months of 2013, reviewing patient medical records for documentation of quality measure adherence over the 1 year before the 2013 visit and for a variety of clinical outcome measures over the subsequent year (e.g., number of follow-up visits, number of phone calls, death, and change in depression and anxiety scales). The authors then evaluated the relationship between the total number of documented quality measures and each outcome and assessed for statistical significance.
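To make this type of analysis concrete, the following is a minimal, hypothetical sketch in Python of how one might correlate a per-patient adherence count with outcome measures. The data, column names, and the choice of Spearman rank correlation are illustrative assumptions only, not the authors' actual dataset or statistical methods.

```python
# Hypothetical sketch: correlating a per-patient count of documented AAN PD
# quality measures (0-10) with outcomes over the following year.
# All data and column names below are invented for illustration.
import pandas as pd
from scipy import stats

df = pd.DataFrame({
    # Number of the 10 quality measures documented in the prior year
    "adherence_count": [3, 7, 5, 9, 2, 8, 6, 4, 10, 1],
    # Outcomes over the subsequent year
    "followup_visits": [4, 2, 3, 2, 5, 2, 3, 4, 1, 5],
    "phone_calls":     [1, 4, 2, 5, 0, 4, 3, 2, 6, 1],
})

# Spearman rank correlation is one reasonable choice for an ordinal count;
# the published study's exact tests may differ.
for outcome in ["followup_visits", "phone_calls"]:
    rho, p = stats.spearmanr(df["adherence_count"], df[outcome])
    print(f"{outcome}: rho = {rho:.2f}, p = {p:.3f}")
```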
The article reports a number of interesting findings. First, measure adherence was highly dependent on site-specific clinical practices and on the complexity of the quality measure itself. For example, providers were compliant with annual assessment of cognition around 90% of the time, as this measure required assessment only once annually and cognitive screening was a protocolized part of routine practice at the University of Maryland Movement Disorders Center. Conversely, adherence was lowest for measures that required documentation at every visit, multipart quality measures (requiring completion of all parts), or measures that providers might consider but not necessarily document (e.g., annual diagnosis review). The authors suggest that clinician training or note templates may help to improve adherence to specific measures. Second, and most interestingly, the authors found limited correlation between quality measure adherence and clinical outcomes over 1 year. The only outcomes with a statistically significant association with measure adherence (decreased in-person visits and increased patient phone calls or emails) demonstrated weak correlations.
The authors appropriately address the study's limitations. One year of follow-up may be insufficient to assess the effects of quality measures on clinical outcomes. In addition, the single-site design and the observation that site-specific practices had a substantial impact on measure adherence limit the generalizability of the findings. The authors also weighted each quality measure equally in the correlation analysis, despite some measures being more complex or potentially more clinically impactful than others. Finally, compliance was defined as documentation of the quality measure, which may undercount measures that were reviewed during a visit but not documented or, conversely, overcount measures that were part of templated notes but not actually reviewed during the visit. The authors aptly point out that although this is a limitation, CMS also evaluates measure adherence through assessment of documentation.
While recognizing the limitations of this relatively small study, the article raises important questions about the impending focus on quality measure adherence and whether it is a valid assessment of high-quality care. Larger studies with longer follow-up periods are clearly necessary to validate quality measures and to assess the broader effects of quality measure adherence on clinical outcomes. In addition, a reconsideration of the methods used to assess and define quality of care seems appropriate. By relying on review of encounter notes to measure care quality, CMS is certain to improve providers' documentation of quality measure assessment; whether this translates into actual compliance or improved quality of care remains unclear. Particularly in a field mired in burnout, metricizing another aspect of a physician's work without evidence of added value risks wasting the precious resource of time. High-quality care is exactly what we should strive for with all our patients. But forced application of nonvalidated quality measures is unlikely to be the best method to achieve this, and physicians should demand that time-consuming activities translate into evidence-based value for our patients.
Footnotes
See page 58
Author contributions
C.G. Tarolli and R. Barbano: drafting and revising the manuscript.
Study funding
No targeted funding reported.
Disclosure
The authors report no disclosures relevant to this manuscript. Full disclosure form information provided by the authors is available with the full text of this article at Neurology.org/cp.
References
1. Value-based programs. 2019. Available at: cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/Value-Based-Programs/Value-Based-Programs.html. Accessed September 11, 2019.
2. NHE fact sheet. 2019. Available at: cms.gov/research-statistics-data-and-systems/statistics-trends-and-reports/nationalhealthexpenddata/nhe-fact-sheet.html. Accessed September 11, 2019.
3. Quality measures requirements: quality payment program. 2019. Available at: qpp.cms.gov/mips/quality-measures. Accessed September 11, 2019.
4. Martello J, Shulman LM, Barr E, Gruber-Baldini A, Armstrong MJ. Assessment of Parkinson disease quality measures on 12-month patient outcomes. Neurol Clin Pract 2019;10:58–64.
5. Cheng EM, Tonn S, Swain-Eng R, Factor SA, Weiner WJ, Bever CT Jr. Quality improvement in neurology: AAN Parkinson disease quality measures: report of the Quality Measurement and Reporting Subcommittee of the American Academy of Neurology. Neurology 2010;75:2021–2027.
