Introduction
In the 2015 Medicare Access and CHIP Reauthorization Act, the Centers for Medicare and Medicaid Services (CMS) began the transition to a value-based payment system through implementation of the Quality Payment Program (QPP) (1). Representing a paradigm shift away from traditional volume-based reimbursement, the QPP aims to link Medicare payment for provider services to the quality of those services. Participation in the QPP is through either the Merit-based Incentive Payment System (MIPS) or an Advanced Alternative Payment Model (1). The goal of MIPS is to drive improvement in processes and outcomes, use of information, and costs of health care (2). Performance is measured through provider-reported data in four domains: quality, improvement activities, promoting interoperability, and cost (2). On the basis of reported performance, providers earn a positive, negative, or neutral payment adjustment. Collecting, measuring, and reporting quality measure data to the CMS can be accomplished through claims data, electronic health record (EHR) data, a Qualified MIPS Registry, or a Qualified Clinical Data Registry (QCDR) (3).
Qualified Clinical Data Registries
Clinical data registries are observational databases that hold information about patients' health status, conditions, procedures, therapies, and outcomes (4). These data may be used to assess and improve the quality, efficiency, and safety of patient care. One type of clinical data registry is a QCDR, which is a CMS-approved entity that collects data for patient and disease tracking to foster improvement in the quality of care (3). A QCDR may be owned and managed by regional collaboratives or specialty societies, but it may not be owned by individuals or specialty groups. By allowing QCDRs to collect data from the EHRs of health care providers and transfer them directly to the CMS, QCDRs were intended to facilitate reporting clinical quality data to MIPS.
A QCDR differs from a qualified MIPS registry because it is not limited to MIPS measures but can also include custom measures. Although MIPS measures often have a broad focus, custom measures may be developed by medical specialties to more granularly reflect their clinical practice. Therefore, custom measures have greater potential to identify opportunities for quality and cost improvements in specialty-specific care. A single QCDR may submit up to 30 custom measures for review and approval by the CMS.
Kidney Quality Improvement Registry
The Kidney Quality Improvement Registry (KQIR), the only CMS-approved nephrology-specific QCDR owned by a specialty society, was launched by the Renal Physicians Association (RPA) in 2015 (5). The development of this registry was informed by participation in the National Quality Registry Network, which is a voluntary network of organizations operating registries that establishes leading registry practices and supports their development and use (4). The KQIR features a practice-based data network that longitudinally links nephrologists across the nephrology community and includes real-time customized continuous performance monitors, provider- and practice-level comparisons and benchmarking, performance gap analysis, links to quality improvement tools, and other improvement activities (5). In addition to satisfying the clinical data exchange measure of promoting interoperability and supporting MIPS measures, the QCDR is used to develop and test custom measures. In 2019, the KQIR supported 29 CMS-approved MIPS measures and 10 custom measures (5). These custom measures include novel measures that are not part of MIPS: angiotensin-converting enzyme inhibitor/angiotensin receptor blocker use, transplant referral, advance care planning, advance directives completion, timely information transmission to dialysis facilities, thrombectomy rates for fistulas and grafts, and peritoneal dialysis catheter infection and success rates. Hence, the KQIR allows nephrology providers to be measured by specific, relevant, and clinically meaningful measures.
Experience with Kidney Quality Improvement Registry
Several challenges emerged in creating the KQIR and supporting custom measures. First, the development and implementation of sound, meaningful custom performance measures required substantial resources and time. Per the CMS, custom measures must fulfill numerous criteria: address a gap in care, be a high-priority aspect of care with relevance to clinical practice, be scientifically rigorous, contain detailed numerator/denominator specifications, be feasible, and not otherwise be part of a MIPS measure (3). Additionally, to ensure that measurement leads to quality improvement, the KQIR sought to create measures that adhere to well-recognized quality measurement principles (6,7). Hence, assembling a technical expert panel, identifying data sources, conducting reliability and validity testing, and performing field testing were required. Underscoring the difficulty of meeting these principles, only 37% of QPP measures for ambulatory internal medicine appear to be valid (7).
Second, transferring data into the QCDR from provider records, analyzing the data, and submitting them to the CMS are arduous tasks. Data may be submitted to the QCDR in three ways: a direct data feed from a participating EHR, data uploads using a secure file transfer protocol, or manual entry using a self-service online web tool. These steps require business agreements with clinical practices and EHR vendors, a methodology for performance rate calculation, and a data validation plan. Recognizing the reported wide variation in accuracy across electronically reported measures (8), the CMS requires QCDRs to submit a detailed data validation report. Since the KQIR's inception, the RPA has worked with Premier Inc., which serves approximately 4000 hospitals and health systems and 165,000 other providers and organizations and provides platforms for numerous registries and QCDRs in MIPS. Two particular foci of this collaboration have been improving data quality and increasing the number of EHRs that interoperate with the KQIR.
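To illustrate what a numerator/denominator specification and performance rate calculation entail in practice, the following is a minimal, hypothetical sketch; the record fields, exclusion logic, and example values are illustrative assumptions, not the KQIR's or the CMS's actual specifications:

```python
from dataclasses import dataclass

@dataclass
class Encounter:
    """Hypothetical, minimal patient-encounter record for one quality measure."""
    patient_id: str
    eligible: bool       # meets the measure's denominator criteria
    excluded: bool       # meets a denominator exclusion (e.g., a contraindication)
    numerator_met: bool  # the measured action or outcome occurred

def performance_rate(encounters: list[Encounter]) -> float:
    """Performance rate = numerator cases / (eligible cases minus exclusions)."""
    denominator = [e for e in encounters if e.eligible and not e.excluded]
    if not denominator:
        return 0.0
    numerator = [e for e in denominator if e.numerator_met]
    return len(numerator) / len(denominator)

# Example: 4 eligible encounters, 1 excluded, 2 of the remaining 3 met the measure
sample = [
    Encounter("a", eligible=True, excluded=False, numerator_met=True),
    Encounter("b", eligible=True, excluded=True, numerator_met=False),
    Encounter("c", eligible=True, excluded=False, numerator_met=True),
    Encounter("d", eligible=True, excluded=False, numerator_met=False),
    Encounter("e", eligible=False, excluded=False, numerator_met=False),
]
print(round(performance_rate(sample), 2))  # 2 of 3 in the denominator -> 0.67
```

Even this toy version shows why detailed specifications matter: whether an encounter counts toward the denominator, and which exclusions apply, must be unambiguous before rates can be compared across practices.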
Third, significant resources are required to attract subscribers to a QCDR. Acknowledging the competing avenues for QPP participation, a multipronged strategy was devised to advertise and promote the KQIR, including sessions at the RPA Annual Meeting, email messaging, and one-on-one webinars. The number of subscribers to the KQIR has grown by >400% since its inception (i.e., from 70 to 352). Additionally, use and submission of verifiable data for approved custom measures by QCDR subscribers may not proceed as planned, because the necessary data may require the creation of, and abstraction from, discrete EHR fields. To enhance awareness of and receptivity to custom measures, it is important to have feedback from users regarding clinical relevance, data requirements, performance calculation, and performance goals. Survey data suggest that the most valued aspect of participation was satisfying the MIPS promoting interoperability measure. Concerns included (1) data interfacing, (2) completion of data fields for custom measures, (3) facilitating benchmarking, (4) enhancing interface user friendliness, and (5) decreasing costs. These concerns are being incorporated into the iterative development of the KQIR. In addition, assessment of the specific effects of participation on subscriber clinical behavior and outcomes is planned.
Fourth, along with other custom measure developers, the KQIR encountered numerous hurdles with the CMS's annual review and approval process for custom measures. Major barriers include (1) a rapid review cycle, (2) unclear and inconsistent evaluation standards, (3) requests for inappropriate changes to measures (e.g., a request to consolidate arteriovenous graft and arteriovenous fistula thrombectomy measures in 2016), (4) impractical requests for standardization and harmonization of measures, (5) requests for performance data after insufficient time for adoption and testing, (6) a priori disincentives to the use of custom measures, and (7) an inadequate appeals process.
Responding to these concerns, the CMS began holding monthly support calls to better engage stakeholders in the field. We believe that the CMS should seek a better balance between measure rigor and measure quantity so that diverse aspects of kidney care can be covered (9); the time and resources required for such rigor limit the number of measures that can feasibly be proposed in any given year. We also suggest that the CMS extend the review and approval process for custom measures to at least 2 years, which would ameliorate the discordance between EHR development timelines for custom measure implementation (e.g., adding and validating new data elements) and the CMS's annual measure review and approval cycle. We further propose that the CMS decrease the disincentive to using custom measures by increasing the points awarded for their use. Because they are new and have fewer subscribers using them, custom measures generally do not immediately meet the CMS threshold for performance benchmarking; consequently, they receive less than one third as many points as benchmarked measures, making them less appealing to providers.
Finally, similar to many new endeavors, the financial viability of the KQIR is regularly reviewed. Survey data suggest that costs to subscribers are not a great concern. Although additional fees may be applied by other entities if subscribers adopt an EHR interface and secure file transfer protocol with the KQIR, the annual KQIR fee is $500–$700. These fees are likely offset by MIPS incentive payments and avoidance of penalties. Moreover, potential cost savings include using the data collected for the KQIR for additional purposes (e.g., internal quality assurance and performance improvement). In terms of sponsor costs, the RPA board views the KQIR as a long-term investment (D. Singer, RPA Executive Director, personal communication).
Moving Forward with Qualified Clinical Data Registries
QCDRs have great promise to improve the landscape of quality measurement (9). By serving as comprehensive, clinically detailed, and data-rich resources, they can support measures that are specific, relevant, multifaceted, and focused on key aspects of the care that is actually being delivered for defined patient populations. As with any new process, there must be wide acceptance and alignment of purpose, appropriate resources must be allocated, time must be given, and reasonable accommodations must be made by governing and regulatory organizations.
Disclosures
Dr. Fischer and Dr. Palevsky have nothing to disclose.
Acknowledgments
Dr. Fischer and Dr. Palevsky are members of the Renal Physicians Association Quality, Safety, and Accountability Committee.
The views expressed in this article are those of the authors and do not necessarily reflect the position or policy of the Department of Veterans Affairs or the United States government.
The content of this article does not reflect the views or opinions of the American Society of Nephrology (ASN) or CJASN. Responsibility for the information and views expressed therein lies entirely with the author(s).
Footnotes
Published online ahead of print. Publication date available at www.cjasn.org.
References
- 1. Centers for Medicare and Medicaid Services: Quality Payment Program overview. Available at: https://qpp.cms.gov/about/qpp-overview. Accessed February 20, 2019
- 2. Centers for Medicare and Medicaid Services: MIPS overview. Available at: https://qpp.cms.gov/mips/overview. Accessed February 20, 2019
- 3. Centers for Medicare and Medicaid Services: QCDRs and Physician Compare. Available at: https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/physician-compare-initiative/QCDRs-and-Physician-Compare.html. Accessed February 20, 2019
- 4. Physician Consortium for Performance Improvement: About NQRN. Available at: https://www.thepcpi.org/page/NQRN. Accessed February 20, 2019
- 5. Weinstein A, Beckrich A, Singer D: Nephrology registry gives specialty control of quality data. Nephrol News Issues 29: 58–60, 2015
- 6. Krishnan M, Brunelli SM, Maddux FW, Parker TF 3rd, Johnson D, Nissenson AR, Collins A, Lacson E Jr: Guiding principles and checklist for population-based quality metrics. Clin J Am Soc Nephrol 9: 1124–1131, 2014
- 7. MacLean CH, Kerr EA, Qaseem A: Time out—charting a path for improving performance measurement. N Engl J Med 378: 1757–1761, 2018
- 8. Kern LM, Malhotra S, Barron Y, Quaresimo J, Dhopeshwarkar R, Pichardo M, Edwards AM, Kaushal R: Accuracy of electronically reported "meaningful use" clinical quality measures: A cross-sectional study. Ann Intern Med 158: 77–83, 2013
- 9. Panzer RJ, Gitomer RS, Greene WH, Webster PR, Landry KR, Riccobono CA: Increasing demands for quality measurement. JAMA 310: 1971–1980, 2013