Health Affairs Scholar. 2026 Apr 12;4(4):qxag061. doi: 10.1093/haschl/qxag061

Reporting rates of topped-out merit-based incentive payment system (MIPS) quality measures, 2017-2023

YoonKyung Chung 1, Lauren P Nicola 2, Elizabeth Y Rula 1
PMCID: PMC13071504  PMID: 41982632

Abstract

Introduction

Merit-based Incentive Payment System (MIPS) quality measures are designated as “topped-out” when reporting clinicians consistently achieve high performance, resulting in potential scoring caps and eventual removal. However, self-selected measure reporting may not provide a representative assessment.

Methods

Using CMS Quality Payment Program Experience datasets from 2017 to 2023 linked with MIPS Quality Measures Lists and Benchmark data, we examined reporting rates of all MIPS topped-out quality measures among eligible physicians and by specialty at the time of their first “topped-out” designation.

Results

Between 2017 and 2023, 643,558 physicians reported specialty-relevant measures across 37 specialties and 275 measures, of which 137 (49%) were topped-out. Over half of the topped-out measures had reporting rates below 5%. Only 11 measures were reported by more than half of eligible physicians. The median reporting rate was 7.1% (IQR [1.3%, 28.2%]) and varied across specialties, ranging from 0.6% in geriatric medicine to 40.4% in pathology.

Conclusion

Our findings suggest that CMS topped-out designations may not reflect universally high performance on a measure and highlight the challenges of MIPS measure self-selection and topped-out designation. Opportunities exist within the MIPS design to maintain measures that broadly promote high-quality care for all Medicare beneficiaries and encourage continued improvement.

Keywords: merit-based incentive payment system (MIPS), quality of care, topped-out MIPS quality measures


Key Takeaways.

  • Out of 137 topped-out quality measures between 2017 and 2023, 70 (51%) were reported by less than 5% of eligible physicians at the time of their topped-out designation.

  • Reporting rates varied substantially across physician specialties, with the lowest median reporting rate observed in geriatric medicine (0.6%) and the highest in pathology (40.4%).

  • Applying topped-out measure policies at the measure-entity level (vs measure-level) could preserve pay-for-performance incentives for reporting clinicians while encouraging quality improvement among non-reporting clinicians.

Introduction

The Quality category in Medicare's Merit-based Incentive Payment System (MIPS) aims to assess the quality of care provided by clinicians using a set of pre-approved measures associated with the Centers for Medicare and Medicaid Services (CMS) quality goals for health care. CMS designates a quality measure as “topped out” when most clinicians who report that measure score at the top end of the distribution.1 Topped-out measures are subject to a maximum score cap of 7 points after 2 consecutive years, limiting scoring potential and positive payment adjustments, and may be removed after 3 years.2,3

Traditional MIPS allows participants to self-select 6 quality measures for reporting from a broad inventory, providing flexibility to choose the measures most meaningful and relevant to their practice.1,4 CMS's cap-and-removal policy encourages clinicians to choose other measures that have room for performance improvement and that better distinguish variation in care quality across clinicians.1 However, topped-out designation relies only on the performance of reporting clinicians, regardless of the reporting rate. Consistently high scores among this self-selected group may not sufficiently represent measure performance for all eligible clinicians, yet measure capping and removal affect the reporting options of all clinicians.

Between 2017 and 2025, 152 of the 318 reportable MIPS quality measures (47.8%) were designated as topped out; 109 of these (71.7%) were capped, and 67 (44.1%) were later removed. Measure capping and removal not only limit reporting and scoring opportunities for clinicians but also add significant cost to the MIPS program. CMS selects quality measures through structured review, considering, among other criteria, their evidence base and importance for improving health care quality and outcomes.5,6 The process is costly; CMS alone spent $62.6M developing 36 quality measures between FY 2016 and 2022.7,8 Premature measure cap-and-removal could forgo care quality improvement opportunities for non-reporting clinicians with unknown quality performance, discourage maintenance of quality among reporting clinicians, and incur unnecessary costs for developing replacement measures.

In this study, we examined the extent to which the topped-out status of measures may reflect the performance of only a small fraction of eligible clinicians, overall and at the specialty level. For measures later subject to a 7-point cap, we also calculated reporting rates in the year before the cap was first imposed.

Study data and methods

Using the 2017 to 2023 CMS MIPS Quality Measures Lists and Quality Benchmark Files, we identified MIPS quality measures first designated as topped out during this period and their associated specialty measure sets. We used the 2017 to 2023 CMS Quality Payment Program Experience datasets to obtain physician annual quality measure reporting, excluding measures calculated and scored by CMS (ie, not physician-reported).

The primary outcome was the cumulative reporting rate for each topped-out MIPS quality measure, calculated as the proportion of eligible physicians (ie, relevant specialties) who reported the measure at least once before or during the year it was first topped out (and, as applicable, the year before a cap took effect). We examined the distribution of cumulative reporting rates among all eligible physicians and by specialty with any assigned measure set (Appendix Table S1). We repeated this analysis for non-topped-out measures for comparison. Specialties were ranked by the percentage of topped-out measures, and summary statistics—including the total number of measures (denominator), the number of topped-out measures (numerator), and the numbers of capped and removed measures—were compiled for the entire study period and for 2023.
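As a rough illustration of the primary outcome, the cumulative reporting rate described above can be sketched in Python. The data structures, physician IDs, and measure IDs below are hypothetical toy values, not the layout of the CMS files used in the study:

```python
def cumulative_reporting_rate(reports, eligible, measure, topped_out_year):
    """Proportion of eligible physicians who reported `measure` at least
    once before or during the year it was first designated topped out.

    reports:  set of (physician_id, measure_id, year) tuples
    eligible: set of physician IDs whose specialty measure set includes the measure
    """
    reporters = {pid for (pid, mid, yr) in reports
                 if mid == measure and yr <= topped_out_year and pid in eligible}
    return len(reporters) / len(eligible)

# Toy example: 4 eligible physicians; p3's 2022 report falls after the
# (hypothetical) 2021 topped-out year and is excluded.
reports = {("p1", "m113", 2018), ("p2", "m113", 2020), ("p3", "m113", 2022)}
eligible = {"p1", "p2", "p3", "p4"}
rate = cumulative_reporting_rate(reports, eligible, "m113", 2021)  # 2 of 4 = 0.5
```

The same function, applied with the year before a cap took effect in place of the topped-out year, gives the pre-cap reporting rates reported in the Results.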

Several limitations should be noted. First, measures may have multiple submission/collection types; we classified a measure as topped out if it was topped out for any collection type. Collection type information was only available for 2022, and most (71%) measures were topped out across all types. Second, we excluded QCDR measures, which are reported only through registries and comprised fewer than 5% of reported measures; their share may increase with future adoption of MIPS Value Pathways (MVPs), as in 2025, 13 of 21 MVPs included at least one QCDR measure.8

Results

Our study included 643 558 MIPS-participating physicians across 37 specialties who reported a total of 275 MIPS quality measures between 2017 and 2023, of which 137 (49%) were topped out during the period (Appendix Table S2). Reporting rates of topped-out measures were generally low, with mean and median cumulative reporting rates of 14.7% and 4.5% (IQR [1.6%, 22.5%]), respectively (Figure 1). The measure with the highest reporting rate, at 73%, was ID #113: Influenza Immunization. More than half of topped-out measures had a reporting rate below 5%, and only 8.1% exceeded 50%. Non-topped-out measures had mean and median cumulative reporting rates of 8.1% and 0.9% (IQR [0.2%, 6.7%]), respectively (Appendix Figure S1).

Figure 1.

A histogram and a cumulative distribution showing the proportion of eligible physicians who reported topped-out quality measures at least once during 2017 to 2023.

Distributions of the proportion of eligible physicians who reported topped-out quality measures at least once during 2017 to 2023. Source: Authors’ analysis of data from the annual 2017 to 2023 CMS Quality Payment Program Experience datasets, MIPS Quality Measures Lists and Quality Benchmark Files. Notes: The unit of analysis was quality measure. The figure shows the cumulative reporting rate among topped-out measures in 2 ways: 1) a histogram and 2) a cumulative distribution. Topped-out measures in our study were MIPS quality measures that were designated as topped-out by Centers for Medicare & Medicaid Services in any year between 2017 and 2023. There were 137 topped-out measures. The cumulative reporting rate for a topped-out measure is the proportion of eligible physicians who reported the measure at least once before or during the year it was first designated as topped out. Cumulative distribution shows the proportion of topped-out measures that had a cumulative reporting rate equal to or below the given rate. Eligibility was determined based on whether the quality measure was included in the physician's specialty measure set during the year.

Substantial variation in reporting rates was observed across specialties (Figure 2, Appendix Table S3). Excluding specialties with only one measure (pulmonology, endocrinology, and neurosurgery), median cumulative reporting rates of topped-out, specialty-specific measures ranged from 0.6% to 40.4%. The highest median reporting rates were observed in pathology (40.4%), diagnostic radiology (37.8%), oncology/hematology (32.4%), nephrology (31.3%), and anesthesiology (31.2%). The lowest median reporting rates were observed in geriatric medicine (0.6%), internal medicine (0.8%), general practice and family medicine (1.1%), psychiatry (1.5%), and neurology (1.6%). Across all 440 specialty-specific topped-out measures, the median cumulative reporting rate was 7.1% (IQR [1.3%, 28.2%]). For comparison, the median cumulative reporting rate among all 360 specialty-specific non-topped-out measures was 2.0% (IQR [0.36%, 9.8%]) (Appendix Table S4).

Figure 2.

A chart showing distribution of the proportion of physicians who reported topped-out quality measures at least once during 2017 to 2023, by specialty.

Distribution of the proportion of physicians who reported topped-out quality measures at least once during 2017 to 2023, by specialty. Source: Authors’ analysis of data from the annual 2017 to 2023 CMS Quality Payment Program Experience datasets, MIPS Quality Measures Lists and Quality Benchmark Files. Note: The unit of analysis was quality measure. Topped-out measures were MIPS quality measures that were designated as topped-out by Centers for Medicare & Medicaid Services in any year between 2017 and 2023. Each dot represents the proportion of physicians who reported the specialty-specific topped-out quality measure at least once before or during the year it was first designated as topped out (ie, the cumulative reporting rate of the measure). The median dot represents the median cumulative reporting rate among topped-out measures in that specialty. The median line represents the overall median cumulative reporting rate among 440 specialty-specific topped-out measures. Specialties are abbreviated as follows: Allergy = Allergy/Immunology; Rad = Radiology; Med = Medicine; Family Practice includes General Practice and Family Medicine; Mental Health = Mental/Behavior Health; Ob/Gyn = Obstetrics/Gynecology; Onc/Hematology = Oncology/Hematology; Surg = Surgery; Onc = Oncology; Physical Medicine = Physical Medicine and Rehabilitation.

The availability of specialty-specific MIPS quality measures and the percentage of topped-out measures differed across specialties (Appendix Table S5). Out of 37 specialties, 26 (70.3%) had more than half of their measures first topped out by 2023. Radiation oncology (100%), general surgery (94.1%), diagnostic radiology (93.3%), plastic and reconstructive surgery (92.9%), hospitalists (92.3%), and anesthesiology (90.9%) had the highest proportions. All had fewer than 6 measures that were not capped at 7 points in 2023, meaning physicians in these specialties could not achieve the maximum score by reporting only specialty-relevant measures.

Similar patterns were observed for topped-out measures that were later capped during the study period (Appendix Figure S2). Among the 305 specialty-specific capped measures, half had a cumulative reporting rate below 5.6% before the cap took effect, meaning that at least 94.4% of eligible physicians had never reported them (median = 5.6%; IQR [1.2%, 22.3%]). The median reporting rate varied widely across specialties, from 0.51% (infectious disease) to 48.6% (diagnostic radiology).

Discussion

Between 2017 and 2023, approximately half of MIPS physician quality measures were designated as topped out, and nearly 40% of those were removed; however, most of these measures had low reporting rates. The overall median reporting rate across the 137 topped-out measures among eligible physicians was 4.5%, and only 11 were reported by a majority. The extent of reporting varied across and within specialties, but no specialty with more than 1 topped-out measure had a median reporting rate above 50%. These findings demonstrate that most measures were topped out based on the performance of a small subset of eligible clinicians, which is insufficient to conclude that variation in scores across providers is too limited to distinguish quality differences, or that no additional opportunity for quality improvement remains.1 Self-selection of measures engenders bias such that the probability of reporting a measure is directly related to performance, resulting in nonrandom missingness and systematic inflation of scores compared with a representative sample.9 Self-selection bias was raised as a concern in a 2018 MedPAC report, but it is difficult to prove in the absence of scores for non-reporting physicians.10 However, a study of a predecessor pay-for-performance program found evidence that high prior performance predicted a higher likelihood of reporting a measure.11 Therefore, there is likely continued room for quality improvement on topped-out measures. Notably, non-topped-out measures did not exhibit higher reporting rates: half of the 138 measures had reporting rates below 1% among eligible physicians, and only 5 were reported by a majority.

Our study findings help explain the discrepancy between MIPS performance and actual clinical performance found in previous studies.12,13 The low reporting rates observed among topped-out and non-topped-out measures alike imply minimal overlap in reported measures across physicians, even within the same specialty, making meaningful comparisons across physicians difficult. CMS's planned transition from traditional MIPS to MVPs, with fewer reportable measures aligned with clinician specialty or medical condition,14 may help resolve some of these limitations.

More than two-thirds of topped-out measures in our data (92 of 137) were subject to a 7-point cap, reducing the maximum points a measure could earn by 30%. As shown in our analysis, this cap policy left several specialties with few uncapped measures, reducing financial incentives for high-quality care, a concern voiced by affected specialties.15-17 Accordingly, physicians in multi-specialty practices commonly report measures outside their specialty.4,18-21 In response, beginning in 2025, CMS removed the 7-point cap for certain topped-out measures in specialty measure sets with limited measures.3,22

The current MIPS program is the largest pay-for-performance program designed to improve population health and care quality in Medicare. While the cap-and-removal policy of topped-out measures limits clinician reporting of mastered measures, thereby incentivizing them to report on and improve other aspects of care quality, it may also hamper quality improvement in key areas for non-reporting clinicians. One way to achieve both goals is to implement the topped-out policies, including cap-and-removal, at the measure-entity level (as opposed to the measure-level), and apply them only to reporting clinicians or groups who have already achieved the topped-out threshold for a given measure. This approach could ensure that such measures continue to be available to clinicians who have not previously reported them, including new entrants to Medicare, and the subset of reporting clinicians who have not yet achieved the level of care quality associated with the measures. In addition, priority and outcome measures that best reflect the program's goals could be exempt from the cap and removal policy to help maintain high quality in areas essential to improving population health by incentivizing both improvement and maintenance of high quality.

Another policy application is considering topped-out measures with low reporting rates as MVP quality measures. CMS has acknowledged that the development of new measures has not kept pace with the planned retirement of topped-out measures.8 Reconsidering topped-out outcome or high-priority MIPS quality measures in MVPs could increase their reporting rate and encourage continued quality improvement.

In conclusion, most MIPS quality measures were deemed topped out based on scores reported by a small subset of eligible physicians; such scores may not represent consistently high quality across eligible providers. While measure self-selection creates challenges, CMS must balance necessary flexibilities in an already burdensome program with policies that continue to reward high-quality performance on rigorously validated metrics. Our results suggest that CMS's rules result in premature determinations of topped-out status. Our study highlights the need for potential redesign of quality reporting in the current MIPS program and for policies that encourage meaningful, ongoing adherence to high-quality care.

Supplementary Material

qxag061_Supplementary_Data

Contributor Information

YoonKyung Chung, Harvey L. Neiman Health Policy Institute, Reston, VA 20191, USA.

Lauren P Nicola, Triad Radiology Associates, Winston-Salem, NC 27103, USA.

Elizabeth Y Rula, Harvey L. Neiman Health Policy Institute, Reston, VA 20191, USA.

Supplementary material

Supplementary material is available at Health Affairs Scholar online.

