Abstract
Medicare adjusts payments to Medicare Advantage (MA) insurers using risk scores that summarize the relationship between fee-for-service (FFS) Medicare spending and beneficiaries’ demographic characteristics and documented health conditions. Research shows that MA insurers have increasingly documented conditions more thoroughly than traditional Medicare—resulting in higher payments to insurers—but little is known about what factors contribute to diverging risk scores. We apportion that divergence between market-wide increases and increases that vary with length of MA enrollment. We also examine whether effects vary across plan types and whether the enrollment duration effect is contingent upon remaining with the same insurer. Using Medicare administrative data from 2008 to 2013, we employ a difference-in-differences model to compare the growth in risk scores of Medicare beneficiaries who switch from FFS to MA to that of beneficiaries who remain in FFS. We find that the effect of MA enrollment on risk scores increased from 5% in 2009 to 8% in 2012 and that continuous enrollment in MA was associated with an additional 1.2% increase per year, regardless of continuous enrollment with an insurer. Thus, even among those who switched to MA in 2009, enrollment duration comprised less than one-third of the coding intensity difference in 2012. We also find that risk scores grew faster in areas with greater MA penetration and among Health Maintenance Organization enrollees. Overall, our findings suggest that market-wide factors contributed most to the increasing divergence between FFS and MA risk scores.
Keywords: Medicare, Medicare Part C, risk adjustment, coding intensity, difference-in-differences
What do we already know about this topic?
Research shows that MA insurers have increasingly documented conditions more thoroughly than traditional Medicare—resulting in higher payments to MA insurers—but little is known about what factors contribute to that divergence.
How does your research contribute to the field?
We apportion that divergence in risk scores between increases that occur for all enrollees each year and increases that vary with the length of MA enrollment, and we examine whether the enrollment duration effect is contingent upon remaining with the same insurer.
What are your research’s implications toward theory, practice, or policy?
Overall, our findings suggest that market-wide factors contributed most to the increasing divergence between FFS and MA risk scores.
Introduction
Medicare beneficiaries may elect to receive their health care services through the traditional fee-for-service (FFS) system or through a managed care plan in the Medicare Advantage (MA) program. Under the FFS system, the government generally pays providers based on the services beneficiaries use and bears the risk of higher-than-expected health care costs. By contrast, in the MA program, the government pays insurance plans a flat risk-adjusted fee (or capitation), and the plan in turn pays providers. Under the MA system, plans bear the risk if health care costs exceed Medicare’s payments. In 2015, 31% of Medicare beneficiaries were enrolled in MA.1
Insurers in the MA program have an incentive to enroll beneficiaries who the insurer expects will cost less than the Medicare capitation for that beneficiary. The Medicare program attempts to address that incentive by using a relatively sophisticated risk adjustment mechanism that makes it more difficult for insurers to prospectively determine which individuals will have costs below what Medicare pays. Insurers have an incentive both to find ways to provide health care that costs less than what Medicare FFS would have paid and to increase Medicare payments by documenting conditions more thoroughly than those conditions would have been documented in the FFS system. This article aims to identify the extent to which insurers are engaging in the latter activity.
More specifically, this study uses risk score data for individual beneficiaries in MA and FFS to explore how MA enrollment affects enrollees’ risk scores. Consistent with prior research, we find substantive coding differences between FFS and MA and that those differences are growing over time. We add to prior research by decomposing that growth into increases that occur for all MA enrollees and increases that vary with the length of MA enrollment. We next examine whether the length of enrollment effect is related to whether beneficiaries remain enrolled with the same MA insurer. That analysis may shed light on what types of factors are driving the increasing divergence between FFS and MA risk scores. We find that the majority of risk score differences are attributable to increases that affected all MA enrollees and that although length of enrollment also contributed to the divergence, that effect was not contingent upon beneficiaries remaining enrolled with the same insurer. We also find that increases in coding intensity differences varied across MA plan types and, in particular, were stronger for beneficiaries in Health Maintenance Organizations (HMOs). Altogether, those findings suggest that coding intensity differences primarily stem from market-wide mechanisms and that such changes may be the result of health care providers increasingly learning to fully document beneficiaries’ health conditions on the claims that they submit to insurers for reimbursement.
Policy Background
MA insurers are paid using a methodology that reflects local health care spending, plan-specific bids, plan quality, and enrollees’ characteristics. Each year, insurers submit bids to the Centers for Medicare & Medicaid Services (CMS) with the price they propose to charge to provide benefits under Medicare Part A and Part B for a given plan to an average Medicare beneficiary.i CMS compares those bids to benchmarks that reflect Medicare’s FFS spending at the county or regional level. If the bid is less than the benchmark, which is true for most plans, the base payment for a plan includes a rebate that reflects the difference between the bid and benchmark. (Although CMS pays the rebate to MA insurers, they must pass the rebate on to enrollees in the form of additional benefits or lower premiums.) If the bid equals the benchmark, the base payment equals the benchmark. If the bid is greater than the benchmark, the base payment equals the benchmark plus a premium paid by enrollees. That premium equals the difference between the bid and the benchmark. Beginning in 2012, CMS has adjusted benchmarks and rebates based on each plan’s quality ratings.2
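To make the bid-versus-benchmark rule concrete, the sketch below implements the base-payment logic described above. It is a simplification: the rebate share shown is a placeholder for the statutory percentage (which now depends on plan quality ratings), and the quality adjustments to benchmarks and rebates are omitted.

```python
def ma_base_payment(bid: float, benchmark: float, rebate_share: float = 0.5):
    """Return (monthly base payment to the plan, enrollee premium).

    Simplified sketch of the rule described in the text; rebate_share is an
    illustrative placeholder, and quality bonuses to benchmarks are ignored.
    """
    if bid < benchmark:
        # Rebate must be passed to enrollees as extra benefits or lower premiums.
        rebate = rebate_share * (benchmark - bid)
        return bid + rebate, 0.0
    if bid == benchmark:
        return benchmark, 0.0
    # Bid above benchmark: the plan receives the benchmark and enrollees pay the difference.
    return benchmark, bid - benchmark


print(ma_base_payment(bid=760.0, benchmark=800.0))  # (780.0, 0.0)
```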
CMS also adjusts the base payments for enrollees’ characteristics. Initially, payments were only adjusted for enrollees’ observable characteristics such as age, gender, and eligibility for Medicaid. The Balanced Budget Act of 1997 required CMS to incorporate information about health status into those adjustments. Between 2004 and 2007, CMS phased in the current risk adjustment system, which uses diagnosis information to classify beneficiaries’ health conditions into Hierarchical Condition Categories (HCCs).
Under the current risk adjustment system, CMS uses an algorithm to derive a single risk score for each enrollee based on HCCs and other beneficiary characteristics. That algorithm reflects the relationship between beneficiary characteristics and spending in the FFS population.3,4 An enrollee’s risk score represents the expected difference in spending for each Medicare beneficiary relative to spending for a FFS beneficiary with average risk. CMS centers FFS risk scores around 1.0. Higher numbers reflect higher expected spending and lower numbers reflect lower expected spending.
HCCs for beneficiaries in FFS Medicare are obtained from claims that providers submit to receive payment for services. For many types of providers, HCCs in the FFS population are informational only and do not affect payment.ii In MA, insurers report enrollees’ HCCs to CMS each quarter, along with the type of providers that treated the diagnosed conditions and applicable service dates. Demographic characteristics and HCCs determine risk scores, and CMS multiplies the base payment to MA insurers by those scores (thus increasing payments for higher-risk individuals and reducing them for lower-risk individuals).
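As a stylized illustration of how those pieces fit together: the CMS-HCC score adds a demographic factor and one factor per documented condition, and CMS scales the base payment by that score. The factor values below are invented for illustration; actual coefficients come from CMS’s published risk adjustment model.

```python
# Illustrative relative-cost factors (not actual CMS-HCC coefficients).
DEMOGRAPHIC_FACTORS = {"female_75_79": 0.45}
HCC_FACTORS = {"diabetes_without_complication": 0.10, "congestive_heart_failure": 0.35}

def risk_score(demographic_cell, documented_hccs):
    """Additive risk score: demographic factor plus one factor per documented HCC."""
    return DEMOGRAPHIC_FACTORS[demographic_cell] + sum(HCC_FACTORS[h] for h in documented_hccs)

score = risk_score("female_75_79", ["diabetes_without_complication", "congestive_heart_failure"])
payment = 780.0 * score  # base payment scaled by the enrollee's risk score (0.90 here)
```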
Because there are different incentives for MA insurers than for FFS providers, health conditions for MA beneficiaries are widely thought to be more comprehensively documented than are health conditions for FFS beneficiaries, resulting in a “coding intensity difference” between FFS and MA risk scores.5,6 Reflecting concerns about that difference, the Deficit Reduction Act of 2005 required CMS to adjust risk scores for MA beneficiaries before calculating payment amounts. Subsequent legislation specified minimum annual adjustments that will last until CMS switches to a risk adjustment system that relies on MA diagnoses and spending data.7 CMS began incorporating MA diagnoses data into the risk adjustment system in 2015 and intends to rely solely on those data by 2020.8
Understanding Coding Differences
Estimating the magnitude of the difference in coding intensity between MA and FFS is a challenge because differences in risk scores can reflect both coding intensity and selection—the different health profiles of individuals who choose MA rather than FFS. Selection may occur if the beneficiaries who choose to enroll in MA have different health characteristics than beneficiaries who remain in FFS. Indeed, research suggests that before the implementation of the current risk adjustment system, beneficiaries in MA tended to have less morbidity and lower mortality rates than FFS beneficiaries with similar demographic characteristics.9-12 By increasing payments for higher-risk individuals and reducing them for lower-risk individuals, incorporating health conditions into risk adjustment mechanisms reduces insurers’ ability to identify and disproportionately enroll individuals who would be likely to cost insurers less than Medicare’s capitated payments. Evidence on whether selection still exists in MA is mixed, though most studies find that risk adjustment has reduced its magnitude.13-21
Although there are few studies on coding differences, they consistently show that the risk scores of beneficiaries in MA grow faster than the risk scores of beneficiaries in FFS.5,22,23 Moreover, the coding intensity differences are greater than CMS’ coding intensity adjustments and are increasing over time.6,24,25 Several studies have found that coding differences increased over time for all MA enrollees irrespective of how long they were enrolled in MA.6,22 However, studies have also found that coding intensity is related to the length of enrollment in MA. For example, using data from 2006 to 2013, MedPAC found that after beneficiaries switched to MA, their risk scores grew more rapidly than the risk scores of beneficiaries who remained in FFS and that the difference in growth rates was directly related to how long beneficiaries remained enrolled in MA.23
This study further explores the mechanisms that contribute to differences in coding intensity along several dimensions. First, we explore the extent to which increasing coding differences occurred for all enrollees irrespective of enrollment length and the extent to which increasing coding differences depended on enrollees’ length of enrollment. Second, we explore the relative contribution of other enrollment patterns on risk score growth including the type of MA plan beneficiaries are enrolled in and whether the enrollment duration effect is conditional on remaining enrolled with the same insurer. Those analyses may reveal some of the factors that have contributed to the faster growth of risk scores among beneficiaries in MA. Understanding the mechanisms that contribute most to the growth in risk scores may provide insights into the strategies that will be most useful when making adjustments for coding intensity differences.
Insurers may employ a number of different strategies for increasing their risk scores beyond what would be reported in FFS. For example, MA insurers may encourage physicians and hospitals to include as comprehensive a list of diagnoses as possible on their claims or encounter records.iii Other strategies insurers might employ include conducting health risk assessments to identify chronic conditions when beneficiaries enroll in MA, reviewing enrollees’ medical records to identify chronic conditions that were not reported in claims and encounter records, and reviewing prior years’ records to ensure that conditions are documented consistently from year to year.
Several of those mechanisms could contribute to market-level increases in coding differences. Technological advances may have allowed insurers to more precisely document risk, or the technology might have become more accessible or affordable, allowing a wider number of insurers to adopt such practices. At the same time, as MA penetration has increased, providers have likely become more experienced at documenting chronic conditions on their claims and encounter records, increasing coding intensity for all Medicare beneficiaries and leading risk scores to grow faster in areas with greater MA penetration. Alternatively, insurers who are better at documenting risk might be able to reduce their bids and thus could offer better benefits and attract a larger percentage of MA enrollees, increasing average coding intensity. If market-level forces are important, then we would expect coding intensity would increase across all MA enrollees, irrespective of their length of enrollment in MA.
Those mechanisms could also contribute to coding intensity differences that vary with length of enrollment if the increased documentation of health conditions continues from one year to the next—even when the beneficiary did not receive treatment for that condition in the subsequent year. Beneficiaries may be diagnosed with chronic conditions in one year such as high blood pressure or diabetes but such conditions might not trigger new medical appointments in subsequent years if they are well-managed. However, if conditions are more likely to be continuously documented from year to year in the claims or medical records of MA enrollees than they are for FFS beneficiaries, then the effects of MA enrollment on risk scores would increase over time. Either insurers or providers could document such conditions in subsequent years: If such documentation is largely through insurers’ efforts, then we would expect to find that coding increases associated with enrollment would be conditional upon beneficiaries remaining enrolled with the same plan.
Methods
Our analysis relied on Medicare administrative data that included information on demographic characteristics, program enrollment, and beneficiary risk scores. We used demographic and enrollment data from the 2008-2012 beneficiary summary files and risk scores from conditions documented during 2008-2012. We supplemented our administrative data with plan-level enrollment data published by CMS that allowed us to match each plan to its parent organization.iv The administrative data include contract and plan enrollment for each beneficiary, but not the insurer (i.e., the parent organization). Most insurers operate multiple contracts, each of which can comprise multiple plans.
To estimate the effect of MA enrollment on risk scores, we restricted the study population to beneficiaries who were continuously enrolled in Medicare from 2008 through 2013 and were also exclusively enrolled in FFS during 2008. Continuous enrollment allowed us to observe the growth in risk scores over time, and the availability of a FFS-based risk score for all beneficiaries in 2008 ensured that we were able to measure the growth in risk scores from a common FFS baseline. We divided the study population into stayers and switchers: Stayers include beneficiaries that remained in FFS for the entire study period, and switchers include beneficiaries who switched from FFS to MA in any year from 2009 through 2012. That strategy allowed us to limit the effects of selection because we compared the growth in switcher and stayer risk scores from a common FFS baseline. This approach is similar to that employed by Newhouse and others in their studies of selection in MA19,20 and to the approach of MedPAC in its studies of coding intensity.23 Among switchers, we excluded beneficiaries in plans other than HMOs, preferred provider organizations (PPOs), private FFS (PFFS) plans, and special needs plans.v We also excluded switchers if they switched back to FFS during the study period so that we could focus on the effects of switching from FFS to MA. Those selection criteria resulted in a study population comprising 21.0 million stayers and 2.3 million switchers.
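A minimal sketch of that sample construction follows, assuming a beneficiary-year table with hypothetical column names (bene_id, year, in_ma). The authors’ actual processing was done in SAS, and the plan-type exclusions described above are omitted here.

```python
import pandas as pd

def classify_cohorts(enroll: pd.DataFrame):
    """Split beneficiaries into FFS stayers and 2009-2012 MA switcher cohorts.

    `enroll` holds one row per beneficiary-year (2008-2013) with a 0/1 in_ma
    flag. Column names are hypothetical; plan-type exclusions are omitted.
    """
    wide = (enroll.pivot(index="bene_id", columns="year", values="in_ma")
                  .dropna())                       # observed in every year, 2008-2013
    wide = wide[wide[2008] == 0]                   # exclusively FFS at baseline (2008)

    ma = wide[[2009, 2010, 2011, 2012, 2013]]
    ever_ma = ma.max(axis=1).eq(1)
    # "No switch-backs": once the MA flag turns on, it stays on through 2013.
    no_return = ma.cummax(axis=1).eq(ma).all(axis=1)

    stayers = wide.index[~ever_ma]
    switch_year = ma.loc[wide.index[ever_ma & no_return]].idxmax(axis=1)  # first MA year
    switchers = switch_year[switch_year <= 2012]   # 2009-2012 switch cohorts
    return stayers, switchers                      # switchers maps bene_id -> year of switch
```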
Differences Between Switchers and Stayers
Descriptive data show that switchers and stayers were largely similar in terms of demographic and eligibility characteristics (Figure 1). Switchers were younger than stayers, more likely to be a minority race or ethnicity, more likely to have originally been eligible for Medicare on the basis of disability, and less likely to have spent 6 or more months in an institution. Baseline average risk scores were nearly 10% lower for switchers than for stayers (0.90 compared with 0.98), yet switchers experienced much faster increases in risk scores over time (Figure 2). Risk scores increased 42% for switchers from 2009 to 2013, compared with 29% for stayers.
Figure 1.
Percent of beneficiaries with certain characteristics: comparing those who switch to Medicare Advantage (switchers) to those who stay in fee-for-service (stayers).
Source. Authors’ analysis of Medicare beneficiary summary file and risk adjustment data, 2008-2013.
Note. Medicaid eligibility refers to full Medicaid eligibility; Institutionalized refers to spending at least half of the year in an institution.
Figure 2.
Risk scores for switchers and stayers, 2009-2013.
Source. Authors’ analysis of Medicare beneficiary summary file and risk adjustment data, 2008-2013.
Note. This figure reports raw risk scores that are not normalized and do not reflect CMS’s coding adjustment.
Estimation Strategy
To compare the growth in risk scores between switchers and stayers, we employed a differences-in-differences model with individual and year fixed effects (see Equation 1).
$$RiskScore_{it} = \alpha_i + \gamma_t + \beta' MAYear_{it} + \delta\, MADuration_{it} + \theta' X_{it} + \varepsilon_{it} \qquad (1)$$

The dependent variable, $RiskScore_{it}$, represents the risk score for beneficiary $i$ for conditions documented in year $t$. The variables of interest are $MAYear_{it}$ and $MADuration_{it}$. $MAYear_{it}$ represents a vector of interaction terms between an MA enrollment indicator and year fixed effects; those enrollment year terms represent the overall effect of being enrolled in MA in a given year. $MADuration_{it}$ is an enrollment duration term that counts the subsequent number of years of MA enrollment.

$\alpha_i$ and $\gamma_t$ represent individual and year fixed effects, and $\varepsilon_{it}$ is the error term. $X_{it}$ represents a vector of individual time-varying control variables: MA penetration in the beneficiary’s county of residence (defined as the percentage of Medicare beneficiaries enrolled in MA in a given year), indicators for full and partial dual enrollment in Medicaid, and an indicator for spending at least half of the year in an institution.vi We did not control for gender, race, or other beneficiary characteristics that remain constant over time because the individual fixed effects account for those differences.vii
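For readers who want to see the estimator spelled out, the following is a sketch of how Equation 1 could be fit with an off-the-shelf panel estimator (Python’s linearmodels package). The authors estimated the model in SAS, and all variable names below are illustrative.

```python
import pandas as pd
from linearmodels.panel import PanelOLS

def estimate_equation_1(panel: pd.DataFrame):
    """Fit Equation 1 on a beneficiary-year panel (column names are illustrative).

    Expected columns: bene_id, year, risk_score, ma_2009..ma_2012 (MA-by-year
    indicators), years_in_ma, and the time-varying controls named below.
    """
    data = panel.set_index(["bene_id", "year"])
    formula = (
        "risk_score ~ ma_2009 + ma_2010 + ma_2011 + ma_2012 + years_in_ma"
        " + ma_penetration + full_dual + partial_dual + institutionalized"
        " + EntityEffects + TimeEffects"  # individual and year fixed effects
    )
    # The article's full-sample estimates use unclustered standard errors;
    # clustering by beneficiary, as in the authors' 5% subsample check, looks like this:
    return PanelOLS.from_formula(formula, data=data).fit(
        cov_type="clustered", cluster_entity=True
    )
```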
In a specification test, we replaced our dependent variable with an alternative risk adjuster that is less likely to be subject to coding practices, following the methodology described by Colla et al.26 That methodology used the combined annual rates of four low-variation conditions (acute myocardial infarction [AMI], colorectal cancer, hip fracture, and stroke) as a substitute risk adjuster because those conditions all require an acute care hospitalization and are less likely to be susceptible to coding differences. If changes in MA enrollees’ health were driving the results, the coefficients on the MA variables in that model should be similar to the coefficients in our main model.
One limitation of this specification is that unlike Colla et al, we did not have access to beneficiaries’ inpatient hospital claims and diagnoses to determine which MA enrollees had each of the conditions. Instead, we used HCCs as a proxy for the conditions. Unfortunately, in several cases the HCCs did not overlap perfectly with the four conditions, and thus might have been sensitive to some coding intensity differences. The HCCs we used included AMI; breast, prostate, colorectal, and other cancers and tumors (for colorectal cancer); hip fracture/dislocation (for hip fracture); and ischemic or unspecified stroke (for stroke). We also included a specification with AMI likelihood as the dependent variable because that condition overlapped perfectly with the Colla et al methodology.
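In practice, the specification test only swaps the dependent variable. A brief sketch, assuming the beneficiary-year panel is loaded in a DataFrame named `panel` as in the sketch above and using made-up HCC flag names:

```python
# Hypothetical 0/1 HCC flags approximating the four low-variation conditions.
LOW_VARIATION_FLAGS = ["hcc_ami", "hcc_colorectal_and_other_cancers",
                       "hcc_hip_fracture", "hcc_stroke"]

panel["low_variation_count"] = panel[LOW_VARIATION_FLAGS].sum(axis=1)
# Re-estimate Equation 1 with the alternative outcomes, e.g., by replacing
# "risk_score" with "low_variation_count" in the formula above, and again
# with "hcc_ami" alone as the dependent variable.
```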
Our second model tests whether the effect of continuous enrollment in MA is conditional upon remaining enrolled with the same insurer by adding a term that reflects the number of years switchers stayed with their current insurer. Among people in the 2009-2011 switch cohorts, 81% remained with the same insurer during the analysis period; specifically, 71% of the 2009 cohort, 85% of the 2010 cohort, and 91% of the 2011 cohort did not switch insurers during the study period (the 2012 switch cohort was enrolled in MA for only 1 year). A major limitation of this analysis is the significant collinearity between insurer duration and enrollment duration: the correlation coefficient between $MADuration_{it}$ and $InsurerDuration_{it}$ is 0.9.
In Equation 2, $InsurerDuration_{it}$ is constructed similarly to $MADuration_{it}$ but represents the number of years since switching to a particular insurer rather than the number of years since switching to MA. Because of substantial merger and acquisition activity, we define duration of enrollment with the same insurer as length of enrollment with either the same contract or the same parent organization.

$$RiskScore_{it} = \alpha_i + \gamma_t + \beta' MAYear_{it} + \delta\, MADuration_{it} + \lambda\, InsurerDuration_{it} + \theta' X_{it} + \varepsilon_{it} \qquad (2)$$
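Given that near-collinearity, a useful diagnostic before fitting Equation 2 is simply to confirm how strongly the two duration terms move together among switchers (again using the hypothetical panel and column names from the sketches above):

```python
# Correlation between MA enrollment duration and insurer enrollment duration
# among switcher observations; the article reports a value of roughly 0.9.
switcher_obs = panel[panel["switcher"] == 1]   # hypothetical switcher indicator
print(switcher_obs[["years_in_ma", "years_with_insurer"]].corr())

# Equation 2 then simply appends years_with_insurer to the Equation 1 formula.
```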
In addition to our specification test, we conducted several additional robustness checks. Those include one model that replicated the approach used by MedPAC (Table S1), a test for pre-switch differences between switchers and stayers, and limiting the analysis to beneficiaries who moved from one state to another state (Table S2). For additional details on those robustness checks, see the Supplemental Materials.
Results
Table 1 reports regression results. Consistent with existing research, the results for Equation 1 suggest that MA enrollment is associated with an increase in risk scores relative to FFS that has grown over time, from 5.3 percentage points in 2009 to 8.0 percentage points in 2012. Each additional year in MA is associated with a 1.2 percentage point increase in risk scores.viii The controls for dual eligibility and residing in an institution have a substantial and positive effect on risk scores. Finally, each percentage point increase in the MA penetration rate is associated with a 0.6 percentage point increase in risk scores. On average, beneficiaries lived in counties with an MA penetration rate of 20% in 2008 and 25% in 2012, suggesting that the increase in MA penetration increased risk scores by 3 percentage points over our study period. This is consistent with the hypothesis that market-level factors—namely, provider documentation efforts—have significantly contributed to the increased differences in coding intensity and could suggest that our results are attenuated by spillover effects onto coding intensity in the FFS population.
Table 1.
Regression Results.
| Model | Equation 1: enrollment year + enrollment duration | Count of low-variation conditions | Acute myocardial infarction | Equation 2: enrollment year + enrollment duration + insurer duration |
|---|---|---|---|---|
| Enrolled in MA in 2009 | 0.053*** (51.600) | 0.003*** (6.780) | 0.000*** (2.900) | 0.053*** (51.580) |
| Enrolled in MA in 2010 | 0.067*** (94.250) | 0.002*** (7.950) | 0.000*** (3.850) | 0.067*** (94.230) |
| Enrolled in MA in 2011 | 0.070*** (101.900) | −0.001*** (3.370) | 0.000*** (3.580) | 0.070*** (101.890) |
| Enrolled in MA in 2012 | 0.080*** (102.900) | 0.000 (0.900) | 0.000 (1.410) | 0.080*** (102.800) |
| Years in MA since switch | 0.012*** (28.600) | 0.001*** (4.660) | −0.000 (1.400) | 0.012*** (16.760) |
| Years since switching to current insurer | | | | 0.000 (0.500) |
| MA penetration rate | 0.061*** (28.600) | 0.002** (2.430) | 0.001** (2.970) | 0.061*** (28.640) |
| Eligible for full Medicaid benefits | 0.507*** (780.800) | 0.055*** (198.300) | 0.008*** (88.460) | 0.507*** (780.800) |
| Eligible for partial Medicaid benefits | 0.295*** (416.900) | 0.023*** (77.130) | 0.004*** (39.510) | 0.295*** (416.860) |
| Institutionalized for 6 months or longer | 0.087*** (90.600) | −0.032*** (77.990) | −0.010*** (68.080) | 0.087*** (90.640) |
| Number of observations | 116,101,120 | 116,101,120 | 116,101,120 | 116,101,120 |
| R² | .668 | .566 | .247 | .668 |
| Individual fixed effects | Yes | Yes | Yes | Yes |
| Year fixed effects | Yes | Yes | Yes | Yes |
Note. MA = Medicare Advantage.
Source. Authors’ analysis of Medicare beneficiary summary file and risk adjustment data, 2008-2013. Values in parentheses are t-values.
*p < .10. **p < .05. ***p < .01.
In the specification test, we replaced the dependent variable with two alternatives that are less likely to be affected by coding differences. That analysis found that the coefficients for all five of the MA variables had magnitudes less than 0.005.ix Those results suggest that the underlying health status of MA enrollees did not change relative to that of FFS beneficiaries, implying that the results stem from coding differences rather than selection. (Additional robustness checks in the Supplemental Materials further support that assessment.)
The results from Equation 2 show that length of enrollment with a particular insurer, conditional on length of MA enrollment, does not affect beneficiaries’ risk scores: The estimated effect of remaining enrolled with the same insurer is 0.00. The coefficients on the enrollment year terms remain unchanged in this model. One possible reason for not finding that insurer duration has a separate effect on coding intensity is that MA duration and insurer duration are largely collinear—making it difficult to identify separate effects. However, as shown in Table S3, replacing MA duration with insurer duration alone yields a somewhat smaller coefficient, suggesting that the effect of enrollment may not be driven by continuous enrollment with a particular insurer.
The findings reported here differ from our prior work.27 Previously, we identified insurer duration based on contract alone. However, a wave of cancellations of PFFS plans forced people to switch contracts during our study period. Often, insurers that cancelled the PFFS plans replaced those products with PPOs or HMOs.28 As a result, many beneficiaries who switched contracts may have moved from a cancelled PFFS plan to the same insurer’s PPO or HMO. When we redefined the length of time with the insurer to reflect the parent organization (which would address those plan cancellations), the insurer duration term was no longer significant.
The difference in results between the two methods for identifying insurer duration likely stems from the fact that switches out of cancelled PFFS plans were classified as switching contracts, even when beneficiaries remained with the same insurer. Although enrollment in both PPOs and HMOs increased as the result of PFFS cancellations, the enrollment growth was faster for PPOs, suggesting that beneficiaries exiting PFFS plans were disproportionately enrolling in PPOs.x Therefore, longer contract duration was associated with HMO enrollment. It is plausible that HMOs would be better able to encourage greater coding intensity because they often have stronger provider relationships than other plan types.
We test this hypothesis by interacting our treatment variables with indicators of enrollment in HMO, PPO, and PFFS plans. Results from that model show that enrollment in HMO plans is associated with greater increases in coding intensity—coding intensity differences for beneficiaries enrolled in HMO plans in 2012 range from 9.4% to 15.5%, depending on duration of MA enrollment (See Figure 3).xi Similarly, enrollment in PFFS plans is generally associated with less coding intensity—coding differences for PFFS enrollees in 2012 range from 1.8% to 6.6%. These findings, together with those related to enrollment duration and MA penetration, suggest that provider documentation efforts play a key role in increasing coding intensity.
Figure 3.
Percentage increase in 2012 risk scores for beneficiaries by year of switch and plan type.
Source. Authors’ analysis of Medicare beneficiary summary file and risk adjustment data, 2008-2013.
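The plan-type analysis interacts the MA enrollment-year and duration terms with plan-type indicators. A sketch of that formula, extending the hypothetical Equation 1 specification above (indicator names are illustrative, not the authors’ SAS code):

```python
# Interact the MA-by-year and duration terms with plan-type indicators so that
# each plan type gets its own set of coding intensity effects.
plan_types = ["hmo", "ppo", "pffs"]
ma_terms = ["ma_2009", "ma_2010", "ma_2011", "ma_2012", "years_in_ma"]
interactions = " + ".join(f"{t}:{p}" for t in ma_terms for p in plan_types)

formula = (
    "risk_score ~ " + interactions +
    " + ma_penetration + full_dual + partial_dual + institutionalized"
    " + EntityEffects + TimeEffects"
)
```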
Decomposing Coding Differences
Our results suggest that increased coding differences over time reflect both increases in average risk scores occurring across all individuals in MA and increases in individual risk scores for enrollees who remain in MA. In Figure 4, we show the coding intensity increase for four cohorts of switchers from the time they switched through 2012.
Figure 4.
Percentage increase in risk scores for beneficiaries by year of switch and calendar year.
Source. Authors’ analysis of Medicare beneficiary summary file and risk adjustment data, 2008-2013.
In 2012, the coding intensity increase relative to stayers was 11.6% for the 2009 cohort but only 8.0% for the 2012 cohort. This suggests that, for the 2009 cohort, continuous enrollment in MA accounted for 31% of the 2012 risk score increase, while market-wide coding increases contributed the remaining 69%. By contrast, for the 2011 cohort, continuous enrollment in MA accounted for only 13% of the total risk score increase.
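That decomposition follows directly from the Table 1 estimates: the 2012 enrollment-year effect applies to every cohort, and the duration effect accumulates with each additional year since switching. A quick check of the shares quoted above:

```python
ENROLLED_2012 = 0.080  # "Enrolled in MA in 2012" coefficient (Table 1)
PER_YEAR = 0.012       # additional effect per year of continued MA enrollment (Table 1)

for cohort in (2009, 2010, 2011, 2012):
    extra_years = 2012 - cohort
    total = ENROLLED_2012 + PER_YEAR * extra_years
    share_from_duration = PER_YEAR * extra_years / total
    print(cohort, round(total, 3), round(share_from_duration, 2))
# 2009 cohort: total ~0.116 with ~31% from enrollment duration;
# 2011 cohort: total ~0.092 with ~13% from duration, matching the figures in the text.
```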
Overall, our results suggest that risk scores for switchers in our study population were, on average, 9.9% higher in 2012 than they would have been in FFS. Eighty percent of that difference was driven by the market-level effect of being enrolled in MA in that year, while the remaining 20% stemmed from length of enrollment in MA. If we apply our results to the entire population of Medicare beneficiaries enrolled in MA in 2012, then our results suggest that risk scores were, on average, 12.3% higher for those beneficiaries. Even then, only one-third of the difference was driven by length of enrollment in MA. For that calculation, we excluded enrollment in MA prior to 2006 because that is the first year for which the HCC-based risk adjustment model is fully phased in.xii (Fifteen percent of Medicare beneficiaries enrolled in MA in 2012 were new to MA that year, and 36% had been enrolled in MA since 2006. The remaining MA enrollees were roughly evenly distributed across the number of MA enrollment years.) That is a conservative estimate because the HCC model was phased in over the course of 4 years, but our results also may not apply to people who have been enrolled in MA for longer than 4 years or who enrolled in MA upon gaining eligibility for Medicare.
Discussion
In this article, we build on previous research by exploring the mechanisms by which MA enrollment affects the growth in beneficiary risk scores, using individual-level data to control for baseline differences in risk scores. We find that annual, across-the-board increases in coding intensity play a much larger role in the growing difference in risk scores between MA and FFS enrollees than duration of enrollment. The effect of MA enrollment does not appear to rely on continuous enrollment with the same insurer, though our effort to disentangle the effect of MA enrollment duration from length of enrollment with a particular insurer is limited by the fact that insurer duration and overall MA enrollment duration are effectively inseparable in our sample.
One limitation of this analysis is that we only follow MA enrollees for a maximum of 4 years. Therefore, it is unclear whether our estimates would be applicable as beneficiaries remained in MA beyond 4 years. Extending our current analysis to include new years of data would pose some methodological concerns because CMS has made several changes to the risk adjustment system between 2014 and 2017.25 As a result, incorporating newer data would require either shifting to the new risk adjustment model or continuing to use the old model even though coding incentives have changed, neither of which would be an apples-to-apples comparison to previous years. We did replicate the analysis for the years 2010 through 2014 (see Table S4 in the Supplemental Materials) and found that the coding differences were similar to what we observed in the study period and that they increased between 2010 and 2014.
A second limitation is the use of a nonrandom, self-selected study group—people who choose to switch from FFS to MA. Consequently, switchers leaving FFS may have inherently different health trajectories than those who stay in FFS. We attempted to address that possibility with several robustness checks and with the specifications that replaced the dependent variable with indicators for chronic conditions that are unlikely to be subject to coding intensity increases, following the Colla et al analysis.26 Those specifications found that switchers had no higher incidence of such conditions than stayers, but their health trajectories could still differ in unobservable ways.
Because increases in coding intensity translate directly into increased payments to MA plans, there have been many calls for reform to address either the increased documentation of MA enrollees’ conditions or the lack of diagnostic information among FFS beneficiaries. Kronick notes that “diagnostic reporting in FFS Medicare is woefully incomplete” and provides an example: Only 60% of FFS beneficiaries with quadriplegia in one year have a claim with quadriplegia in the subsequent year.29 Increased information on MA enrollees’ diagnoses is not in itself problematic; indeed, detailed information on conditions may help MA plans to better manage health care for their enrollees. However, because MA payments are based on the relationship between conditions and use of resources in the FFS system, differences in coding intensity may result in overpayments to MA plans.
The current policy for reducing coding-related overpayments to MA plans is to apply a coding intensity adjustment that reduces risk scores for all MA enrollees. However, that adjustment is smaller than the observed differences in coding intensity, and a flat annual adjustment treats all plans and beneficiaries equally even though coding intensity differences vary across beneficiaries. For example, Figure 5 shows that in 2012, the difference between the total coding intensity increase and the CMS adjustment was 8% for 2009 switchers and only 7% for 2010 switchers. Furthermore, our findings suggest that coding intensity varies across different types of MA plans.
Figure 5.
Differences between risk score increase and CMS adjustment for switchers by year of switch and calendar year.
Source. Authors’ analysis of Medicare beneficiary summary file and risk adjustment data, 2008-2013.
Note. CMS = Centers for Medicare & Medicaid Services.
The challenges associated with risk adjustment in MA may be instructive for other Medicare reforms and for additional public programs. In January 2015, CMS announced a goal of moving 30% of Medicare FFS payments into alternative payment models by 2016 and 50% of FFS payments into such systems by 2018.30 These models will require risk adjustment to function effectively and will face the same challenges that arise in the MA risk adjustment system. Furthermore, almost 80% of Medicaid beneficiaries were enrolled in a managed care plan in 2014,31 and risk adjustment is used in the health insurance marketplaces established by the Affordable Care Act.
As capitated payment models using risk adjustment become more prevalent in the health care market, the MA experience offers several useful lessons for implementing alternative payment mechanisms. First, risk adjustment creates incentives to thoroughly document health risks; that documentation may help health plans and providers manage enrollees’ health. Second, because the accumulation of data on enrollees allows for higher payments in some cases, risk adjustment may favor incumbents and disadvantage new entrants into insurance markets. Finally, the MA experience shows that risk adjustment requires information both on beneficiary characteristics and on the relationship between beneficiary characteristics and resource use. Insufficient information on either beneficiary characteristics or how beneficiary characteristics affect resource use is likely to introduce inequity and inefficiency into the risk adjustment system.
Supplemental Material
Supplemental material, Supplemental_Data for Medicare Advantage Enrollment and Beneficiary Risk Scores: Difference-in-Differences Analyses Show Increases for All Enrollees On Account of Market-Wide Changes by Tamara Beth Hayford and Alice Levy Burns in INQUIRY: The Journal of Health Care Organization, Provision, and Financing
Notes
i. MA plans also submit bids to provide prescription drug benefits under Part D—which has its own risk adjustment system—but this article analyzes risk scores derived from Part A and B benefits only.
ii. Payments for inpatient hospital care reflect both health status and services provided using diagnosis-related groups. CMS is also testing several demonstration programs that involve using HCCs to adjust provider payments.
iii. Encounter records are similar to claims but are submitted when the provider is paid under a capitated arrangement. The encounter records show the diagnoses and services rendered but are not the basis of reimbursement.
iv. Monthly enrollment files by contract/plan/state/county can be found here: https://www.cms.gov/Research-Statistics-Data-and-Systems/Statistics-Trends-and-Reports/MCRAdvPartDEnrolData/Monthly-Enrollment-by-Contract-Plan-State-County.html.
v. For more information on types of MA plans, see CMS, Medicare Managed Care Manual, Chapter 1—General Provisions (January 7, 2011), Section 20—Types of MA Plans, https://www.cms.gov/Regulations-and-Guidance/Guidance/Manuals/downloads/mc86c01.pdf.
vi. CMS uses separate models for community and institutional beneficiaries to account for significant cost differences between the two populations. For more information, see CMS, Medicare Managed Care Manual, Chapter 7—Risk Adjustment (January 7, 2011), pp. 11–13, https://go.usa.gov/xRJs6. However, we use the community risk score throughout to ensure that we are making an apples-to-apples comparison of the documentation of conditions across people and years.
vii. Although people’s ages change over time, they change at the same rate for each person. The meaningful difference across individuals is their age at the beginning of the analysis period, which does not change over time and is therefore also absorbed by the individual fixed effects.
viii. Although our model includes fixed effects, we do not cluster the standard errors in the main specification because the large sample size limited the processing capabilities of SAS, the analytic software we used. Using nonclustered standard errors might lead us to falsely conclude significance if we have underestimated actual standard errors. For that reason, we conducted a robustness check with a 5% random sample of the study population using clustered standard errors. That analysis yielded statistically significant results. (Results are available from the authors.)
ix. We also tested models with each of the other four conditions as the dependent variables and a model in which the dependent variable was the probability of having any of the four conditions. In all cases, the MA results were substantively insignificant. The results are available from the authors.
x. This is consistent with expectations: prior to the policy change, PFFS plans did not have to have provider networks. Beneficiaries who needed to enroll in a new plan because their PFFS plan was cancelled would be likely to prefer a plan with a broader provider network, and PPOs generally have broader networks than HMOs.
xi. Results are listed in Table S5 in the supplement. Unreported additional analyses confirm that the plan-type interaction terms are statistically different across plan types.
xii. The year 2007 is the first payment year for which the HCC model was fully phased in, but risk scores in that year were based on coding in 2006.
Author Note: This article has not been subject to CBO’s regular review and editing process. The views expressed here should not be interpreted as CBO’s. The authors thank Tom Bradley, Linda Bilheimer, Jeffrey Kling, Paul Masi, Lyle Nelson, and Daria Pelech for their helpful comments and suggestions.
Declaration of Conflicting Interests: The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding: The authors received no financial support for the research, authorship, and/or publication of this article.
ORCID iDs: Tamara Beth Hayford
https://orcid.org/0000-0002-0404-8744
Alice Levy Burns
https://orcid.org/0000-0003-4474-8558
Supplementary Material: Supplementary material is available for this article online
References
- 1. Kaiser Family Foundation. Medicare Advantage 2015 Spotlight: Enrollment Market Update. http://kff.org/medicare/issue-brief/medicare-advantage-2015-spotlight-enrollment-market-update/. Published June 30, 2015. Accessed July 3, 2018.
- 2. Medicare Payment Advisory Commission. Medicare Advantage Program Payment System. http://www.medpac.gov/docs/default-source/payment-basics/medpac_payment_basics_16_ma_final.pdf. Published October, 2016. Accessed July 3, 2018.
- 3. Centers for Medicare & Medicaid Services. Medicare Managed Care Manual, Chapter 7—Risk Adjustment. 2013. http://www.cms.gov/Regulations-and-Guidance/Guidance/Manuals/downloads/mc86c07.pdf. Published June, 2013. Accessed July 3, 2018.
- 4. RTI International. Evaluation of the CMS-HCC Risk Adjustment Model. CMS, Medicare Plan Payment Group. http://www.cms.gov/Medicare/Health-Plans/MedicareAdvtgSpecRateStats/Downloads/Evaluation_Risk_Adj_Model_2011.pdf. Published March, 2011. Accessed July 3, 2018.
- 5. Centers for Medicare & Medicaid Services. Advance Notice of Methodological Changes for Calendar Year (CY) 2010 for Medicare Advantage (MA) Capitation Rates and Part C and Part D Payment Policies. https://www.cms.gov/Medicare/Health-Plans/MedicareAdvtgSpecRateStats/downloads/Advance2010.pdf. Published February 20, 2009. Accessed July 3, 2018.
- 6. Government Accountability Office. Medicare Advantage: Substantial Excess Payments Underscore Need for CMS to Improve Accuracy of Risk Score Adjustments. GAO-13-206. http://www.gao.gov/products/GAO-13-206. Published January, 2013. Accessed July 3, 2018.
- 7. Medicare Payment Advisory Commission. The Medicare Advantage program: status report. In: Report to the Congress: Medicare Payment Policy. http://www.medpac.gov/docs/default-source/reports/chapter-13-the-medicare-advantage-program-status-report-march-2015-report-.pdf?sfvrsn=0. Published March, 2015. Accessed July 3, 2018.
- 8. Government Accountability Office. Medicare Advantage: Limited Progress Made to Validate Encounter Data Used to Ensure Proper Payments. GAO-17-223. https://www.gao.gov/products/GAO-17-223. Published January, 2017. Accessed July 3, 2018.
- 9. McGuire TG, Newhouse JP, Sinaiko AD. An economic history of Medicare Part C. Milbank Q. 2011;89(2):289-332.
- 10. Morgan RO, Virnig BA, DeVito CA, Persily NA. The Medicare-HMO revolving door—the healthy go in and the sick go out. N Engl J Med. 1997;337:169-175.
- 11. Newhouse JP. Pricing the Priceless: A Health Care Conundrum. Cambridge, MA: MIT Press; 2002.
- 12. Riley G, Zarabozo C. Trends in the health status of Medicare risk contract enrollees. Health Care Financ Rev. 2006;28(2):81-96.
- 13. Brown J, Duggan M, Kuziemko I, Woolston W. How does risk selection respond to risk adjustment? Evidence from the Medicare Advantage program. Am Econ Rev. 2014;104(10):3335-3364.
- 14. Chao Y, Wu C. Medicare HMO coverage selection and its impact on the accumulated health spending over the first four years of Medicare coverage in the United States. J Glob Heal Car Sys. 2013;3(2):1-17.
- 15. Jacobson GA, Neuman P, Damico A. At least half of new Medicare Advantage enrollees had switched from traditional Medicare during 2006–2011. Health Aff. 2015;34(1):48-55.
- 16. McWilliams JM, Hsu J, Newhouse JP. New risk-adjustment system was associated with reduced favorable selection in Medicare Advantage. Health Aff. 2012;31(12):2630-2640.
- 17. Morrisey MA, Kilgore ML, Becker DJ, Smith W, Delzell E. Favorable selection, risk adjustment, and the Medicare Advantage program. Health Serv Res. 2013;48(3):1039-1056.
- 18. Newhouse JP, McGuire TG. How successful is Medicare Advantage? Milbank Q. 2014;92(2):351-394.
- 19. Newhouse JP, Price M, Huang J, McWilliams JM, Hsu J. Steps to reduce favorable risk selection in Medicare Advantage largely succeeded, boding well for health insurance exchanges. Health Aff. 2012;31(12):2618-2628.
- 20. Newhouse JP, McWilliams JM, Price M, Huang J, Fireman B, Hsu J. Do Medicare Advantage plans select enrollees in higher margin clinical categories? J Health Econ. 2013;32(6):1278-1288.
- 21. Newhouse JP, Price M, McWilliams JM, Hsu J, McGuire TG. How Much Favorable Selection is Left in Medicare Advantage? National Bureau of Economic Research Working Paper No. 20021. http://www.nber.org/papers/w20021. Published March 2014. Accessed July 3, 2018.
- 22. Kronick R, Welch WP. Measuring coding intensity in the Medicare Advantage program. Medicare Medicaid Res Rev. 2014;4(2):E1–E19.
- 23. Medicare Payment Advisory Commission. The Medicare Advantage program: status report. In: Report to the Congress: Medicare Payment Policy. http://www.medpac.gov/docs/default-source/reports/chapter-12-the-medicare-advantage-program-status-report-march-2016-report-.pdf. Published March, 2016. Accessed July 3, 2018.
- 24. Geruso M, Layton TJ. Upcoding: Evidence From Medicare on Squishy Risk Adjustment. National Bureau of Economic Research Working Paper No. 21222. http://www.nber.org/papers/w21222. Published October, 2015. Accessed July 3, 2018.
- 25. Medicare Payment Advisory Commission. Status report on the Medicare Advantage program. In: Report to the Congress: Medicare Payment Policy. http://www.medpac.gov/docs/default-source/reports/mar17_medpac_ch13.pdf?sfvrsn=0. Published March, 2017. Accessed July 3, 2018.
- 26. Colla CH, Wennberg DE, Meara E, et al. Spending differences associated with the Medicare physician group practice demonstration. JAMA. 2012;308(10):1015-1023.
- 27. Burns AL, Hayford TB. Effects of Medicare Advantage Enrollment on Beneficiary Risk Scores. Congressional Budget Office Working Paper No. 2017-08. https://www.cbo.gov/publication/53270. Published November, 2017. Accessed July 3, 2018.
- 28. Pelech D. Dropped out or pushed out? Insurance market exit and provider market power in Medicare Advantage. J Health Econ. 2017;51:98-112.
- 29. Kronick R. Projected coding intensity in Medicare Advantage could increase Medicare spending by $200 billion over ten years. Health Aff. 2017;36(2):320-327.
- 30. Centers for Medicare & Medicaid Services. Better Care. Smarter Spending. Healthier People: Paying Providers for Value, Not Volume. https://www.cms.gov/Newsroom/MediaReleaseDatabase/Fact-sheets/2015-Fact-sheets-items/2015-01-26-3.html. Published January, 2015. Accessed July 3, 2018.
- 31. Mathematica Policy Research. Medicaid Managed Care Enrollment and Program Characteristics, 2014. Centers for Medicare & Medicaid Services; https://www.medicaid.gov/medicaid/managed-care/enrollment/index.html. Published Spring, 2016. Accessed July 3, 2018.