Health Services Research. 2020 Nov 9;56(2):178–187. doi: 10.1111/1475-6773.13591

The effects of coding intensity in Medicare Advantage on plan benefits and finances

Paul D Jacobs 1, Richard Kronick 2
PMCID: PMC7969203  PMID: 33165932

Abstract

Objective

To assess how beneficiary premiums, expected out‐of‐pocket costs, and plan finances in the Medicare Advantage (MA) market are related to coding intensity.

Data Sources/Study Setting

MA plan characteristics and administrative records from the Centers for Medicare and Medicaid Services (CMS) for the sample of beneficiaries enrolled in both MA and Part D between 2008 and 2015. Medicare claims and drug utilization data for Traditional Medicare (TM) beneficiaries were used to calibrate an independent measure of health risk.

Study Design

Coding intensity was measured by comparing the CMS risk score for each MA contract with a contract level risk score developed using prescription drug data. We conducted regressions of plan outcomes, estimating the relationship between outcomes and coding intensity. To develop prescription drug scores, we assigned therapeutic classes to beneficiaries based on their prescription drug utilization. We then regressed nondrug spending for TM beneficiaries in 2015 on demographic and therapeutic class identifiers for 2014 and used the coefficients to predict relative risk.

Principal Findings

We found that, for each $1 increase in potential revenue resulting from coding intensity, MA plan bid submissions declined by $0.10 to $0.19, and another $0.21 to $0.45 went toward reducing plans’ medical loss ratios, an indication of higher profitability. We found only a small impact on beneficiaries’ projected out‐of‐pocket costs in a plan, which serve as a measure of the generosity of plan benefits, and a $0.11 to $0.16 reduction in premiums. As expected, coding intensity's effect on bids was substantially larger in counties with higher levels of MA competition than in less competitive counties.

Conclusions

While coding intensity increases taxpayers’ costs of the MA program, both enrollees and plans benefit, with larger gains accruing to plans. The adoption of policies that more completely adjust for coding intensity would likely affect both beneficiaries and plan profits.

Keywords: clinical coding, cost sharing, insurance premiums, managed competition, Medicare Advantage


What is Already Known on this Topic

  • In recent years, differences between Medicare Advantage (MA) and Traditional Medicare (TM) in patterns of coding caused risk scores in MA to be approximately 7% to 10% higher than they would have been if the same beneficiary were receiving services in TM.

  • In 2018, the coding intensity adjustment applied by CMS was 5.91%. Thus, coding intensity in excess of the coding intensity adjustment potentially increased MA revenue by 1%‐4%.

  • However, virtually nothing is known about what MA plans do with this potential revenue increase. How much of it do they return to beneficiaries in extra benefits? How much do they retain as extra profit?

What This Study Adds

  • We develop a method for calculating coding intensity that estimates beneficiaries’ health risk from prescription drug utilization data, which are not subject to the inconsistent reporting of medical diagnoses across the MA and TM sectors.

  • Using these prescription drug scores, our estimates are the first to show that insurers with higher levels of coding intensity both lower beneficiary premiums and increase their expected profitability through lower medical loss ratios.

1. INTRODUCTION

When governments contract with private insurers to provide health benefits, insurer payments are often adjusted to compensate for differences in the health risk of enrollees. Adjusting for enrollee risk compensates insurers for their underlying costs, thereby encouraging participation in public insurance programs. Additionally, risk adjustment helps to dampen insurer incentives to design benefits or cost‐sharing to dissuade high‐risk enrollees or attract low‐risk ones. 1

There is no other setting in the US health system where risk adjustment has been studied as closely as the Medicare Advantage (MA) program. 2 , 3 , 4 , 5 Because plans are paid on the basis of reported diagnoses, they have a strong incentive to encourage healthcare providers and administrators to report as many diagnoses as possible to maximize their revenues—a dynamic that does not exist for beneficiaries in Traditional Medicare (TM). Through a combination of strategies including in‐home health assessments by nurses and retrospective chart reviews, MA plans have been successful in increasing the reported risk scores of their enrollees compared with those in TM. 3 , 5

Researchers have consistently shown that risk scores for MA enrollees are higher than they would be if those same beneficiaries were enrolled in TM. But we are unaware of any studies that show how coding intensity has changed the provision of MA plan benefits. Specifically, when insurers increase risk scores by coding more intensely than in TM, their revenues increase without corresponding changes in the risk profile of their enrollees, although plan costs may simultaneously rise to implement the more intense coding practices. Any net revenues from coding may enable insurers to offer additional benefits or lower premiums in the hopes of attracting enrollees. Alternatively, insurers can retain the surplus as profits as long as they are not constrained by competitive forces or medical loss ratio (MLR) requirements.

In theory, the current MA risk adjustment system appropriately pays insurers for their expected risk of enrollee spending because payments are adjusted for the demographic characteristics and medical conditions that plans report (age, gender, Medicaid status, institutionalized status, and a series of medical condition codes). In practice, evidence suggests that private insurers document and report medical conditions for their enrollees more thoroughly than occurs for enrollees in TM. Research suggests that the size of this coding intensity effect may cause risk scores for MA enrollees to be from 7 to 10 percent higher than would be expected if the same beneficiaries were in TM. 5 , 6

Corporate strategies for maximizing revenues from higher risk scores will likely differ across insurers, depending on their information technology resources, level of control over providers, and the potential for finding additional diagnoses among their enrollees. To pursue these revenues, insurers use strategies including in‐home health risk assessments, retrospective reviews of medical records, and various investments in information technology. 7

Each year, plans submit an estimate of the monthly revenues they require to cover their costs of providing Medicare benefits to a beneficiary in average health in the county, referred to as the plan's “bid.” The Centers for Medicare and Medicaid Services (CMS) compares the plan's bid to the benchmark for the county, where the benchmark is an estimate of the amount that fee‐for‐service Medicare spends for a beneficiary in average health. If a plan's bid is below the benchmark in that county, then the plan is paid its bid plus a specified percentage of the difference between the administrative benchmark and its bid (referred to as the plan's “rebate”). Plans are required to use rebated amounts to provide additional benefits to enrollees either in the form of additional coverage (eg, dental or vision care), lower cost‐sharing (eg, lower deductibles or copays), or lower premiums (eg, lower Part D prescription drug premiums).

When a plan's bid is below the local benchmark, beneficiaries are not required to pay any additional premium to enroll in the MA plan. When plans bid above the benchmark, they must charge enrollees the difference between the bid and the benchmark as an additional enrollee premium to cover their projected costs. CMS adjusts plan payments by their risk score, which is referred to as the Hierarchical Condition Code (HCC) score.
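The bid, benchmark, and rebate mechanics above can be summarized in a stylized sketch. This is a simplification for illustration only: the function name is ours, the actual rebate share varies with plan star ratings, and the real CMS payment rules apply risk adjustment in more detail than shown here.

```python
def ma_payment(bid, benchmark, rebate_share=0.66, risk_score=1.0):
    """Stylized sketch of MA bid/benchmark mechanics (not the actual CMS rules).

    Assumptions: a single rebate share and risk adjustment applied
    uniformly to the base payment."""
    if bid <= benchmark:
        rebate = rebate_share * (benchmark - bid)  # must fund extra benefits
        supplemental_premium = 0.0                 # no added enrollee premium
    else:
        rebate = 0.0
        supplemental_premium = bid - benchmark     # charged to enrollees
    cms_payment = risk_score * min(bid, benchmark) + rebate
    return cms_payment, rebate, supplemental_premium

# A plan bidding $800 against a $900 benchmark keeps its bid plus a rebate:
payment, rebate, premium = ma_payment(800.0, 900.0)
```

Under this sketch, a higher reported risk score scales the payment directly, which is the channel through which coding intensity raises potential revenues.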

Economics provides some intuition for the relationship we should expect between coding intensity and plan behavior. While coding increases plan revenues, insurers also incur costs to obtain codes, including investments in information technology to track beneficiaries and supplying in‐home nursing assessments and chart reviews. In a perfectly competitive market, insurers will invest in coding until the marginal costs of these investments equal their marginal return. Unfortunately, reliable data on the nature of the cost function for coding Medicare Advantage enrollees are sparse, so the profitability of coding is not clear. But, given that coding likely has some marginal costs, in a perfectly competitive market, we should expect some, although not all, of the increased revenues from coding to show up as reductions in bids, with insurers using the larger rebates to provide additional benefits to attract and retain beneficiaries. If an MA plan did not lower its bid in response to more intense coding, and instead used net revenues to enhance profits, other plans would be able to attract enrollees with extra benefits, and the MA plan that attempted to increase profit margins would lose market share.

However, evidence suggests MA insurers do not behave in ways that models of perfect competition would predict. 8 Although plans should bid competitively irrespective of the benchmark, research shows that plans adjust bids in response to benchmark changes, with effects proportional to the competitiveness of markets. 8 Similarly, we hypothesize that the degree to which insurers pass through increased net revenues from coding intensity to bids will be proportional to the level of competition.

2. METHODS

We measure coding intensity as the difference between a plan's HCC risk score and a measure of underlying health risk as proxied by prescription drug use. This measure assumes that the utilization of prescription drugs is not subject to the same inflationary pressures that incentivize greater reporting of diagnosis codes for Parts A and B. We hypothesized that more intense coding—larger differences between HCC and prescription drug scores—would be associated with differences in plan benefits and finances as detailed below.

2.1. Prescription drug‐based risk score

Our independent measure of health risk is a prescription drug‐based risk score that measures the relative health risk of Medicare enrollees. The measure was constructed by decomposing how prospective Medicare claims for hospital and ambulatory services are related to the therapeutic classes associated with prescription drug utilization. 9

We extracted National Drug Codes from the 100 percent Part D claims data and linked each claim's codes to at least one Rx‐Defined Morbidity Group (Rx‐MG) using version 11.1 of the Johns Hopkins Adjusted Clinical Group System (ACG). Rx‐MGs contain therapeutic indicators for chronic diseases such as diabetes, HIV/AIDS, and liver disease, as well as some acute symptoms and diseases, including infections, severe pain, and tuberculosis.

We developed a prospective risk adjustment model using Rx‐MGs as predictors of Medicare spending. For Part D enrollees in TM throughout all of 2014, we regressed Medicare‐paid Part A and Part B spending in 2015 (from the 100 percent Standard Analytic Files for Parts A and B) on these Rx‐MGs. We included the same demographic variables CMS uses in the development of the HCC including: 12 age groups for each gender, whether beneficiaries obtained Medicare originally because of a disability (interacted with gender), and whether beneficiaries were eligible for full or partial Medicaid coverage (each interacted with gender). For additional model details, see Appendix S1; for variable means, see Appendix S2.
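The prospective model above can be sketched with toy data: regress year‑2 spending on year‑1 therapeutic‑class indicators plus demographic controls, then score relative risk from the fitted predictions. All names, dimensions, and coefficients below are illustrative stand‑ins, not the paper's data or its actual covariate set.

```python
import numpy as np

# Toy stand-in for the prospective Rx-MG model: regress year-2 Medicare
# spending on year-1 drug-class flags and a demographic dummy, then
# express predictions as relative risk. Numbers are illustrative only.
rng = np.random.default_rng(0)
n, k = 1000, 5
rx_mg = rng.integers(0, 2, size=(n, k)).astype(float)   # 2014 Rx-MG indicators
female = rng.integers(0, 2, size=(n, 1)).astype(float)  # demographic control
design = np.hstack([np.ones((n, 1)), female, rx_mg])
true_beta = np.array([5000.0, 300.0, 800.0, 2500.0, 400.0, 1200.0, 600.0])
spend = design @ true_beta + rng.normal(0.0, 500.0, n)  # 2015 Part A/B spending

beta, *_ = np.linalg.lstsq(design, spend, rcond=None)   # OLS fit
predicted = design @ beta
relative_risk = predicted / predicted.mean()            # drug-based risk score
```

Dividing predictions by their mean yields a relative‑risk score centered at 1, analogous to how risk scores are normalized against an average‑health beneficiary.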

After developing the model using 2014 and 2015 data, we divided the population of Medicare beneficiaries into two‐year cohorts over the 2007 to 2015 period, where beneficiaries must have been enrolled in Medicare Part D for the entire first year of each two‐year period. We then generated predictions of the relative risk of Medicare spending for all MA and TM beneficiaries enrolled in Part D from 2007 through 2014 (corresponding to their prospective risk from 2008 through 2015).

For the regression analyses of MA plan outcomes, we limited the sample to beneficiaries enrolled in MA as of July in the second year of each cohort. We excluded beneficiaries: (a) with end‐stage renal disease; (b) in CMS demonstration projects; (c) using hospice care; and (d) for whom Medicare was a secondary payer. Our final sample varies depending on the year from 7.2 million beneficiaries enrolled in MA in 2008 to 13.8 million in 2015. Appendix S3 provides additional details of our sample selection.

Because we assigned morbidity using prescription drug claims, our measure of relative risk does not depend on plan‐reported medical diagnoses. Risk adjustment models that rely exclusively on pharmaceutical utilization are comparable in performance to those using medical diagnoses. 10 Because MA plans may influence how providers prescribe medicine compared with stand‐alone Part D drug plans, in Appendix S4, we validated the robustness of the prescription drug‐based estimates by excluding pharmaceuticals for which physicians have discretion in prescribing behavior. Because HCC and drug scores might identify different components of spending and thus partially reflect risk selection rather than coding intensity, in a supplemental analysis, we added the prescription drug‐based risk score to a regression of Medicare claims on the HCC score, finding that the R‐squared increased only marginally from 0.123 to 0.125. A similar result was shown in an analysis sponsored by the Society of Actuaries. 11

2.2. Measures of coding intensity

We measured coding intensity in two different ways. First, we included the CMS‐HCC risk score as well as the prescription drug‐based score for each contract as independent variables, under the assumption that higher HCC scores holding prescription drug scores constant indicated greater coding intensity. By including both the HCC and RxMG scores, this approach allowed for a flexible functional form. Second, we defined coding intensity by dividing each contract's average HCC risk score by the corresponding average for prescription drug‐based risk scores and then dividing this ratio by the HCC‐to‐RxMG ratio for the population enrolled in TM in each beneficiary's county. Below we refer to this model as the “HCC‐RxMG standardized difference” approach. This approach adjusts for local variation in prescription drug utilization, care delivery patterns, and local differences in selection between MA and TM by expressing HCC and prescription drug score differences relative to those differences among TM beneficiaries in a local area.
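The second, standardized‑difference measure reduces to a ratio of ratios. A minimal sketch (the function name is ours, not the paper's):

```python
def standardized_coding_intensity(hcc_ma, rx_ma, hcc_tm, rx_tm):
    """Contract-level HCC-to-RxMG ratio divided by the same ratio among
    TM beneficiaries in the contract's counties."""
    return (hcc_ma / rx_ma) / (hcc_tm / rx_tm)

# An MA contract whose HCC scores run 10% above its drug-based scores,
# in counties where TM shows no such gap:
standardized_coding_intensity(1.10, 1.00, 1.05, 1.05)  # ≈ 1.10, ie 10% excess coding
```

Dividing by the local TM ratio is what absorbs area‑level differences in prescribing and care delivery, so a value above 1 reflects coding in excess of local TM patterns rather than sicker enrollees.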

We refer below to increases in an MA plan's “potential” revenues arising from coding intensity, because plans bidding below the benchmark may lower their bids in response to increases in coding intensity, and increases in rebates will, by statute, not make up the difference.

2.3. Outcome variables

We tested the effects of coding intensity on four plan characteristics that can directly affect beneficiaries: (a) plan bids; (b) plan rebates; (c) enrollee premiums; and (d) expected out‐of‐pocket (OOP) cost‐sharing. We also assessed effects on measures of insurer‐reported finances using MLR data: (a) MLR; (b) revenues; (c) costs; and (d) revenues in excess of costs. We present results using cross‐sectional models calculated at the contract level and at the county level (described below). We denominated outcome variables and coding intensity measures in both levels and logs.

As mentioned above, we expect a negative relationship between coding intensity and bids. With greater coding intensity, plans bidding below their local benchmarks could obtain higher rebates, which would translate into lower enrollee premiums or lower enrollee cost‐sharing. Plans bidding above their local benchmarks would reduce their enrollee premium if they reduced their bids.

For bids and rebates, we relied on publicly available CMS data summarized at the county and plan type level. 12 We assessed coding intensity effects on risk‐standardized bids and rebates to remove any mechanical relationship between those measures and plan‐reported risk scores. We assigned bid and rebate averages to beneficiaries based on the county they resided in during their last month of MA enrollment in each calendar year as well as the type of plan in which they were enrolled (health maintenance organization (HMO), private fee‐for‐service plan (PFFS), local or regional preferred provider organization (LPPO/RPPO), or special needs plan (SNP)).

We obtained data on each plan's enrollee premiums and expected OOP cost‐sharing for enrollees. 13 We included any supplemental enrollee premium when the MA plan bid exceeded the local benchmark(s). We included Part D premiums because beneficiaries without Part D are excluded from our sample and rebates can be used to offset these costs. CMS produces estimates of the expected OOP spending that beneficiaries would face in each plan using details about plan deductibles, out‐of‐pocket maximums, and other cost‐sharing information. CMS provides estimates for beneficiaries in various states of self‐reported health (excellent, very good, good, fair, and poor). To generate a single summary statistic for OOP cost‐sharing, we used the CMS estimate for beneficiaries in “good” health status. Previous research assessing similar outcomes found that the choice of health status when analyzing differences in OOP across plans does not typically affect results. 14

Because plans may retain revenues not spent to reduce bids or to finance costs associated with more intense coding, we also explored effects on contract‐level finances including the MLR and the difference between revenues and costs. Although using the MLR as an outcome will not distinguish between changes in administrative costs and profits, other research has established a connection between lower MLRs and higher margins both for insurers with larger market power and for individual market insurers subject to the minimum MLR requirements. 15 , 16 We also examined effects on contract revenues, costs, and claims costs. Analyses based on the MLR data were only available for 2014.

2.4. Addressing the endogeneity of health risk

Our preferred results were from analyses of outcomes at the contract level because, as noted earlier, coding intensity results from strategies pursued by insurers, and insurers may well differ in whether and how they employ those strategies. However, insurers also set benefits and premiums with the expectation of attracting beneficiaries with a certain health profile. To address this, in Appendix S5, we analyze whether county‐level outcomes were related to county‐specific measures of coding intensity, removing risk selection between MA plans as a potential explanation of our results. We also included health risk measures defined across the combined MA and TM populations in each county. County‐level analyses addressed the possibility that selection between the MA and TM sectors drives the results, because it is very unlikely that Medicare beneficiaries choose their county of residence based on the relative attractiveness of MA. We found that counties with larger differences between CMS‐HCC scores and prescription drug‐based risk scores had lower bids and more generous benefits; these results mirrored our contract‐level estimates, suggesting that selection between plans did not spuriously affect our estimates.

2.5. Identification

The validity of our empirical approach assumes that our coding measure is independent of other potential determinants of plan outcomes. Because a relationship between benchmarks and bids has been previously established, 8 we included the CMS‐specified benchmark as a control variable. And because both bids and coding are likely influenced by the plan's provider network and identifiable characteristics of the enrolled population, we included control variables for: whether the contract had mostly (75 percent or more) HMO, LPPO, RPPO, or PFFS plans; whether the contract had mostly (75 percent or more) SNP plans; whether the contract had most of its enrollment (50 percent or more) in one of nine Census regions; and quartile identifiers for the percentage of each contract's Medicare population that is enrolled in the Part D Low‐Income Subsidy program. Year‐specific fixed effects were included in all models. While higher drug scores could be a result of greater generosity of the Part D prescription drug plan, in Appendix S6, we show that the score is not related to whether the beneficiary enrolled in supplemental coverage to cover cost‐sharing.

2.6. Supplemental analyses

Insurers are likely to pass back more of their net revenues to richer benefits or lower premiums in areas with greater competition for enrollees. To test this theory, we explored whether the results differed by the local market's competitiveness as measured by the weighted average of each county's Herfindahl‐Hirschman Index (HHI) for the concentration of MA insurers. HHI was defined as the sum of the squares of each insurer's market share in a county. The Department of Justice considers markets with an HHI above 2500 to be “highly concentrated.” 17 As a conservative approach, we used a cutoff of 3000 to identify markets where MA insurers may have significant market power.
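The HHI definition above is a simple sum of squared shares; a minimal sketch with hypothetical shares:

```python
def hhi(market_shares):
    """Herfindahl-Hirschman Index: sum of squared insurer market shares,
    with shares in percentage points (conventional 0-10,000 scale)."""
    return sum((100.0 * s) ** 2 for s in market_shares)

# Three insurers with 50%, 30%, and 20% of a county's MA enrollment:
hhi([0.5, 0.3, 0.2])  # ≈ 3800, above the 3000 cutoff used in the analysis
```

A county with a single insurer scores 10,000; more evenly split markets score lower, which is why the paper treats counties below its 3000 cutoff as competitive.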

3. RESULTS

3.1. Geographic variation in coding intensity

Coding intensity in 2015 varied widely across the United States as measured by our HCC‐RxMG standardized difference approach. Figure 1 shows considerable variation in the state‐specific HCC‐to‐RxMG ratios. The three states with the largest HCC‐to‐RxMG ratios were: Alaska (1.223), Nevada (1.211), and Georgia (1.151). These levels of coding intensity were roughly 15 to 25 percent higher than in the three states with the smallest ratios: Minnesota (0.963), Hawaii (0.978), and New York (0.997), where the HCC‐to‐RxMG ratios among MA enrollees were lower than or roughly equivalent to those ratios among TM beneficiaries. There was a distinct regional pattern to coding intensity, with more coding intensity in the South, Southwest, and West Coast, and less coding in the Middle Atlantic, New England, and Upper Midwest states. Using this measure of coding intensity, the national average in 2015 was 1.077.

FIGURE 1. Coding intensity in Medicare Advantage, by state, 2015. Note: Coding intensity is defined as the ratio of CMS‐HCC risk scores to prescription drug‐based risk scores in the Medicare Advantage population, relative to the same ratio of scores in the Traditional Medicare population in each state in 2015. This “Standardized Difference” definition of coding intensity is explained in detail in the text.

Many of the states with high levels of coding intensity in 2015 also experienced high rates of growth between 2008 and 2015 (Figure 2). Compared to the national average annual percentage point growth in HCC‐to‐RxMG risk scores of 1.1%, the states with the highest growth in their HCC‐to‐RxMG risk score ratios were: Nevada (5.7%), Alaska (4.1%), California (3.5%), Colorado (3.1%), Rhode Island (3.0%), and Florida (3.0%).

FIGURE 2. Growth in coding intensity in Medicare Advantage, by state, 2008‐2015. Note: Coding intensity is defined as the ratio of CMS‐HCC risk scores to prescription drug‐based risk scores in the Medicare Advantage population, relative to the same ratio of scores in the Traditional Medicare population in each state. Growth in coding intensity is expressed as the average annual percentage point change in this “Standardized Difference” definition. For details, see text.

3.2. Coding intensity's effect on plan characteristics at the contract level

Table 1 summarizes four model specifications for calculating effects on plan characteristics: level and log versions of both the model with HCC and RxMG scores included separately and the HCC‐RxMG standardized difference model. Results in Table 1 display the marginal change from a $1 increase in potential contract revenues resulting from increased coding. Depending on the model, we found bids fell between $0.10 and $0.19 for each $1 increase in potential revenues from coding. Corresponding to this reduction in bids, rebates increased between $0.06 and $0.15 for each $1 increase in potential revenues.

TABLE 1.

Changes in Medicare Advantage plan characteristics associated with a $1 increase in potential revenues from coding intensity, 2008‐2015

| Outcome | HCC and RxMG scores included separately (levels) | HCC‐RxMG standardized differences (levels) | HCC and RxMG scores included separately (log‐log) | HCC‐RxMG standardized differences (log‐log) |
|---|---|---|---|---|
| Bids | −$0.10*** | −$0.18*** | −$0.14*** | −$0.19*** |
| Rebates | $0.06*** | $0.12*** | $0.11*** | $0.15*** |
| Premiums | −$0.11*** | −$0.16*** | −$0.13*** | −$0.15*** |
| Expected OOP costs | −$0.03*** | −$0.02** | $0.00 | $0.00 |
| Sample size | 4470 | 4470 | 4470 | 4470 |

*** P < .01; ** P < .05; * P < .10; RxMG = Prescription Drug Morbidity Group risk score. Estimates derived from ordinary least squares regressions of the outcome variable on either the CMS Hierarchical Condition Code (HCC) risk score variable holding prescription drug‐based scores constant or the ratio of HCC to RxMG scores in the MA population divided by the ratio of those same scores in the TM population in the counties where beneficiaries in the MA contract resided. For details on the methodology developing these scores, see main text. All regressions include year‐specific fixed effects, the local benchmark, whether the contract mostly had plans of the same plan type (eg, health maintenance organization), whether the contract mostly had special needs plans, indicators for nine Census regions, and the percentage of beneficiaries enrolled in the low‐income subsidy program for Part D benefits.

We found that each potential additional dollar earned from coding intensity was associated with a reduction in premiums of between $0.11 and $0.16. We separately computed coding's effect on the supplemental MA premium for plans bidding above the benchmark—excluding any effect on Part D premiums—and found that, across all models, approximately half the effect of coding intensity lowered supplemental MA premiums and the remaining half lowered Part D premiums (not shown). Results in Table 1 show a reduction in expected OOP of $0.03 or less for each potential additional dollar from coding.

For each additional dollar of potential revenues from coding intensity, contract‐reported revenues went up by $0.52 to $0.60, and costs also rose, but by $0.18 to $0.32 less than revenues (Table 2). Increases in claims costs explained nearly the entire increase in overall costs. For each potential dollar received due to coding intensity, between $0.21 and $0.45 went toward reducing MLRs.

TABLE 2.

Changes in Medicare Advantage plan financial characteristics associated with a $1 increase in potential revenues from coding intensity, 2014

| Outcome | HCC and RxMG scores included separately (levels) | HCC‐RxMG standardized differences (levels) | HCC and RxMG scores included separately (log‐log) | HCC‐RxMG standardized differences (log‐log) |
|---|---|---|---|---|
| Revenues (PMPY) | $0.52*** | $0.60*** | $0.60*** | $0.56*** |
| Costs (PMPY) | $0.31*** | $0.28*** | $0.40*** | $0.29*** |
| Claims costs (PMPY) | $0.31*** | $0.26*** | $0.40*** | $0.29*** |
| Revenues in excess of costs (PMPY) | $0.20*** | $0.32*** | $0.18*** | $0.20*** |
| Medical loss ratio | −$0.21*** | −$0.41*** | −$0.31*** | −$0.45*** |
| Sample size | 419 | 419 | 419 | 419 |

*** P < .01; ** P < .05; * P < .10; RxMG = Prescription Drug Morbidity Group risk score. Estimates derived from ordinary least squares regressions of the outcome variable on either the CMS Hierarchical Condition Code (HCC) risk score variable holding prescription drug‐based scores constant or the ratio of HCC to RxMG scores in the MA population divided by the ratio of those same scores in the TM population in the counties where beneficiaries in the MA contract resided. For details on the methodology developing these scores, see main text. Regressions include the local benchmark, whether the contract mostly had plans of the same plan type (eg, health maintenance organization), whether the contract mostly had special needs plans, indicators for nine Census regions, and the percentage of beneficiaries enrolled in the low‐income subsidy program for Part D benefits.

Contracts where the average HHI was less than 3000 lowered bids by between $0.11 and $0.21 for each additional $1 from coding, whereas contracts in less competitive counties—with HHIs of 3000 or higher—lowered bids by at most $0.10 (Table 3). For each additional $1 in potential revenue from coding intensity, premiums fell by more in contracts operating in competitive counties (between $0.14 and $0.20) than in less competitive counties (between $0.02 and $0.04).

TABLE 3.

Changes in Medicare Advantage plan characteristics associated with a $1 increase in potential revenues from coding intensity, by level of parent company concentration in county, 2008‐2015

| Outcome | HCC and RxMG scores included separately (levels) | HCC‐RxMG standardized differences (levels) | HCC and RxMG scores included separately (log‐log) | HCC‐RxMG standardized differences (log‐log) |
|---|---|---|---|---|
| HHI < 3000 across counties | | | | |
| Bids | −$0.11*** | −$0.20*** | −$0.16*** | −$0.21*** |
| Rebates | $0.08*** | $0.14*** | $0.15*** | $0.19*** |
| Premiums | −$0.14*** | −$0.20*** | −$0.16*** | −$0.17*** |
| Expected OOP costs | −$0.04*** | −$0.04*** | −$0.01 | $0.00 |
| HHI ≥ 3000 across counties | | | | |
| Bids | −$0.04*** | −$0.09*** | −$0.04*** | −$0.10*** |
| Rebates | −$0.01* | $0.05*** | $0.03*** | $0.09*** |
| Premiums | −$0.03*** | −$0.02 | −$0.03*** | −$0.04** |
| Expected OOP costs | $0.00 | −$0.01 | $0.00 | −$0.02* |

*** P < .01; ** P < .05; * P < .10; RxMG = Prescription Drug Morbidity Group risk score; HHI = Herfindahl‐Hirschman Index. Estimates derived from ordinary least squares regressions of the outcome variable on either the CMS Hierarchical Condition Code (HCC) risk score variable holding prescription drug‐based scores constant or the ratio of HCC to RxMG scores in the MA population divided by the ratio of those same scores in the TM population in the counties where beneficiaries in the MA contract resided. For details on the methodology developing these scores, see main text. All regressions include year‐specific fixed effects, the local benchmark, whether the contract mostly had plans of the same plan type (eg, health maintenance organization), whether the contract mostly had special needs plans, indicators for nine Census regions, and the percentage of beneficiaries enrolled in the low‐income subsidy program for Part D benefits.

3.3. Robustness to alternative models

While bids fell between $0.10 and $0.19 for each $1 increase in potential revenues from coding when estimated at the contract level (Table 1), they fell between $0.17 and $0.24 when estimated at the county level, where the issue of health selection by beneficiaries should be muted or nonexistent (Appendix S5). Plan rebates increased between $0.06 and $0.15 for each $1 increase in potential revenues when estimated at the contract level and between $0.10 and $0.17 across the six county‐level models. Appendix S5 also confirmed coding intensity's larger effect on premiums than on OOP costs. Most importantly, our estimates were not sensitive to defining risk scores among all Medicare enrollees (column 5), reducing the likelihood that our results are a spurious result of biased selection in Medicare.

4. DISCUSSION

Previous research has clearly documented that MA plans code health conditions for their beneficiaries more intensely than those conditions are coded in TM. However, prior research has offered few insights into how this increased coding affects plan bids, benefits, premiums, or revenues. Our paper is the first we are aware of to establish a link between coding intensity and the level of benefits and premiums that beneficiaries face.

We found that MA contracts used a portion of the revenues from increased coding intensity to reduce bids by between $0.10 and $0.19 for every extra dollar of potential revenue. Bid reductions mostly translated into lower beneficiary premiums rather than lower OOP costs, a finding consistent with previous work on the pass‐through of increased payments to MA plans. 18 Premium reductions came, in roughly equal proportion, from lower supplemental MA premiums among plans bidding above the benchmark and from plans using rebates to lower Part D premiums. Larger premium effects may indicate that: (a) enrollees are sensitive to premiums, perhaps because premiums are more salient to beneficiaries than the cost‐sharing features of their plans 19 ; or (b) more generous cost‐sharing may hurt a plan's risk selection profile, so insurers avoid passing back larger rebates to lower OOP spending.

We also found that between $0.21 and $0.45 went toward reducing MLRs in response to each potential $1 from coding intensity. These estimates are roughly twice the magnitude of the effects on bids, suggesting that revenues from coding intensity accrue to MA insurers at higher rates than they are passed through to beneficiaries. This is consistent with earlier evidence suggesting that limited competition in the MA market enhances profitability. 18 , 20 Further, our results suggest that contracts in competitive counties disproportionately used net revenues from coding intensity to reduce beneficiary costs compared with contracts in more consolidated counties. By showing how net revenues from coding intensity are allocated in the form of bids, rebates, benefits, and plan finances, including the MLR and underlying costs, our findings add to the literature on competition by linking market power in MA to the extent of pass‐through to benefits. 8 , 18 , 21

Additionally, we found that MA plan costs were higher by between $0.28 and $0.40 for each potential dollar of revenue increase from coding intensity. One interpretation is that prescription drug risk scores do not precisely measure risk, and that variation in the CMS‐HCC score, controlling for the prescription drug score, at least partially reflects variation in the morbidity of enrollees. Alternatively, because plans incur costs to find additional diagnoses, costs are higher in plans that code more intensely. To achieve more thorough coding, plans may pay providers higher rates to document those diagnoses, which could explain why the association between coding and costs was nearly entirely explained by increases in claims costs.

By relying on the 100% Part D claims files, we constructed a measure of health risk that is independent of the intensity of medical coding associated with HCC risk scores and is consistent for MA and TM beneficiaries. However, the prescription drug‐based score may not perfectly identify actual health risk, and therefore our estimates of the effect of HCC scores conditional on RxMG scores may be subject to measurement error. Some MA plans, particularly those that are more highly integrated, may affect drug prescribing patterns, and thus influence our estimated prescription‐based risk scores. Importantly, while we attempted to control for some observable determinants of plan outcomes that could be associated with intensity of coding, including type of network arrangement, our estimates may be biased by other determinants we were not able to identify. For instance, because health risk is positively correlated with our measure of coding intensity, any residual health risk not identified by the prescription drug score suggests our estimates of pass‐through are lower bounds.

Additionally, we did not have access to contract‐ or plan‐specific bidding and rebate data, and instead relied on county‐level averages at the plan type level. Note, though, that our results for premiums and OOP spending, which were derived from plan‐level data, are consistent with outcomes measured at the county level. The possibility that plans design benefits to attract and retain enrollees with certain characteristics implies that health risk coefficients may be biased. We addressed this issue by analyzing data at the county level and redefining health risk measures using the entire MA and TM populations at the county level. Finally, several years have passed since the last year of our analysis (2015), adding uncertainty about how insurers have responded to more current incentives to code and to pass through any resulting net revenues.

Given the potential financial impact of coding intensity, it is important to understand that plans appear to be using at least some of the revenues from excess coding to lower bids and to enhance benefits. While reductions to local benchmarks after the passage of the Affordable Care Act were expected to substantially reduce enrollment in MA, 22 enrollment growth has in fact been quite robust. More generous benefits resulting from higher rates of coding intensity may be one explanation for the continued MA enrollment growth in the face of benchmark reductions.

Policymakers have also been interested in options to change MA risk adjustment to better align payments with health risk and to fund other spending priorities. 23 As documented elsewhere, we found substantial variation in MA contracts’ coding intensity. 3 , 4 An industry‐wide coding intensity adjustment, even if appropriate on average, will inevitably be too large for some contracts and too small for others, highlighting the variable impact of CMS’ fixed coding intensity adjustment. Additionally, we found that coding intensity does help finance more generous offerings in the MA market; reducing these revenues would therefore likely lead, to some extent, to higher premiums, fewer benefits, and possibly slower enrollment growth. However, increased revenues from coding appear to have larger effects on MLRs, and likely plan profits, than on the benefits offered to beneficiaries. From these results, it appears that proposals to increase the size of the coding intensity adjustment would have a larger effect on plan profits than on the benefits beneficiaries receive and the premiums they face. A richer understanding of these dynamics can help policymakers as they seek to address the issue of coding intensity as part of larger discussions about federal finances and budget requests.

ACKNOWLEDGMENTS

Joint Acknowledgment/Disclosure Statement: The authors would like to thank Pete Welch from the Office of The Assistant Secretary for Planning and Evaluation at the Department of Health and Human Services for his guidance and assistance throughout this project and Thomas M. Selden, Patricia S. Keenan, and Joel W. Cohen of the Agency for Healthcare Research and Quality (AHRQ) as well as Pete Welch for their comments on earlier drafts of the manuscript. The authors also thank Acumen LLC for excellent data and programming assistance. The views expressed in this article are those of the authors, and no official endorsement by the Department of Health and Human Services or AHRQ is intended or should be inferred. No Other Disclosures.

Jacobs PD, Kronick R. The effects of coding intensity in Medicare Advantage on plan benefits and finances. Health Serv Res.2021;56:178–187. 10.1111/1475-6773.13591
