Health Care Financing Review. 2007 Summer;28(4):83–93.

Physician Code Creep: Evidence in Medicaid and State Employee Health Insurance Billing

Eric E Seiber
PMCID: PMC4195000  PMID: 17722753

Abstract

This study estimates a fixed effects ordered logit model of physician office visit billing using claims data from South Carolina Medicaid and the State Employees Health Plan. The results find code creep increasing expenditures on physician office visits at a rate of 2.2 percent annually for both programs, with no significant difference in the rate between the two. The models also indicate that physician billing patterns differ between the programs, with Medicaid claims averaging 1.3 percent less per visit than comparable State Employees Health Plan claims.

Introduction

Many studies cite code creep as a contributing factor to improper billing, but policymakers have few estimates of its magnitude to use for guidance. Despite the lack of studies estimating code creep and improper billing, the 2005 Deficit Reduction Act progressively increases funding for the Medicaid Integrity Program, reaching its maximum of $75 million in 2009. With so few estimates available, Medicaid agencies have little guidance on whether code creep is a problem they should target with this new funding. This article estimates an upper bound for code creep in physician office billing for the State Medicaid Program in South Carolina.

A formal definition of code creep is elusive, but Steinwald and Dummit (1989) summarized code creep as “…changes in hospital record keeping practices to increase case mix and reimbursement.” Code creep is also often referred to as upcoding and, in hospital billing, as diagnosis-related group (DRG) creep. Finally, not all temporal variation in coding falls under code creep. Changes over time in billing can also be attributable to true changes in case mix (sicker patients), improvements in coding (both in provider education and in the degree of detail in codes or their definitions), and changes instituted by the payer (program reforms) (Carter, Newhouse, and Relles, 1990).

The code creep literature has focused primarily on hospital billing of DRGs, especially following Medicare's switch to the prospective payment system (PPS) in the 1980s. Results for these early studies proved mixed. Multiple studies did find evidence for DRG creep during the implementation of PPS (Steinwald and Dummit, 1989; Chulis, 1991; Hsia et al., 1988) with the estimates falling below 3 percent. Subsequent studies found no evidence of code creep that could not be attributed to true case mix changes and improved coding practices (Hsia et al., 1992; Carter, Newhouse, and Relles, 1990).

After the articles assessing the billing impact of the switch to PPS, academic interest in code creep became sporadic. Unlike the mixed results examining PPS, later studies produced repeated evidence that code creep exists. Survey data indicate that 44 percent of health care managers have received pressure from their senior managers to promote coding optimization, and 33 percent reported that their coding behavior varies depending on the payer (Lorence and Richards, 2002; Lorence and Spink, 2002). Other authors have examined specific diagnoses that provide strong incentives to code a higher complexity within a diagnosis family. Silverman and Skinner (2004) found extensive code creep for pneumonia across all hospitals, but the largest increases appeared in for-profit hospitals, hospitals converting to for-profit status, and hospitals where physicians hold an equity stake. Similarly, Psaty et al. (1999) examined charts for patients diagnosed with heart failure and could find no documentation in 38 percent of the charts to support the higher reimbursement diagnoses. Lastly, code creep has not been limited to U.S. hospitals, with a German study attributing 1 percent of all inpatient payments to code creep (Lungen and Lauterbach, 2000).

Few studies examine physician billing for office visits in the U.S. Two studies in Canada have found that code creep is not limited to hospitals and also occurs in Canadian physician offices (Nassiri and Rochaix, 2006; Chan, Anderson, and Theriault, 1998). Evidence of code creep for physician office billing in the U.S. remains indirect. Wynia et al. (2000) surveyed physicians and found that 39 percent of physicians reported manipulating reimbursement rules, with 54 percent indicating that they were manipulating their billing more frequently in 1998 than they did in 1993. Interestingly, fear of prosecution did not affect the billing decisions of physicians admitting to manipulating reimbursement rules. Lastly, Cromwell et al. (2006) cited code creep as one possible explanation why the physicians in their study dedicated up to 32 percent less time to patient visits than the visit times associated with the Medicare fee schedule.

This study expands on previous work in three ways. First, it is the first to examine code creep in Medicaid; excluding survey-based work, all U.S. code creep studies have examined Medicare data. Second, it is the first to examine billing by the same providers across two payers, comparing physicians' Medicaid billing with their own billing in the South Carolina State Employees Health Plan. Lastly, it is the first to estimate the magnitude of code creep in physician office visit billing. Specifically, this study tests (1) whether physicians bill office visits at equal levels of complexity across the two State programs and (2) whether billing behavior displays unexplained changes over time, and (3) estimates the rate of increase in billed complexity.

Methodology

In State fee-for-service programs, physician prices are routinely set by a fixed price schedule or through negotiations with the payer. Although prices are fixed, physicians retain the power to choose the level of complexity, and hence the billing code, for the visit. If the probability of detection is low, profit-maximizing physicians can be expected to choose higher reimbursement codes, that is, to upcode on the margin.

Tables 1 and 2 present an overview of the physician's choice set when assigning a code to an office visit. When billing the visit for an established patient (a patient seen previously), the provider must choose from one of the five billing codes listed in Table 1. The American Medical Association (2004) establishes definitions for the codes, and an extensive literature explains and analyzes each in detail (Hill, 2001; King, Sharp, and Lipsky, 2001). For visits dominated by counseling or coordination of care, a physician may assign the billing code based on the length of the visit. Otherwise, the provider bases the code assignment on the complexity of the visit (the documented history of the problem, the examination, and the complexity of the medical decisionmaking). Established visits are most frequently billed by complexity, and in these cases the visit must meet or exceed the criteria for two of the three complexity categories (history, examination, and medical complexity) listed in Table 1. Lastly, payers reimburse providers for each visit based on the reported complexity and the administratively set rates attached to that billing code.

Table 1. Physician Evaluation and Management Service Codes for Established Patients.

Code    Time (Minutes)   History                      Examination                  Medical Complexity
99211   5                None                         None                         None
99212   10               Problem Focused              Problem Focused              Straightforward
99213   15               Expanded, Problem Focused    Expanded, Problem Focused    Low
99214   25               Detailed                     Detailed                     Moderate
99215   40               Comprehensive                Comprehensive                High

SOURCE: American Medical Association: Current Procedural Terminology 2005.
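The two-of-three rule lends itself to a compact illustration. The following is a minimal sketch, assuming each of the three categories is scored on the 1-5 ordinal scale implied by the rows of Table 1; the function name and encoding are illustrative and are not drawn from the AMA guidelines or from this study.

```python
def assign_established_code(history: int, exam: int, decision: int) -> int:
    """Return the E/M code for an established patient visit billed by
    complexity. Inputs are ordinal levels 1 (lowest) to 5 (highest),
    one per category in Table 1. The visit supports the highest level
    that at least two of the three categories meet or exceed, which is
    the median of the three scores."""
    return 99210 + sorted((history, exam, decision))[1]

# A detailed history and examination with low-complexity decisionmaking:
print(assign_established_code(4, 4, 3))  # -> 99214
```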

Table 2. South Carolina's Reimbursement Rates for Physician Evaluation and Management Service Codes: 2001-2003.

Service Code and Visit    Medicaid               State Employees Health Plan
                          2001    2002    2003   2001    2002    2003
New Patient
99201 $31 $30 $30 $46 $33 $33
99202 48 44 44 59 57 57
99203 67 62 64 82 84 84
99204 97 89 91 119 123 123
99205 121 112 116 149 156 156
Established Patient
99211 15 14 14 26 18 18
99212 26 24 26 35 33 33
99213 36 33 35 44 47 47
99214 56 51 55 68 73 73
99215 83 76 81 114 109 109
Consultation1
99241 42 38 38 60 46 46
99242 69 63 63 80 82 82
99243 88 81 81 115 109 109
99244 123 113 114 150 155 155
99245 160 147 148 190 203 203
1 For other provider.

SOURCE: Seiber, E.E., the Ohio State University: Calculations of median reimbursement rates from the 2001-2003 South Carolina Medicaid and State Employees Health Plan claims data.

Table 2 lists the median reimbursements paid for office visits in the South Carolina Medicaid and State Employees Health Plan programs. These medians are calculated from the full population of paid office visit claims and reflect payment adjustments for provider type (nurse practitioner, specialist, etc.). Over the 3 years in the study, reimbursement rates for established patient visits remained essentially flat in both plans: the most common code, 99213, paid $36 in 2001 and $35 in 2003 in Medicaid ($44 and $47, respectively, in the State Employees Health Plan). For the less common new patient and consultation codes, reimbursement rates remained flat in the State Employees Health Plan and declined in Medicaid. In 2001 and 2002, Medicaid used a separate rate schedule for specialists and paid nurse practitioners at a discount to the general practitioner rate.

This article examines whether these flat reimbursement rates influence providers' coding of office visit complexity. Reimbursement rates influencing coding choices contrasts with the accepted view that prices are exogenous for physicians (that providers accept prices as given). If a physician considers the probability of detection low, a substantial incentive exists to upcode, that is, to report visits of higher complexity than warranted. Although payment rates for individual codes changed little over the study period, a provider could obtain a 50- to 60-percent increase in reimbursement for a visit by assigning a code one level higher than the true code: for example, moving a 2003 Medicaid established patient visit from 99213 ($35) to 99214 ($55) raises the payment 57 percent.

Data

This study utilizes 2001–2003 health care claims from South Carolina Medicaid and the State Employees Health Plan to estimate a fixed effects ordered logit model of physician office visit billing. The initial data set began as the full population of all paid Medicaid and State Employees Health Plan physician office visits. The analysis excludes claims at locations other than the provider's office. Limited information on providers outside of South Carolina required the elimination of claims from any provider with an address outside the State. Physicians providing fewer than 150 total fee-for-service visits to Medicaid and State Employees Health Plan patients over the 3-year period were dropped from the data. Because of the very large number of remaining claims, a random sample was drawn of 500 providers and, for each provider, 800 Medicaid visits and 800 State Employees Health Plan visits (1,600 visits total). The sample retained all Medicaid or State Employees Health Plan claims for physicians who provided fewer than 800 visits in that program. This sampling procedure produced a final dataset of 204,945 office visits for the 500 providers.
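The capped, per-program sampling rule can be sketched as follows; the file and column names (office_visits.csv, provider_id, program) are hypothetical stand-ins for the State claims extracts.

```python
import pandas as pd

claims = pd.read_csv("office_visits.csv")  # hypothetical flat extract

# 500 providers at random, then up to 800 visits per provider per program;
# providers with fewer than 800 visits in a program keep all of them.
providers = claims["provider_id"].drop_duplicates().sample(n=500, random_state=1)
subset = claims[claims["provider_id"].isin(providers)]

sample = (subset
          .groupby(["provider_id", "program"], group_keys=False)
          .apply(lambda g: g.sample(n=min(len(g), 800), random_state=1)))
```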

Provider identification proved difficult in some cases. Although every physician is assigned a unique provider identification number, many group practices file all claims under a single group identification number. Because groups share billing resources and behaviors, the model analyzes billing behavior at the group level. The Federal tax identification number (FTIN) filed with each claim allowed providers (or groups, for multiphysician practices) to be linked across programs. Not all providers participated in both programs, and some physicians filed claims under separate FTINs for each program. The analysis controls for physicians who do not participate in both programs or who could not be linked across programs.
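As an illustration of that linkage, under hypothetical file layouts with one provider list per program, an outer merge on FTIN flags the billing groups appearing on both participation lists:

```python
import pandas as pd

medicaid = pd.read_csv("medicaid_providers.csv")  # columns include: ftin
sehp = pd.read_csv("sehp_providers.csv")          # columns include: ftin

linked = medicaid.merge(sehp, on="ftin", how="outer", indicator=True)
# Groups matched across programs get _merge == "both"; the model's
# "on both lists" dummy flags exactly these groups.
linked["on_both_lists"] = (linked["_merge"] == "both").astype(int)
```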

Model Specification

The model combines three classes of office visits into a single visit complexity variable. Routine office visits fall under new patient visits (codes 99201-99205), established patient visits (codes 99211-99215), or consultations (codes 99241-99245), with each group broken into five codes representing lowest through highest complexity. The study considers five potential outcomes (a minimal mapping sketch follows the list):

  • Y1 = Codes 99201, 99211, or 99241

  • Y2 = Codes 99202, 99212, or 99242

  • Y3 = Codes 99203, 99213, or 99243

  • Y4 = Codes 99204, 99214, or 99244

  • Y5 = Codes 99205, 99215, or 99245
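Conveniently, the final digit of each code gives its within-family complexity rank, so the collapse reduces to one line; the sketch below is illustrative rather than code from the study.

```python
def complexity_level(cpt_code: int) -> int:
    """Map codes 99201-99205, 99211-99215, and 99241-99245 to the
    ordinal outcomes Y1-Y5."""
    assert cpt_code // 10 in (9920, 9921, 9924), "not an office visit code"
    return cpt_code % 10  # the last digit is the complexity level

print(complexity_level(99213))  # -> 3 (outcome Y3)
```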

With the ranked nature of the dependent variable, an ordered logit can estimate the probability of choosing outcome $Y_i$:

$\Pr(Y = i) = \Pr\left(k_{i-1} < \sum_j \beta_j X_j + u < k_i\right)$  (1)

where the $k_i$ are estimated cut-points and $u$ is a logistically distributed random error, so that the probability of observing complexity level $i$ is the probability that the estimated linear function of the independent variables plus the error falls between the adjacent cut-points (Zavoina and McKelvey, 1975; Greene, 2003). Stata® Version 8 (StataCorp LP, 2003) provided a convenient estimator for the ordered logit models.
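The study estimated the model in Stata 8. For illustration, an analogous specification can be sketched in Python with statsmodels' ordinal model, entering the provider fixed effects as group dummies; the file and column names are hypothetical, and only the program and year regressors are shown (the full covariate list is described below).

```python
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

claims = pd.read_csv("analysis_file.csv")  # one row per office visit

# Provider fixed effects enter as dummies (one group omitted as the base).
fe = pd.get_dummies(claims["provider_id"], prefix="prov",
                    drop_first=True, dtype=float)
X = pd.concat([claims[["medicaid", "year2002", "year2003",
                       "medicaid_2002", "medicaid_2003",
                       "on_both_lists"]], fe], axis=1)

model = OrderedModel(claims["complexity"], X, distr="logit")
result = model.fit(method="bfgs", disp=False)
print(result.summary())
```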

In equation (1), $X_j$ represents a matrix of independent variables indicating patient demographics and provider characteristics. Although claims data provide a rich source of information on provider behavior, potential independent variables are limited to the fields common to the claims forms of both programs. Given this limitation, the model includes age, sex, marital status, and urban residence to control for patient demographics. A dummy variable identifies providers who can be matched on both lists of participating physicians, controlling for providers not participating in both programs and those that use separate FTINs when billing Medicaid and the State Employees Health Plan.

Because sicker patients will also produce higher billing codes, the model includes controls for the 15 most expensive conditions and the patient's number of diagnoses that year. The claims data use International Classification of Diseases, Ninth Revision, Clinical Modification (Centers for Disease Control and Prevention, 2007) codes to classify diagnoses, so the Clinical Classifications Software developed by the Agency for Healthcare Research and Quality (2007) was used to collapse the more than 12,000 potential diagnosis codes into 260 clinically meaningful categories (Elixhauser, Steiner, and Palmer, 2006). From these 260 categories, the model includes dummy variables for the 15 most expensive medical conditions: heart disease, pulmonary conditions, mental disorders, cancer, hypertension, trauma, cerebrovascular disease, arthritis, diabetes, back problems, skin disorders, pneumonia, infectious disease, endocrine, and kidney (Druss et al., 2002; Thorpe, Florence, and Joski, 2004). Lastly, the model includes dummy variables indicating the number of separate conditions, out of the 260 Clinical Classifications Software categories, reported for that patient in the year of the claim.
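A sketch of this case-mix construction follows, assuming a hypothetical flat-file version of the AHRQ ICD-9-to-CCS crosswalk and illustrative CCS category numbers; the actual mapping is distributed with the Clinical Classifications Software.

```python
import pandas as pd

crosswalk = pd.read_csv("ccs_crosswalk.csv")  # columns: icd9, ccs_category
diagnoses = pd.read_csv("diagnoses.csv")      # patient_id, year, icd9

dx = diagnoses.merge(crosswalk, on="icd9", how="left")
by_patient = (dx.groupby(["patient_id", "year"])["ccs_category"]
                .agg(set).reset_index())

# Dummies for the most expensive conditions (CCS numbers illustrative).
EXPENSIVE = {"hypertension": 98, "diabetes": 49, "pneumonia": 122}  # etc.
for name, ccs in EXPENSIVE.items():
    by_patient[name] = by_patient["ccs_category"].apply(lambda s, c=ccs: int(c in s))

# Count of distinct conditions, later binned into the 4-6, 7-9, 10+ dummies.
by_patient["n_conditions"] = by_patient["ccs_category"].str.len()
```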

An array of program and year dummy variables tests the code creep and differential billing hypotheses. A Medicaid dummy flags all claims to Medicaid and tests whether physicians as a whole bill Medicaid differently than the State Employees Health Plan. Interactions between the Medicaid dummy and the year dummies test whether the Medicaid versus State Employees Health Plan relationship changes over time. Finally, dummies for 2002 and 2003 test whether physicians are billing increasingly higher codes every year. Table 3 presents the variable means and the distribution of the dependent variable.

Table 3. Physician Office Visit Claims for the South Carolina Medicaid and State Employees Health Plan: 2001-2003.

Variable                        All Claims1        Medicaid2          State Employees Health Plan3
                                Mean     S.D.      Mean     S.D.      Mean     S.D.
Complexity Level Visit (Percent)
1 5.50 6.80 4.40
2 24.30 24.70 24.00
3 49.50 48.90 50.00
4 17.60 16.80 18.20
5 3.20 2.80 3.50
Year 2002 0.331 0.471 0.314 0.464 0.346 0.476
Year 2003 0.386 0.487 0.429 0.495 0.35 0.477
Medicaid 0.458 0.498 1 0 0 0
Medicaid 2002 0.144 0.351 0.314 0.464 0 0
Medicaid 2003 0.197 0.398 0.429 0.495 0 0
On Both Lists 0.906 0.292 0.973 0.161 0.849 0.359
Age 18-40 0.246 0.431 0.334 0.472 0.173 0.378
Age 41-50 0.147 0.354 0.117 0.322 0.172 0.378
Age 51-64 0.225 0.418 0.139 0.346 0.299 0.458
Age >= 65 0.157 0.363 0.062 0.242 0.236 0.425
Female 0.676 0.468 0.682 0.466 0.671 0.47
Married 0.411 0.492 0.15 0.357 0.632 0.482
Urban 0.657 0.475 0.643 0.479 0.669 0.47
Number of Diagnoses =4-6 0.196 0.397 0.112 0.316 0.267 0.442
Number of Diagnoses =7-9 0.133 0.339 0.004 0.061 0.242 0.428
Number of Diagnoses >=10 0.176 0.38 0 0.003 0.324 0.468
Heart Disease 0.316 0.465 0.102 0.303 0.497 0.5
Cancer 0.244 0.429 0.056 0.231 0.402 0.49
Trauma 0.338 0.473 0.136 0.343 0.509 0.5
Mental Disorders 0.195 0.396 0.113 0.316 0.265 0.441
Pulmonary Conditions 0.495 0.5 0.254 0.435 0.698 0.459
Diabetes 0.258 0.438 0.089 0.285 0.401 0.49
Hypertension 0.461 0.498 0.164 0.37 0.711 0.453
Cerebrovascular Disease 0.053 0.223 0.012 0.108 0.087 0.282
Arthritis 0.331 0.471 0.126 0.331 0.506 0.5
Pneumonia 0.052 0.222 0.016 0.124 0.082 0.275
Kidney 0.09 0.286 0.034 0.18 0.137 0.344
Endocrine 0.268 0.443 0.097 0.296 0.412 0.492
Skin Disorders 0.396 0.489 0.131 0.337 0.621 0.485
Back Problems 0.331 0.47 0.126 0.332 0.504 0.5
Infectious Disease 0.299 0.458 0.135 0.342 0.437 0.496
1 n=204,945.
2 n=93,915.
3 n=111,030.

NOTE: S.D. is standard deviation.

SOURCE: Seiber, E.E., the Ohio State University: Calculations from the 2001-2003 South Carolina Medicaid and State Employees Health Plan claims data.

Results

The summary statistics in Table 3 indicate that physicians bill both Medicaid and the State Employees Health Plan in a similar manner, despite serving very different demographic groups. In both programs, physicians code one-half of their visits (49 percent for Medicaid and 50 percent for the State Employees Health Plan) at complexity Level 3. The lowest and highest complexities are both uncommon, with only 6 percent of visits billed at Level 1 and 3 percent at Level 5. The remaining visits fall almost equally across the remaining two categories, with 24 percent billed at Level 2 and 18 percent at Level 4. Between the two programs, lower complexity visits were marginally more common in Medicaid, while the State Employees Health Plan displayed more Level 4 and Level 5 visits.

For the independent variables, Medicaid patients tend to be younger and less likely to be married, but in both programs females make two-thirds of the visits and two-thirds of visits are made by individuals living in urban areas. Providers who cannot be matched across both datasets are more likely to appear in the State Employees Health Plan: 97 percent of Medicaid visits were made to physicians on both lists, compared with 85 percent of State Employees Health Plan visits. Finally, the case-mix controls varied widely with the sample drawn and should not be used to infer the prevalence of these conditions in the Medicaid and State Employees Health Plan populations.

Table 4 presents two sets of estimates for the ordered logit model with provider fixed effects. Comparing the estimates from the two models (excluding case-mix variables and including case-mix variables) reveals the contribution of a sicker population to billing of higher complexity visits. The provider fixed effects control for time-invariant physician characteristics, including specialty and physician practice patterns.

Table 4. Provider Fixed Effects Ordered Logit Estimates of Visit Complexity: 2001-2003.

Variable    Estimated Coefficient1    Standard Error of Estimate1    Estimated Coefficient2    Standard Error of Estimate2
Number of Observations 204,645 203,806
Year 2002 0.139 0.034 *** 0.133 0.034 ***
Year 2003 0.238 0.042 *** 0.227 0.042 ***
Medicaid -0.254 0.057 *** -0.128 0.058 **
Medicaid 2002 -0.003 0.047 -0.004 0.048
Medicaid 2003 0.093 0.058 0.088 0.058
On Both Lists -0.565 0.053 *** -0.436 0.059 ***
Age 18-40 0.119 0.044 ***
Age 41-50 0.27 0.044 ***
Age 51-64 0.335 0.047 ***
Age >= 65 0.299 0.058 ***
Female 0.059 0.014 ***
Married 0.046 0.018 **
Urban -0.07 0.029 **
Number of Diagnoses = 4-6 -0.046 0.022 **
Number of Diagnoses = 7-9 -0.069 0.021 ***
Number of Diagnoses >= 10 -0.106 0.028 ***
Heart Disease 0.026 0.016 *
Cancer -0.028 0.024
Trauma -0.051 0.014 ***
Mental Disorders 0.061 0.015 ***
Pulmonary Conditions 0.039 0.017 **
Diabetes 0.049 0.02 **
Hypertension 0.072 0.02 ***
Cerebrovascular Disease 0.036 0.024
Arthritis 0.015 0.015
Pneumonia 0.037 0.022 *
Kidney 0.014 0.024
Endocrine 0.026 0.016 *
Skin Disorders -0.046 0.019 **
Back Problems 0.061 0.019 ***
Infectious Disease -0.063 0.014 ***
*** p<0.01.
** p<0.05.
* p<0.10.

1 Estimates from a model excluding case-mix variables.
2 Estimates from a model including case-mix variables.

SOURCE: Seiber, E.E., the Ohio State University: Calculations from the 2001-2003 South Carolina Medicaid and State Employees Health Plan claims data.

In both models, visit complexity increased over time (p=0.000). Including the case-mix controls produced only modest reductions in the coefficients for the year dummies. Medicaid visits were billed at lower complexities in both models (p=0.000). The positive coefficients for the Medicaid*year dummies indicate that the difference between Medicaid and State Employees Health Plan billing decreased over the 3-year period, but the decline was not statistically significant. For the sample used in Table 4, providers participating in both programs billed lower complexity visits, but this result proved sample dependent. All other estimates were robust across repeated samples.

The ordered logit estimates (Table 4) indicate that office visit complexities billed to Medicaid and the State Employees Health Plan did increase over the 3-year period after controlling for case mix and time-invariant physician characteristics, but they reveal little about the magnitude of the increase. Table 5 presents the average predicted probabilities for each complexity level, illustrating the effect of code creep on physician billing. For each visit in the data, the model predicts the probability of the physician assigning each complexity level; the simulated values in Table 5 are the averages of these probabilities at each level. Only the values of the simulated variable change, with all other variables in the model retaining their original values.

Table 5. Predicted Visit Complexities Using the Fixed Effect Ordered Logit Coefficients: 2001-2003.

Scenario                    Visit Complexity Level (%)             Payment for Average Visit    Percent Change
                            1       2       3       4       5
Without Case-Mix Variables
Year 20011 5.90 25.70 49.70 15.90 2.70 $35.87
Year 20021 5.30 24.30 50.20 17.20 3.00 36.54 1.90
Year 20031 4.50 22.10 50.50 19.20 3.50 37.53 2.70
Medicaid = 02 4.80 23.00 50.40 18.40 3.40 37.14
Medicaid = 12 5.80 25.40 49.70 16.30 2.80 36.05 3.00
With Case-Mix Variables
Year 20011 5.90 25.60 49.70 16.00 2.70 35.91
Year 20021 5.30 24.20 50.20 17.20 3.00 36.55 1.80
Year 20031 4.60 22.20 50.50 19.20 3.50 37.49 2.60
Medicaid = 02 5.10 23.60 50.30 17.90 3.20 36.86
Medicaid = 12 5.50 24.60 49.90 16.90 3.00 36.38 1.30
1 All visits are billed under prevailing billing patterns in 2001, 2002, and 2003.
2 All visits are billed under the billing patterns of the State Employees Health Plan and then with Medicaid billing patterns.

SOURCE: Seiber, E.E., the Ohio State University: Calculations from the 2001-2003 South Carolina Medicaid and State Employees Health Plan claims data.

Table 5 simulates two scenarios. In the first scenario, all visits are billed under the prevailing billing patterns in 2001, 2002, and 2003. In the second scenario, all visits are billed under the billing patterns representative of the State Employees Health Plan and then with Medicaid billing patterns. Again, all other variables in the model retain their original values. The table shows each scenario, first, based on the model excluding the case-mix control variables, and second, with the case-mix variables.
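Continuing the hypothetical Python sketch from the Model Specification section, the first scenario amounts to resetting each visit's year dummies (and the matching Medicaid interactions) to the simulated year, predicting the complexity probabilities, and averaging over all visits:

```python
import numpy as np

def avg_predicted_probs(result, X, year):
    """Average predicted probability of each complexity level when all
    visits are billed under the given year's billing patterns."""
    Xc = X.copy()
    Xc[["year2002", "year2003", "medicaid_2002", "medicaid_2003"]] = 0
    if year in (2002, 2003):
        Xc[f"year{year}"] = 1
        Xc[f"medicaid_{year}"] = Xc["medicaid"]  # keep interactions consistent
    return np.asarray(result.predict(Xc)).mean(axis=0)

for yr in (2001, 2002, 2003):
    print(yr, np.round(avg_predicted_probs(result, X, yr), 3))
```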

The scenarios show the complexity of the average visit gradually increasing over the study period. Over the 3 years, Level 1 and Level 2 visits become progressively less frequent, with Level 1 declining from 5.9 to 4.6 percent and Level 2 from 25.7 to 22.1 percent. In contrast, visits coded at Levels 3-5 each become more common, with Level 3 increasing from 49.7 to 50.5 percent, Level 4 from 15.9 to 19.2 percent, and Level 5 from 2.7 to 3.5 percent. Including the case-mix variables produces no appreciable difference in the predicted complexities, with the frequencies changing by no more than one-tenth of 1 percentage point.

The significant difference between Medicaid and State Employees Health Plan billing also appears in the simulations, but the case-mix variables account for some of the observed differences between the two programs. Under billing patterns typical of the State Employees Health Plan, 50.4 percent of all visits are coded at Level 3, compared with 49.7 percent under Medicaid patterns. Adding the case-mix controls narrows these differences across all coding options, with Level 3 State Employees Health Plan visits slipping to 50.3 percent and Medicaid increasing to 49.9 percent. Similarly, Level 4 visits start higher in the State Employees Health Plan, at 18.4 percent versus 16.3 percent in Medicaid, but this difference narrows to 17.9 and 16.9 percent after including the case-mix controls.

After controlling for case mix and physician characteristics, code creep increased the cost of the average visit by 2.2 percent annually over the study period (Table 5). The average costs collapse the billing distributions into a single number; they are calculated by multiplying the percent of visits at each complexity level by the 2003 Medicaid reimbursement rate for established patient office visits from Table 2, with these rates applied to both programs and to new patient, established patient, and consultation visits alike. With this conversion, the average visit in 2001 cost $35.87, increasing by 1.9 percent to $36.54 in 2002, and by a further 2.7 percent to $37.53 in 2003. Including the case-mix controls changed the cost of the average visit by no more than $0.04 and reduced the code creep estimate from 2.3 percent to 2.2 percent per year. Finally, comparing all Medicaid visits to all State Employees Health Plan visits reveals that physician claims for Medicaid visits averaged $0.48, or 1.3 percent, less than the average State Employees Health Plan visit. This difference is less than one-half of the $1.14 spread between the average Medicaid and State Employees Health Plan visit when the case-mix controls are excluded.
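As a check on this conversion, the sketch below recomputes the average visit cost from the without-case-mix rows of Table 5 and the 2003 Medicaid established patient rates in Table 2; the results land within a few cents of the published figures because the table percentages are rounded.

```python
rates = [14, 26, 35, 55, 81]  # 99211-99215, Medicaid 2003, from Table 2

shares = {  # visit shares by complexity level, Table 5 (without case mix)
    2001: [0.059, 0.257, 0.497, 0.159, 0.027],
    2003: [0.045, 0.221, 0.505, 0.192, 0.035],
}

for year, s in shares.items():
    cost = sum(p * r for p, r in zip(s, rates))
    # Close to the published $35.87 (2001) and $37.53 (2003).
    print(year, round(cost, 2))
```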

Discussion

The ordered logit estimates and their associated simulations indicate that code creep increased payments for physician visits by 2.2 percent annually over the study period. Although the existence of code creep should be a concern for Medicaid agencies, only an estimate of its total cost can indicate whether code creep would prove a worthwhile program integrity target. In 2003, South Carolina's Department of Health and Human Services (2004) spent $73 million on physician office visits out of a total $244 million for all physician services. Excluding increases in utilization, Medicaid can expect code creep to inflate physician office expenditures by 2.2 percent per year, or $1.6 million in 2004 and a total of $8.4 million over 2004-2008. It should be noted that these figures consider only physician office visits and exclude hospital-based expenditures. Additional research will be necessary to determine whether billing by South Carolina physicians is representative of other States and how code creep in physician office visits compares with other physician and hospital billing.
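The projection itself is a compounding exercise: the year-over-year increments telescope, so the cumulative 2004-2008 figure equals the 2003 base times (1.022^5 - 1). A minimal check:

```python
base = 73.0  # $ millions spent on physician office visits, 2003
growth = 0.022

# Spending added by code creep in each year relative to the prior year.
increments = [base * (1 + growth) ** t - base * (1 + growth) ** (t - 1)
              for t in range(1, 6)]  # 2004 through 2008

print(round(increments[0], 1))    # 1.6  -> added cost in 2004
print(round(sum(increments), 1))  # 8.4  -> cumulative 2004-2008
```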

The key limitation of this study also highlights a difficulty program integrity offices face in addressing code creep. As Carter and colleagues (1990) noted, changes in billing can be attributed to true changes in case mix (sicker patients), improvements in coding (provider education), changes instituted by the payer (program reforms), and code creep. South Carolina Medicaid did not implement any program reforms during the study period, and the model includes case-mix variables to control for sicker patients. However, distinguishing code creep from legitimate improvements in coding attributable to provider education would require documentation audits of medical charts; the 2.2-percent annual increase attributed to code creep in this article should therefore be considered an upper bound. Code creep's diffuse nature also makes it a difficult problem to address. Expensive and unpopular chart reviews are unlikely to produce sufficient recoveries from audited physicians, but well publicized audits may hold sufficient deterrent value to make enforcement cost effective.

Conclusions

This study found significant code creep in both South Carolina Medicaid and the South Carolina State Employees Health Plan. No difference in code creep was observed across the two programs, with code creep increasing expenditures on physician office visits at a rate of 2.2 percent annually. The models also indicate that physician billing patterns differ between the two, with Medicaid claims averaging 1.3 percent less per visit than comparable State Employees Health Plan claims. Controlling for case mix produced little change in the code creep estimates but did account for one-third of the difference between the two programs.

Footnotes

The author is with the Ohio State University. The research in this article was supported by funding from the Strom Thurmond Institute of Government and Public Affairs at Clemson University and Clemson University's College of Health, Education, and Human Development's Summer Research Program. The statements expressed in this article are those of the author and do not necessarily reflect the views or policies of the Ohio State University, Clemson University, or the Centers for Medicare & Medicaid Services (CMS).

Reprint Requests: Eric E. Seiber, Ph.D., The Ohio State University, Division of Health Services Management and Policy, College of Public Health, 468 Cunz Hall, 1841 Neil Ave., Columbus, OH 43210. E-mail: eseiber@cph.osu.edu

References

  1. Agency for Healthcare Research and Quality. Clinical Classifications Software (CCS). Internet address: http://www.hcup-us.ahrq.gov/toolssoftware/ccs/ccs.jsp (Accessed 2007.)
  2. American Medical Association. Current Procedural Terminology 2005. AMA Press; Chicago, IL.: 2004.
  3. Carter GM, Newhouse JP, Relles DA. How Much Change in the Case Mix Index is DRG Creep? Journal of Health Economics. 1990 Jul;9(4):411–428. doi: 10.1016/0167-6296(90)90003-l.
  4. Centers for Disease Control and Prevention. International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM). Internet address: http://www.cdc.gov/nchs/about/otheract/icd9/abticd9.htm (Accessed 2007.)
  5. Chan B, Anderson GM, Theriault ME. Fee Code Creep Among General Practitioners and Family Physicians in Ontario: Why Does the Ratio of Intermediate to Minor Assessments Keep Climbing? Canadian Medical Association Journal. 1998 Mar;158(6):749–754.
  6. Chulis GS. Assessing Medicare's Prospective Payment System for Hospitals. Medical Care Review. 1991 Summer;48(2):167–206. doi: 10.1177/002570879104800203.
  7. Cromwell J, Hoover S, McCall N, et al. Validating CPT Typical Times for Medicare Office Evaluation and Management (E/M) Services. Medical Care Research and Review. 2006 Apr;63(2):236–255. doi: 10.1177/1077558705285301.
  8. Druss BG, Marcus SC, Olfson M, et al. The Most Expensive Medical Conditions in America. Health Affairs. 2002 Jul-Aug;21(4):105–111. doi: 10.1377/hlthaff.21.4.105.
  9. Elixhauser A, Steiner C, Palmer L. Clinical Classifications Software (CCS). Agency for Healthcare Research and Quality; 2006.
  10. Greene WH. Econometric Analysis. 5th ed. Prentice Hall; Upper Saddle River, NJ.: 2003.
  11. Hill E. How to Get All the 99214s You Deserve. Family Practice Management. 2001 Oct;8(9):43–48.
  12. Hsia DC, Ahern CA, Ritchie BP, et al. Medicare Reimbursement Accuracy Under the Prospective Payment System, 1985-1988. Journal of the American Medical Association. 1992 Aug 19;268(7):867–869.
  13. Hsia DC, Krushat WM, Fagan AB, et al. Accuracy of Diagnostic Coding for Medicare Patients Under the Prospective-Payment System. New England Journal of Medicine. 1988 Feb;318(6):352–355. doi: 10.1056/NEJM198802113180604.
  14. King MS, Sharp L, Lipsky M. Accuracy of CPT Evaluation and Management Coding by Family Physicians. Journal of the American Board of Family Practice. 2001 May-Jun;14(3):184–192.
  15. Lorence DP, Richards M. Variation in Coding Influence Across the USA: Risk and Reward in Reimbursement Optimization. Journal of Management in Medicine. 2002;16(6):422–435. doi: 10.1108/02689230210450981.
  16. Lorence DP, Spink A. Regional Variation in Medical Systems Data: Influences on Upcoding. Journal of Medical Systems. 2002 Oct;26(5):369–381. doi: 10.1023/a:1016405214914.
  17. Lungen M, Lauterbach KW. Upcoding: A Risk for the Use of Diagnosis-Related Groups. Deutsche Medizinische Wochenschrift. 2000 Jul;125(28-29):852–856. doi: 10.1055/s-2000-7019.
  18. Nassiri A, Rochaix L. Revisiting Physicians' Financial Incentives in Quebec: A Panel System Approach. Health Economics. 2006 Jan;15(1):49–64. doi: 10.1002/hec.1012.
  19. Psaty BM, Boineau R, Lewis K, et al. The Potential Costs of Upcoding for Heart Failure in the United States. American Journal of Cardiology. 1999 Jul;84(1):108–109. doi: 10.1016/s0002-9149(99)00205-2.
  20. Silverman E, Skinner J. Medicare Upcoding and Hospital Ownership. Journal of Health Economics. 2004 Mar;23(2):369–389. doi: 10.1016/j.jhealeco.2003.09.007.
  21. South Carolina Department of Health and Human Services. South Carolina Medicaid Annual Report for State Fiscal Year 2004. 2004.
  22. StataCorp LP. Stata Statistical Software: Release 8. Stata Press; College Station, TX.: 2003.
  23. Steinwald B, Dummit L. Hospital Case-Mix Change Under Medicare: Sicker Patients or DRG Creep? Health Affairs. 1989 Summer;8(2):35–47. doi: 10.1377/hlthaff.8.2.35.
  24. Thorpe KE, Florence CS, Joski P. Which Medical Conditions Account for the Rise in Health Care Spending? Health Affairs. 2004 Jul-Dec;23(Supplement 2):437–445. doi: 10.1377/hlthaff.w4.437.
  25. Wynia MK, Cummins DS, VanGeest JB, et al. Physician Manipulation of Reimbursement Rules for Patients: Between a Rock and a Hard Place. Journal of the American Medical Association. 2000 Apr 12;283(14):1858–1865. doi: 10.1001/jama.283.14.1858.
  26. Zavoina W, McKelvey RD. A Statistical Model for the Analysis of Ordinal Level Dependent Variables. Journal of Mathematical Sociology. 1975;4:103–120.
