Author manuscript; available in PMC 2014 Nov 1.
Published in final edited form as: Med Care. 2013 Nov;51(11). doi: 10.1097/MLR.0b013e3182a97bdc

Bending the Cost Curve? Results from a Comprehensive Primary Care Payment Pilot

Sonal Vats*, Arlene S Ash‡, Randall P Ellis*
PMCID: PMC3845668  NIHMSID: NIHMS525969  PMID: 24113816

Abstract

Background

There is much interest in understanding how using bundled primary care payments to support a patient-centered medical home (PCMH) affects total medical costs.

Research Design and Subjects

We compare 2008–2010 claims and eligibility records on about 10,000 patients in practices transforming to a PCMH and receiving risk-adjusted base payments and bonuses, with similar data on approximately 200,000 patients of non-transformed practices remaining under fee-for-service reimbursement.

Methods

We estimate the treatment effect using difference-in-differences, controlling for trend, payer type, plan type, and fixed effects. We weight to account for partial-year eligibility, use propensity weights to address differences in exogenous variables between control and treatment patients, and use the Massachusetts Health Quality Project (MHQP) algorithm to assign patients to practices.

Results

Estimated treatment effects are sensitive to: control variables, propensity weighting, the algorithm used to assign patients to practices, how we address differences in health risk, and whether/how we use data from enrollees who join, leave or change practices. Unadjusted PCMH spending reductions are 1.5% in year 1 and 1.8% in year 2. With fixed patient assignment and other adjustments, medical spending in the treatment group appears to be 5.8% (p=0.20) lower in Year 1 and 8.7% (p=0.14) lower in Year 2 than for propensity-weighted, continuously-enrolled controls; the largest proportional two-year reduction in spending occurs in laboratory test use (16.5%, p=0.02).

Conclusion

Although estimates are imprecise due to limited data and quasi-experimental design, risk-adjusted bundled payment for primary care may have dampened spending growth in three practices implementing a PCMH.

Key Terms: Patient-centered medical home, payment systems, primary care, risk adjustment, Medicare, Medicaid

INTRODUCTION

We examine changes in costs during the first two years of a primary care practice transformation and payment reform initiative started in 2009 by the Capital District Physicians’ Health Plan (CDPHP), a not-for-profit network health plan in upstate New York. This patient-centered medical home (PCMH) pilot is of great interest as a “virtual all-payer” innovation1, with practices encouraged to change treatment protocols for everyone, regardless of payer or benefit design. We examined whether the pilot saved money.

The Centers for Medicare and Medicaid Innovation (CMMI) has funded several pilots and demonstrations to increase value in health care spending.2 One strategy is to encourage primary care practices to become “patient-centered medical homes,” within which teams of clinical professionals use electronic medical records (EMRs)3,4 to sustain the health of a specified panel of patients.5 Ideally, payments to practices support coordinated, preventive care that reduces avoidable utilization.6–8

The PCMH may save money while maintaining or improving quality.9–14 However, the best-studied pilots have involved integrated managed care plans, including Kaiser Permanente, the Veterans Health Administration, and Geisinger Health Plan with salaried primary care practitioners (PCPs) and other organizational features uncommon in the US.14,15 Other pilots have primarily retained fee-for-service (FFS) payment with a small coordination-and-management supplement16; few have used models to substantially adjust payments or bonuses for differences in patient risk.

In 2009, three EMR-enabled practices with at least 35% of their workloads covered by CDPHP volunteered for its PCMH pilot. Collectively, they employ fourteen physicians and four other professional staff.1 CDPHP implemented risk-adjusted base payments and outcomes-based bonuses as advocated by Goroll et al17 and developed by Ash and Ellis18 and Ellis and Ash19. In the new system, 63% of payments were calculated as a risk-adjusted “bundle,” 27% as bonus, and only 10% by FFS. Novel features of this pilot include: linked practice transformation and payment reform; diverse plan types and payers; and CDPHP’s not owning hospitals or specialist practices, yet unilaterally self-financing this transformation. While the pilot officially ended in 2010, CDPHP has since expanded this PCMH model to additional primary care practices.1

METHODS

Data and Methodology

We analyzed practices in Albany, Rensselaer, Saratoga, and Schenectady counties, where CDPHP’s three pilot (treatment) practices draw the most patients. We use eligibility, provider, medical and pharmacy claims data for the years 2007–2010, and the Massachusetts Health Quality Project (MHQP) assignment algorithm described in Song et al20 to assign 296,457 patients to 2,526 PCPs billing from 1,122 distinct practices. Broadly, patients are assigned during a year to the primary care practice that provided the plurality of their care in the last 18 months. Appendix A of the Supplemental Digital Content, referenced hereafter as SDC, describes and compares MHQP’s patient assignment algorithm with CDPHP’s.
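The plurality rule can be sketched as follows. This is a simplified illustration of an MHQP-style assignment, not the algorithm's actual implementation; the visit records, field names, and tie-breaking behavior are assumptions.

```python
from collections import Counter

def assign_practice(visits, window_months=18):
    """Assign a patient to the practice providing the plurality of
    primary care visits in the trailing window (a simplified sketch
    of an MHQP-style rule; visit eligibility and tie-breaking details
    are assumptions, not MHQP's actual logic)."""
    recent = [v["practice"] for v in visits if v["months_ago"] <= window_months]
    if not recent:
        return None  # no recent primary care: patient remains unassigned
    # most_common(1) returns the practice with the highest visit count
    return Counter(recent).most_common(1)[0][0]

# Hypothetical visit history for one patient
visits = [
    {"practice": "A", "months_ago": 2},
    {"practice": "B", "months_ago": 5},
    {"practice": "A", "months_ago": 17},
    {"practice": "C", "months_ago": 24},  # outside the 18-month window
]
```

Here the patient is assigned to practice "A", which provided two of the three visits inside the window.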

Difference in Difference Specification

To identify the effect of the PCMH on spending, we estimated

S_ijt = λ_i + γ·D + φ·t09 + δ·(D × t09) + μ·t10 + τ·(D × t10) + X_ijt·β + ε_ijt    (1)

where i indexes the patient; j, his/her assigned practice; and t, the year. The dependent variable S is annualized spending; D is the treatment dummy; t09 and t10 are time-period dummies for 2009 and 2010 (relative to 2008). The vector X contains individual characteristics, including dummies for: Medicare and Medicaid versus the reference category of “privately insured”; HMO, preferred provider organization (PPO), and point of service (POS) versus FFS; and administrative services only (ASO) versus non-ASO contracts. The fixed effects λ_i capture patient health status. Standard errors are clustered at the practice level. We modeled the effects of the PCMH using both fixed- and changing-PCP assignment; fixed-assignment estimates are robust to post-implementation changes in patient mix.
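As an illustration of this difference-in-differences specification, the patient fixed effects λ_i can be absorbed by demeaning each patient's observations before least squares. The sketch below does this on synthetic data with known effects; all variable names and data are hypothetical, and the study's eligibility-month and propensity weights are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
n, years = 500, [2008, 2009, 2010]
alpha = rng.normal(3000, 800, n)          # patient fixed effects (lambda_i)
treated = rng.random(n) < 0.3             # treatment dummy D
delta_true, tau_true = -200.0, -300.0     # true DiD effects for 2009, 2010

rows = []
for i in range(n):
    for t in years:
        t09, t10 = float(t == 2009), float(t == 2010)
        s = (alpha[i] + 400 * t09 + 850 * t10           # common year trends
             + delta_true * treated[i] * t09            # treatment x 2009
             + tau_true * treated[i] * t10              # treatment x 2010
             + rng.normal(0, 100))                      # idiosyncratic noise
        rows.append([i, s, treated[i] * t09, treated[i] * t10, t09, t10])
data = np.array(rows)

# Absorb lambda_i: demean outcome and regressors within each patient
# (each patient contributes exactly 3 person-years here)
ids = data[:, 0].astype(int)
X, y = data[:, 2:], data[:, 1]
for col in range(X.shape[1]):
    X[:, col] -= np.bincount(ids, X[:, col])[ids] / 3
y = y - np.bincount(ids, y)[ids] / 3

# OLS on demeaned data recovers [delta, tau, phi, mu]
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
```

Note that the level treatment dummy D drops out of the within-patient regression: it is constant for each patient and therefore collinear with the fixed effects, which is why only the interaction terms appear in the design matrix.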

Propensity Score Analysis

Table 1 describes treatment and control samples in 2008 and 2010. Privately insured and Medicaid populations are approximately 70% and 20%, respectively, of the control group versus 80% and 10% of those treated. Control group patients average 7 years younger than treatment group patients (36 versus 43) in 2008 — largely because no treatment group practitioners were pediatricians.

Table 1.

Summary Statistics for 2008 and 2010, with Changing PCP Assignment, and Including Entry and Exit

Columns, left to right: 2008 Treatment; 2008 Control, unadjusted; 2008 Control, propensity-weighted; 2010 Treatment; 2010 Control, unadjusted; 2010 Control, propensity-weighted.
No of Patients: 11,686 217,276 217,276 10,734 217,957 217,957
Payer Type
  Medicare (%) 8 8 8 10 9 10
  Medicaid (%) 8 18 8 10 23 10
  Privately Insured (%) 84 74 83 80 68 79
Plan Type
  Health Maintenance Organization (HMO) (%) 76 79 76 71 77 72
  Point Of Service (POS) (%) 10 8 10 2 2 2
  Preferred Provider Organization (PPO) (%) 4 4 4 7 6 7
  Exclusive Provider Organization (EPO) (%) 10 10 10 20 15 20
Insurance Type
  Administrative Services Only (ASO) (%) 17 14 17 11 10 11
Gender
  Female (%) 55 55 55 56 55 56
Eligibility Months
  Mean 11.4 11.2 11.4 11.4 11.2 11.4
  Standard Deviation 2.0 2.2 0.5 1.94 2.2 0.4
  Median 12 12 12 12 12 12
Age as of Dec. 31
  Mean 42.7 36.0 42.5 43.9 36.8 43.6
  Standard Deviation 18.64 22.3 4.7 19.21 22.7 4.6
  Median 45 37 46 46 38 47
Lagged Prospective Risk Score
  Mean 1.54 1.56 1.81 1.72 1.76 2.06
  Standard Deviation 7.74 9.51 0.71 9.30 10.51 0.76
  Median 0.93 0.78 0.97 1.01 0.86 1.09
Total Medical Spending
  Mean 3,022 2,895 3,356 3,408 3,337 3,883
  Standard Deviation 32,859 42,777 10,633 33,378 44,088 10,671
  Median 926 744 895 1,026 864 1,049

Note: Eligibility months is the number of enrolled months in a plan offered by Capital District Physicians' Health Plan (CDPHP). Physician assignment is based on Massachusetts Health Quality Project (MHQP) Primary Care Practitioner (PCP) assignment algorithm.

We used propensity score weights to address these imbalances. That is, we first modeled the probability that a person is “treated,”21 then weighted each observation by that probability, using the proportional “overlap weight”22 from a logistic model with age, gender, plan type, and payer type as predictors. Following Song et al20, we weighted separately within each study year to achieve comparable (propensity-weighted) mean values of all predictor variables in the control and treatment groups each year (Table 1, first and third columns). We also follow the Medicare program’s method of annualizing spending, weighting each person-year observation by the fraction of the year he or she is eligible.23
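A minimal sketch of overlap weighting, assuming a logistic propensity model with synthetic covariates (the study's model also included plan and payer type): each control is weighted by its estimated treatment probability p and each treated patient by 1 − p, which exactly balances the weighted covariate means when the model is fit by maximum likelihood.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4000
age = rng.normal(40, 15, n)
female = rng.random(n) < 0.55
# Synthetic assignment: treated patients skew older, as in the pilot
p_true = 1 / (1 + np.exp(-(-2.5 + 0.03 * age)))
treated = rng.random(n) < p_true

X = np.column_stack([np.ones(n), age, female.astype(float)])

# Fit the logistic propensity model by Newton-Raphson (IRLS)
b = np.zeros(X.shape[1])
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ b))
    grad = X.T @ (treated - p)
    hess = X.T @ (X * (p * (1 - p))[:, None])
    b += np.linalg.solve(hess, grad)

p = 1 / (1 + np.exp(-X @ b))
# Overlap weights: treated weighted by 1 - p, controls by p
w = np.where(treated, 1 - p, p)

def wmean(x, wt):
    return np.sum(wt * x) / np.sum(wt)

gap_raw = age[treated].mean() - age[~treated].mean()
gap_wtd = wmean(age[treated], w[treated]) - wmean(age[~treated], w[~treated])
```

After weighting, the treated-control gap in mean age collapses to numerical zero, while the unweighted gap remains several years, mirroring the Table 1 pattern in which propensity weighting aligns the control group's mean age with the treatment group's.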

Plan members could receive care from any practice at any time, potentially changing their ex post practice assignment. Indeed, 2,889 members had their assigned PCP change between control and treatment practices during 2008–2010. Since switching could be endogenous to medical home implementation, our primary analysis assigned each person to his or her 2008 practice and omitted enrollees who entered or exited; an online supplement also reports results from other assignment and selection methods. As a sensitivity analysis, we also present results using an alternative propensity scoring approach.

RESULTS

We first examined changes over two years in the (raw) sample means of spending in treatment and control groups, adjusting only for fractional-year eligibility (these data are in the third-from-bottom row of Table 1). Average cost increased by $442 from 2008 to 2010 for controls, versus $386 (that is, $56 less growth) for those treated. Table 1 shows both the changing composition and spending of treatment and control groups. Findings from 2008 to 2009 are analogous: in the pilot’s first year, treatment group average costs grew by $48 less than in the control group. Since these estimates do not control for changes in insurance or in who is assigned to the treatment practices, we next used regression analysis with patient-level fixed effects, multiple plan-type controls, and propensity score weighting.
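This unadjusted two-year comparison can be reproduced directly from the Table 1 mean spending figures:

```python
# Mean total medical spending from Table 1 (annualized, eligibility-weighted)
control_2008, control_2010 = 2895, 3337   # unadjusted controls
treat_2008, treat_2010 = 3022, 3408       # treatment group

control_growth = control_2010 - control_2008   # spending growth, controls
treat_growth = treat_2010 - treat_2008         # spending growth, treated
relative_savings = control_growth - treat_growth
```

The $56 relative savings is descriptive only; the regression models that follow adjust for composition changes that this raw comparison ignores.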

Table 2 summarizes findings from two fixed-effects, difference-in-difference models estimated by weighted least squares; one uses fixed- and the other changing-PCP assignment. Each person-year observation during 2008, 2009, or 2010 is weighted by the individual’s eligible months during that year multiplied by his or her propensity score, with standard errors clustered by practice. The models differ in how they assign a patient-year to the treatment or control group. Our preferred model (first two columns) uses Fixed 2008 PCP Assignment (practice assignment as of 2008, prior to implementation) and excludes new entrants and exiters. Thus, it “holds treatment practices accountable” for all care received by their 2008 patients, even when later care is delivered by a non-PCMH practice; a PCMH does not “get credit” for lowering costs by shedding difficult patients or selectively recruiting healthy ones. With this specification, estimated savings were $198 in the first 12 months (p=0.20) and $289 in the second year (p=0.15).

Table 2.

Difference-in-Difference Regressions Using Individual Fixed Effects

Dependent Variable: Annualized Medical Spending

Columns, left to right: (1) Fixed 2008 PCP Assignment, Excluding Entry and Exit (coefficient, p-value); (2) Changing PCP Assignment, Including Entry and Exit (coefficient, p-value).
Treatment × Year2009 −198 0.20 −186 0.31
Treatment × Year2010 −289 0.15 −297 0.24
Medicare 345 0.68 555 0.54
Medicaid −258 0.33 151 0.74
Health Maintenance Organization (HMO) 114 0.38 18 0.95
Preferred Provider Organization (PPO) −1141 0.00 −657 0.01
Point of Service (POS) −1556 0.00 −950 0.01
Administrative Services Only (ASO) 1993 0.01 926 0.01
Year2009 425 0.00 622 0.00
Year2010 865 0.00 1081 0.00
Treatment 730 0.02

No. of Patient Years 410,334 692,270
No. of Clusters (Practices) 941 1,122
R-Squared 0.0020 0.0025
Dependent Mean 3,428 3,413

Notes: Both models are weighted by months of eligibility and propensity scores. Standard errors are clustered for practice IDs. Omitted group is year=2008, private insurance, fee-for-service or exclusive provider organization, and non-ASO. Physician assignment is based on Massachusetts Health Quality Project (MHQP) Primary Care Practitioner (PCP) assignment algorithm.

The second model in Table 2 uses Changing PCP Assignment. Although patients can enter, exit, or be reassigned to a new practice each year under this specification, point estimates of the average treatment effect remain similar in magnitude (−$186 in year 1 and −$297 in year 2) and statistically insignificant. A range of model variants, included in the supplementary material, produces similar findings: similarly large, non-statistically-significant point estimates for the treatment effect in each year.

Although total estimated yearly cost savings are not statistically significant, savings in some subsets of spending are. Retaining our Fixed 2008 PCP Assignment method, Table 3 presents Year 1 and Year 2 treatment effect estimates from sixteen alternative specifications. Estimated savings change little when we omit controls, restrict to primary care specialties only, or restrict to non-pediatric primary care specialties. No statistically significant savings appear by payer type, although there is a hint of smaller savings for Medicaid enrollees relative to Medicare and privately insured enrollees. Estimated emergency department treatment effects are statistically significant (−11.0%, p=0.01) in Year 1 and remain meaningful (−9.6%, p=0.12) in Year 2. Among six outpatient service components, statistically significant reductions were found for evaluation and management visits (−3.4%, p=0.00 in Year 1; −6.5%, p=0.00 in Year 2) and laboratory tests (−16.5%, p=0.02 in Year 2).

Table 3.

Treatment Effect Sensitivity Analysis Using Alternative Samples and Dependent Variables, with Fixed 2008 PCP Assignment, and Excluding Entry and Exit

Columns, left to right: Model Name; No. of Patient Years; Dependent Mean; R-square; Year 1 Effects (2009): Coefficient, Standard Error, % Effect 2009, p-value; Year 2 Effects (2010): Coefficient, Standard Error, % Effect 2010, p-value.
I. Models of Total Medical Spending, All Payers
  1. All Controls 410,334 3,428 0.0020 −198 156 −5.8% 0.20 −289 199 −8.4% 0.15
  2. Basic Difference-in-Difference 410,334 3,428 0.0018 −198 153 −5.8% 0.20 −298 202 −8.7% 0.14

II. By Physician Specialty
  3. Primary Care Specialties Only 380,320 3,495 0.0021 −184 156 −5.3% 0.24 −270 199 −7.7% 0.17
  4. Non-Pediatric Primary Care Specialties 263,132 3,563 0.0023 −249 163 −7.0% 0.13 −353 212 −9.9% 0.10

III. By Payer
  5. Medicare Only 47,660 6,301 0.0041 −378 400 −6.0% 0.34 −244 391 −3.9% 0.53
  6. Medicaid Only 40,074 1,785 0.0009 −95 226 −5.3% 0.68 72 310 4.1% 0.82
  7. Privately Insured Only 322,600 3,104 0.0019 −178 141 −5.7% 0.21 −311 197 −10.0% 0.12

IV. By Place of Service
  8. Inpatient Care 410,334 912 0.0005 −66 65 −7.2% 0.31 −102 90 −11.1% 0.26
  9. Emergency Care 410,334 92 0.0002 −10 4 −11.0% 0.01 −9 6 −9.6% 0.12
  10. Outpatient Care, and Other Care 410,334 2,391 0.0039 −124 96 −5.2% 0.19 −167 112 −7.0% 0.14

V. By Type of Service
  11. Evaluation & Management 410,334 704 0.0106 −24 8 −3.4% 0.00 −46 9 −6.5% 0.00
  12. Procedures 410,334 841 0.0008 −63 40 −7.5% 0.12 −77 49 −9.1% 0.12
  13. Imaging 410,334 376 0.0004 −19 11 −5.2% 0.07 −15 12 −4.0% 0.20
  14. Tests 410,334 175 0.0008 −14 10 −8.2% 0.15 −29 13 −16.5% 0.02
  15. Durable Medical Equipment 410,334 113 0.0001 −5 9 −4.0% 0.60 1 31 1.0% 0.97
  16. Others 410,334 238 0.0012 −24 18 −10.2% 0.18 −4 37 −1.7% 0.91

Notes: Each row is from a different regression. All models weighted by months of eligibility and propensity scores; standard errors are clustered for practice IDs. All models include individual fixed effects. In addition, models 5–7 include fixed effects for insurance type and plan type, while the remaining models include fixed effects for insurance type, plan type and payer type.

Clinical categories designated according to the Berenson-Eggers Type of Service (BETOS) classification, version 2012, applied to professional claims only. Physician assignment is based on Massachusetts Health Quality Project (MHQP) Primary Care Practitioner (PCP) assignment algorithm.

We also estimated models with CDPHP’s patient assignment algorithm, which uses the plan’s reported PCP assignment when available before applying an algorithm that favors primary care specialties over non-primary care ones. Those results (SDC, Appendix B) also point toward savings, but less strongly than those shown here.

Treatment and control practice samples differ in average risk scores, calculated by applying Verisk Health’s DxCG prospective risk adjustment model to prior-year data (Table 1). Mean risk scores start lower and grow less rapidly for treatment versus control patients, particularly after propensity score weighting. That is, the claims data suggest that treatment group patients start healthier and accumulate illnesses less rapidly than these controls.

To estimate savings while holding “health status” (risk scores) constant, we added the diagnosis-based prospective risk score from the prior year to the propensity score predictors used elsewhere in this paper. This propensity model provides weights for the controls that additionally adjust for observed differences in risk between treated and control patients. Detailed findings from replicating the regressions of Table 3 (but using the new weights) are in Table C-1 of the online SDC; this specification generally finds larger effects and improved statistical significance. With this model, for example, estimated savings in Years 1 and 2 grow to $286 (8.8%, p=0.06) and $318 (9.8%, p=0.11); other estimates also become larger, and p-values for savings drop toward, and below, the 0.05 level for Medicare beneficiaries, inpatient care, and imaging. One concern with these analyses is that apparent differences in health status between treatment and control practices could be endogenous.24 For example, a PCMH might generate fewer nuisance visits (and illness coding) of the type that FFS billing encourages; conversely, a PCMH might proactively identify diseases that remain “hidden” in less intensively managed patients. Because of these concerns about the comparability of coding for treatment and control patients, we have highlighted the Table 3 difference-in-differences estimates, which address risk without measuring it by using each person as his or her own control.

DISCUSSION

We conducted many analyses, varying the sample, the duration of eligibility required for inclusion, practice assignment algorithms, fixed- versus variable-assignment rules, the use of explicit measures of patient risk, and total spending versus several of its parts as the outcome. While virtually all estimates of all outcomes showed savings, the amount varied considerably and almost never achieved significance at the 0.05 level.

Our most credible model (with individual fixed effects and multiple control variables in the continuously enrolled sample) suggests reductions in health care spending growth on the order of 6% to 8% and large, statistically-significant percentage reductions in emergency department (11.0%) and laboratory use (16.5%) after changing incentives for primary care providers in these newly created PCMHs.

Such reductions in total health care spending, if real, would have covered CDPHP’s one-time $35,000 stipend to encourage transformation and annual performance bonuses of up to $50,000 per physician,1 although transformation costs were subsidized by CDPHP and its implementation partners, TransforMed and Verisk Health, making full costs hard to calculate.1 Cost analyses should be revisited in a greatly expanded set of “treatment” practices.

This study has weaknesses. It describes only three self-selected practices during an initial two years of practice transformation and payment reform, with an evolving bonus system. Furthermore, even extensive modeling of limited data is no substitute for a larger sample; the very existence of savings remains a tentative finding.

Still, the apparent PCMH effects are large, and patterns of suggested savings in inpatient services and selected outpatient services are plausible. As CDPHP expands its medical home pilot, its effect on clinical quality, patient satisfaction and costs will remain of keen interest.

Supplementary Material


Acknowledgements

We are grateful to The Commonwealth Fund for financial support for this research, to Verisk Health, Inc. for programming support and access to preliminary risk adjustment models, and to CDPHP for allowing this empirical analysis of their data at Verisk Health. Any errors or omissions remain our own.

Financial Disclosures: Sonal Vats received research support from The Commonwealth Fund and Verisk Health through Boston University for her analysis of this data. Ash and Ellis are senior scientists at Verisk Health, Inc. where they help develop health-based predictive models; neither has any ownership or other financial relationship with Verisk Health, Inc.

Footnotes

Publisher's Disclaimer: This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final citable form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.

References

1. Feder JL. A Health Plan Spurs Transformation Of Primary Care Practices Into Better-Paid Medical Homes. Health Affairs. 2011;30:397–399. doi:10.1377/hlthaff.2011.0112.
2. Center for Medicare and Medicaid Innovation. Comprehensive Primary Care Initiative. 2011. Available at: http://www.innovations.cms.gov/initiatives/Comprehensive-Primary-Care-Initiative/index.html. Accessed November 21, 2012.
3. McMullin ST, Lonergan TP, Rynearson CS, et al. Impact Of An Evidence Based Computerized Decision Support System On Primary Care Prescription Costs. Annals of Family Medicine. 2004;2:494–498. doi:10.1370/afm.233.
4. McMullin ST, Lonergan TP, Rynearson CS. Twelve-Month Drug Cost Savings Related To Use Of An Electronic Prescribing System With Integrated Decision Support In Primary Care. Journal of Managed Care Pharmacy. 2005;11:322–324. doi:10.18553/jmcp.2005.11.4.322.
5. Patient Centered Primary Care Collaborative. Joint Principles Of The Patient Centered Medical Home. 2007. Available at: http://www.pcpcc.net/node/14. Accessed November 21, 2012.
6. Sia C, Tonniges TF, Osterhus E, et al. History Of The Medical Home Concept. Pediatrics. 2004;113:1473–1478.
7. Saultz JW, Lochner J. Interpersonal Continuity Of Care And Care Outcomes: A Critical Review. Annals of Family Medicine. 2005;3:159–166. doi:10.1370/afm.285.
8. Centers for Medicare and Medicaid Services. Design Of The CMS Medical Home Demonstration. October 3, 2008. Available at: http://www.michigan.gov/documents/mdch/shMedHome_DesignReport_346581_7.pdf. Accessed November 21, 2012.
9. Grumbach K, Bodenheimer T, Grundy P. Outcomes Of Implementing Patient-Centered Medical Home Interventions: A Review Of The Evidence On Quality, Access And Costs From Recent Prospective Evaluation Studies. August 2009. Available at: http://www.pcpcc.net/files/evidenceWEB%20FINAL%2010.16.09_1.pdf. Accessed November 21, 2012.
10. Grumbach K, Grundy P. Outcomes Of Implementing Patient Centered Medical Home Interventions: A Review Of The Evidence From Prospective Evaluation Studies In The United States. Patient-Centered Primary Care Collaborative. November 16, 2010. Available at: http://www.pcpcc.net/files/evidence_outcomes_in_pcmh.pdf. Accessed November 21, 2012.
11. Reid RJ, Fishman PA, Yu O, et al. Patient-Centered Medical Home Demonstration: A Prospective, Quasi-Experimental, Before And After Evaluation. Am J Manag Care. 2009;15:e71–e87.
12. Reid RJ, Coleman K, Johnson EA, et al. The Group Health Medical Home At Year Two: Cost Savings, Higher Patient Satisfaction, And Less Burnout For Providers. Health Aff. 2010;29:835–843. doi:10.1377/hlthaff.2010.0158.
13. Gilfillan RJ, Tomcavage J, Rosenthal MB, et al. Value And The Medical Home: Effects Of Transformed Primary Care. Am J Manag Care. 2010;16:607–614.
14. Nielsen M, Langner B, Zema C, et al. Benefits Of Implementing The Primary Care Patient-Centered Medical Home: A Review Of Cost & Quality Results, 2012. Available at: http://www.pcpcc.net/files/benefits_of_implementing_the_primary_care_pcmh.pdf. Accessed November 1, 2012.
15. Patient-Centered Primary Care Collaborative. Pilots & Demonstrations (Self-Reported). Available at: http://www.pcpcc.net/pcpcc-pilot-projects. Accessed October 2012.
16. Bitton A, Martin C, Landon B. A Nationwide Survey Of Patient Centered Medical Home Demonstration Projects. Journal of General Internal Medicine. 2010;25:584–592. doi:10.1007/s11606-010-1262-8.
17. Goroll AH, Berenson RA, Schoenbaum SC, et al. Fundamental Reform Of Payment For Adult Primary Care: Comprehensive Payment For Comprehensive Care. J Gen Intern Med. 2007;22:410–415. doi:10.1007/s11606-006-0083-2.
18. Ash AS, Ellis RP. Risk-Adjusted Payment And Performance Assessment For Primary Care. Medical Care. 2012;50:643–653. doi:10.1097/MLR.0b013e3182549c74.
19. Ellis RP, Ash AS. Payments In Support Of Effective Primary Care For Chronic Conditions. Nordic Economic Policy Review. 2012;2.
20. Song Z, Safran DG, Landon BE, et al. Health Care Spending And Quality In Year 1 Of The Alternative Quality Contract. N Engl J Med. 2011;365:909–918. doi:10.1056/NEJMsa1101416.
21. Basu A, Polsky D, Manning W. Estimating Treatment Effects On Healthcare Costs Under Exogeneity: Is There A “Magic Bullet”? Health Serv Outcomes Res Methodol. 2011;11(1):1–26. doi:10.1007/s10742-011-0072-8.
22. Li F, Zaslavsky AM, Landrum MB. Propensity Score Analysis With Hierarchical Data. In: Proceedings of the Joint Statistical Meetings, Section on Health Policy Statistics. American Statistical Association; 2007:2474–2481.
23. Pope GC, Kautter J, Ellis RP, et al. Risk Adjustment Of Medicare Capitation Payments Using The CMS-HCC Model. Health Care Financ Rev. 2004;25:119–141.
24. Wennberg JE, Staiger DO, Sharp SM, et al. Observational Intensity Bias Associated With Illness Adjustment: Cross Sectional Analysis Of Insurance Claims. BMJ. 2013;346. doi:10.1136/bmj.f549.
