Abstract
Background
Broad adoption of electronic health records (EHRs) is a potential strategy for curbing healthcare cost growth, which is particularly vital for Medicaid. Despite limited evidence for EHR-related cost savings, the 2009 HITECH Act included incentives for providers to become meaningful users of EHRs. We evaluated a large Massachusetts EHR pilot to obtain early insight into the potential for the national strategy to reduce short-run healthcare costs in the Medicaid population.
Methods
We calculated monthly ambulatory cost and visit measures from Medicaid claims data for beneficiaries receiving the majority of their care in the three Massachusetts eHealth Collaborative (MAeHC) pilot communities or in six matched control communities. Using a difference-in-differences of slope analysis, we assessed whether changes in cost and visit trajectories from the pre-implementation period to the post-implementation period differed between intervention and control community members.
Results
We found evidence that EHR adoption impacted ambulatory medical cost in two of the three communities, but the effects were in opposite directions. In one intervention community, ambulatory medical costs increased more slowly than in its control communities from the pre- to the post-period (difference-in-differences = -1.98 percentage points, p<0.001; PMPM savings of $41.60). In contrast, for a second pilot community, ambulatory medical costs increased more slowly in the control communities (difference-in-differences = 2.56 percentage points, p=0.005; PMPM cost increase of $43.34).
Conclusions
As a stand-alone approach, adoption of commercially-available EHRs in community practices did not consistently impact Medicaid costs in the short-run. This suggests that future meaningful use criteria may need to specifically target cost savings and coordinate with payment reform efforts.
Keywords: Health Care Costs, Health Policy / Politics / Law / Regulation, Information Technology in Health, Medicaid
Introduction
Our nation is in the midst of an unprecedented investment in IT to support healthcare delivery. The centerpiece of the 2009 Health Information Technology for Economic and Clinical Health (HITECH) Act is $30 billion in incentives for doctors and hospitals to become “meaningful users” of electronic health records (EHRs; Blumenthal & Tavenner, 2010; DesRoches et al., 2008). The hope is that EHRs will enable better management of information in order to improve the quality and reduce the cost of health care. While an array of studies points to the quality benefits enabled by EHRs, we have little empirical data on how EHRs impact healthcare costs (Chaudhry et al., 2006; Buntin, Burke, Hoaglin, & Blumenthal, 2011). There have been several attempts to use cost-benefit models to quantify potential cost savings, which are assumed to derive from efficiencies such as reductions in redundant testing. These estimates range widely, from approximately $3 billion to $80 billion in annual savings, in part because they rely on assumptions for which there are sparse supporting data (Hillestad et al., 2005; Keehan et al., 2011; Congressional Budget Office, 2008). Estimates of potential savings also rely on the assumption that EHRs will be used to reduce inefficiencies and waste where they exist. However, EHRs could be used in ways that increase cost, for example by improving charge capture (Cheriff, Kapur, Qiu, & Cole, 2010) or increasing diagnostic testing (McCormick, Bor, Woolhandler, & Himmelstein, 2012).
It is therefore critical to empirically evaluate the impact of commonly-available EHRs on utilization and cost in the community setting, because this reflects the experience of the majority of providers in the U.S. (Burt & Sisk, 2005). It is particularly important and informative to assess the impact among Medicaid beneficiaries, because this population has much to gain or lose. Since Medicaid beneficiaries have a disproportionate share of complex, uncoordinated care, they should benefit from improved information management enabled by EHRs. If, however, EHRs fail to reduce healthcare costs or in fact increase them, this could pose a serious threat to already strained Medicaid budgets.
We therefore conducted a study to determine whether one of the first large-scale deployments of community-wide ambulatory EHRs in the country impacted healthcare costs for Medicaid beneficiaries in those communities. We compared ambulatory costs and numbers of visits for beneficiaries who received the majority of their care from providers who adopted EHRs in the pilot communities with those for beneficiaries in matched control communities that applied, but were not selected, to be part of the pilot. We used a difference-in-differences of slope approach, assessing whether the intervention changed the trajectory of costs and visits from a 9-month pre-adoption period to an 18-month post-adoption period relative to control communities. Since intervention providers used EHRs in ways akin to the initial meaningful use criteria, our results may offer early insight into how the national strategy will impact short-run healthcare costs in the Medicaid population.
Methods
Setting and Intervention
In 2004, the Massachusetts eHealth Collaborative (MAeHC) was established to oversee a large-scale pilot of EHRs in the ambulatory setting to assess their costs and benefits (Mostashari, Tripathi, & Kendall, 2009). Any Massachusetts community was eligible to apply to serve as a pilot site, and three communities were selected from the more than 30 communities that applied. Communities were purposively chosen to provide heterogeneous populations for study. Each community evaluated and approved up to four commercially-available EHR systems from which participating practices could choose. Functionality varied by system and practice, but most providers used EHRs in ways that were consistent with the priorities of stage 1 meaningful use, capturing core clinical data and entering medication orders electronically (Blumenthal & Tavenner, 2010; National Committee on Vital and Health Statistics, 2009; Appendix Exhibit A-1). System costs and implementation support were almost fully covered by the MAeHC.
Control Community Selection and Claims Data
From the pool of applicant communities we matched two control communities to each pilot community. Selecting controls from this pool ensured a shared interest in EHR adoption and associated unobserved characteristics. Matching was performed using a modified cluster analysis based on an array of community characteristics (see Technical Appendix for details). To help ensure our ability to detect an intervention effect, we drew on statewide EHR adoption data to confirm that control communities did not have high baseline levels of EHR adoption (Simon et al., 2007).
We then compiled a list of National Provider Identifiers (NPIs) for all providers and facilities with an NPI-associated business practice address in an intervention (n=3) or control (n=6) community ZIP code. MassHealth provided complete ambulatory medical claims histories for the analytic period (June 2005 to June 2009) for all beneficiaries with an ambulatory visit to a provider on the NPI list, a designated primary care provider on the NPI list, or a home address in a relevant ZIP code. To identify the subset of beneficiaries who had regular contact with intervention providers, and a similar subset of control community beneficiaries, we implemented a second round of matching, described in the next section.
Provider & Beneficiary Selection
Not all providers participated in the pilot in each intervention community, and participation did not adhere to a set of criteria that could be replicated in control communities. We therefore inferred provider selection criteria for each intervention community using three sources: (1) the MassHealth provider directory, which listed provider specializations; (2) the NPI database, which reported gender and proprietorship status; and (3) the FOLIO database, which contained additional specialty data and organizational affiliations (e.g., HMOs). We first eliminated providers from control communities based on criteria that perfectly distinguished adopters from non-adopters in the intervention community. Second, we identified provider characteristics that were disproportionately common among adopters relative to non-adopters in the intervention community, and used logistic regression to generate a probability score for adoption. We then calculated probability scores for control community providers and selected those with the highest scores, taking the same percentage of providers from the control communities as had adopted in the matched intervention community.
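As an illustration of this step, the sketch below fits an adoption model among intervention community providers and uses it to score and select control community providers. The data frame and predictor names (specialty, male, solo_practice) are hypothetical stand-ins for the characteristics drawn from the MassHealth directory, the NPI database, and FOLIO; this is not the authors' code.

```python
import pandas as pd
import statsmodels.formula.api as smf


def select_control_providers(intervention: pd.DataFrame, control: pd.DataFrame) -> pd.DataFrame:
    """Score control-community providers by their predicted probability of EHR
    adoption, estimated among intervention-community providers, and keep the
    same share of control providers as actually adopted in the intervention community."""
    # Fit a logistic regression for adoption among intervention-community providers.
    model = smf.logit("adopted ~ C(specialty) + male + solo_practice", data=intervention).fit(disp=0)

    # Apply the fitted model to control-community providers to obtain probability scores.
    control = control.assign(p_adopt=model.predict(control))

    # Select the same fraction of control providers as adopted in the intervention community.
    adoption_rate = intervention["adopted"].mean()
    n_keep = int(round(adoption_rate * len(control)))
    return control.nlargest(n_keep, "p_adopt")
```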
Using these provider assignments, we then assigned beneficiaries to one of the nine communities if they received the majority of their ambulatory care from one or more providers in a given community. We defined a majority of ambulatory care as greater than 50% of ambulatory spending (i.e., dollar amount) and greater than 50% of ambulatory claims (i.e., number of claims, regardless of amount) (Steinwachs et al., 1998; Menec, Black, Roos, Bogdanovic, & Reid, 2000; see Technical Appendix for additional detail); a sketch of this assignment rule follows Exhibit 1. Exhibit 1 reports key beneficiary demographics for each intervention community and its matched control communities.
Exhibit 1. Beneficiary Characteristics.
| | Intervention | Control |
|---|---|---|
| GROUP 1 | ||
| Gender (Male) | 58% | 58% |
| Mean Age | 37.9 | 42.2 |
| One or more comorbidities | 51% | 53% |
| Plan Type | ||
| Fee for Service | 36% | 38% |
| Managed Care | 11% | 10% |
| Primary Care Clinician | 52% | 51% |
| GROUP 2 | ||
| Gender (Male) | 56% | 52% |
| Mean Age | 37.7 | 39.5 |
| One or more comorbidities | 52% | 49% |
| Plan Type | ||
| Fee for Service | 38% | 39% |
| Managed Care | 15% | 12% |
| Primary Care Clinician | 46% | 49% |
| GROUP 3 | ||
| Gender (Male) | 53% | 51% |
| Mean Age | 38.1 | 33.9 |
| One or more comorbidities | 47% | 38% |
| Plan Type | ||
| Fee for Service | 36% | 37% |
| Managed Care | 17% | 16% |
| Primary Care Clinician | 47% | 47% |
SOURCE: Authors' analysis.
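As referenced above, the following is a minimal sketch of the majority-of-care assignment rule (more than 50% of ambulatory spending and more than 50% of ambulatory claims delivered by providers assigned to a single community). The claims table and column names are hypothetical.

```python
import pandas as pd


def assign_beneficiaries(claims: pd.DataFrame) -> pd.Series:
    """Assign each beneficiary to the community (if any) that accounts for >50%
    of both their ambulatory spending and their ambulatory claim count.
    `claims` is assumed to have columns: beneficiary_id, community, paid_amount."""
    per_community = claims.groupby(["beneficiary_id", "community"]).agg(
        spend=("paid_amount", "sum"), n_claims=("paid_amount", "size")
    )
    # Each beneficiary's community-level share of total spending and claims.
    totals = per_community.groupby("beneficiary_id").transform("sum")
    shares = per_community / totals

    # Keep beneficiary-community pairs where both shares exceed 50%;
    # at most one community can satisfy both criteria for a given beneficiary.
    majority = shares[(shares["spend"] > 0.5) & (shares["n_claims"] > 0.5)]
    return majority.reset_index().set_index("beneficiary_id")["community"]
```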
Outcome Measures
While we assigned beneficiaries to an intervention or control community based on where they received the majority of their care, we sought to examine the impact of the pilot on all ambulatory medical costs and visits for these beneficiaries, whether these were charged by selected providers or by other providers treating the beneficiary. This approach captures spillover effects that may be realized if a cohort of providers simultaneously adopts EHRs and likely mirrors the early experience under meaningful use in which beneficiaries will receive care from multiple providers, only some of whom will have adopted EHRs.
Therefore, for each beneficiary who received a community assignment, we used their complete ambulatory claims to create monthly cost and utilization measures that we hypothesized would be impacted by EHR adoption. We began by assessing ambulatory medical cost and then examined two components of ambulatory medical cost for which there is emerging evidence that EHRs may be influential (McCormick et al., 2012): laboratory cost and radiology cost. We also assessed ambulatory visits as well as the subset of ambulatory visits for evaluation and management that have been shown to be sensitive to EHR use (Garrido, Jamieson, Zhou, Wiesenthal, & Liang, 2005). Cost measures relied on standardized costs in order to capture changes in utilization, and not changes in reimbursement rates that could vary by community. All measures were calculated on a per member per month basis (PMPM) for months in which the beneficiary was insured for the entire month. We include summary statistics and additional detail on our measures in the Technical Appendix.
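As a concrete illustration, the sketch below shows one way the monthly per member per month (PMPM) measures could be built from claim-level data. Column names are hypothetical, standardized costs stand in for paid amounts as described in the text, and the table of fully-insured member-months is assumed to be available.

```python
import pandas as pd


def monthly_pmpm(claims: pd.DataFrame, insured_months: pd.DataFrame) -> pd.DataFrame:
    """Aggregate ambulatory claims to beneficiary-month cost and visit measures.
    `claims`: beneficiary_id, service_date (datetime), standardized_cost, visit_id
    `insured_months`: beneficiary_id, month (only months with full-month coverage)."""
    claims = claims.assign(month=claims["service_date"].dt.to_period("M"))
    by_month = claims.groupby(["beneficiary_id", "month"]).agg(
        ambulatory_cost=("standardized_cost", "sum"),
        ambulatory_visits=("visit_id", "nunique"),
    ).reset_index()

    # Keep only fully-insured member-months; insured months with no claims count as zero.
    out = insured_months.merge(by_month, on=["beneficiary_id", "month"], how="left")
    return out.fillna({"ambulatory_cost": 0.0, "ambulatory_visits": 0})
```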
Analytic Approach & Models
We used a longitudinal linear regression model to assess changes over time for each outcome measure. Using monthly beneficiary-level observations to establish a baseline trend in the pre-implementation period (June 2005–February 2006), the model captures changes in trend during the (1) implementation period (March 2006–December 2007) and (2) post-implementation period after all practices had EHRs in place (January 2008–June 2009). For each triad of intervention and two matched control communities (referred to as “group 1,” “group 2,” and “group 3”) we assessed whether trend changes in the pre-to-post period, excluding the implementation period, were significantly different for intervention compared to pooled control community members (i.e., a difference-in-differences of slope approach). This approach helps to ensure that unobserved variables that remain constant over time will not bias the estimated treatment effect, and does not require either that intervention and control beneficiaries start at the same average cost or have the same slope during the pre-intervention period. The test of savings from the pilot is whether the slope decreases more (or does not increase as quickly) for intervention beneficiaries compared to control beneficiaries.
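In illustrative notation (our reconstruction of the design described above, not the authors' published specification), the log-scale slope model and the contrast of interest can be written as

$$
\log(y_{it}) = \beta_0 + \alpha I_i + \sum_{p \in \{\mathrm{pre},\,\mathrm{impl},\,\mathrm{post}\}} (\beta_p + \gamma_p I_i)\, t_{p,it} + \mathbf{x}_{it}'\boldsymbol{\delta} + u_i + \varepsilon_{it},
$$

where $y_{it}$ is the monthly outcome for beneficiary $i$, $t_{p,it}$ counts months elapsed within period $p$, $I_i$ indicates an intervention community, $\mathbf{x}_{it}$ collects beneficiary covariates and seasonal terms, and $u_i$ is a beneficiary random effect. The control slope in period $p$ is $\beta_p$ and the intervention slope is $\beta_p + \gamma_p$, so the pre-to-post difference-in-differences of slope is $\gamma_{\mathrm{post}} - \gamma_{\mathrm{pre}}$; savings correspond to a negative value.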
Due to extreme values in our outcome measures that skewed the distributions, we log-transformed the outcomes. The primary predictors in the model were the effect of time in the pre-implementation period, the effect of time in the implementation period, and the effect of time in the post-implementation period. To ensure that differences in case-mix between intervention and control communities, as well as changes in case-mix over time, did not confound the analyses, we included variables that identified whether a beneficiary had each comorbidity included in the Charlson Index. We also included beneficiary level covariates to adjust for changes in mix of age, gender, and type of coverage (e.g., HMO). We used a mixed model that enabled us to include random effects to adjust for correlation in beneficiary utilization patterns over time. Finally, we adjusted for seasonal trends.
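A minimal sketch of how such a specification could be estimated with a beneficiary random intercept follows, assuming a beneficiary-month data frame with hypothetical variable names (this is not the authors' code, and the log1p transform for zero-cost months is an assumption).

```python
import numpy as np
import statsmodels.formula.api as smf


def fit_slope_model(df):
    """Mixed-effects model on log-transformed monthly cost with period-specific
    time slopes, an intervention interaction on each slope, case-mix and
    demographic covariates, seasonal dummies, and a beneficiary random intercept.
    Assumed columns: cost_pmpm, t_pre, t_impl, t_post (months elapsed within each
    period, zero otherwise), intervention (0/1), Charlson indicator flags,
    age, male, plan_type, calendar_month, beneficiary_id."""
    df = df.assign(log_cost=np.log1p(df["cost_pmpm"]))  # guards against zero-cost months
    formula = (
        "log_cost ~ (t_pre + t_impl + t_post) * intervention"
        " + age + male + C(plan_type) + C(calendar_month)"
        " + chf + copd + diabetes"  # illustrative subset of Charlson comorbidity indicators
    )
    model = smf.mixedlm(formula, df, groups=df["beneficiary_id"])
    return model.fit()
```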
We exponentiated the coefficients from the log-cost models in order to interpret the results as the percentage change in costs or visits per month. We report the pre- and post-period slopes for intervention and control communities as well as the pre-to-post difference-in-differences of slope with 95% confidence intervals. We also project the financial impact for key results. To do this, we first calculated the average PMPM cost in the intervention community in the pre-period. We then projected the cost per beneficiary for an 18 month period (the duration of the post-period) under two scenarios: (1) cost increases based on the experience in the intervention community, and (2) cost increases based on the experience in the control communities. The difference between (1) and (2) reflects the financial impact of the intervention.
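To illustrate the projection arithmetic, the sketch below compounds monthly growth from the pre-period mean PMPM cost over 18 months. Under the assumption that scenario (1) uses the intervention community's post-period trend and scenario (2) uses its pre-period trend adjusted by the control communities' pre-to-post slope change, it closely reproduces the group 2 figures in Exhibit 3.

```python
def projected_cost(base_pmpm: float, monthly_growth: float, months: int = 18) -> float:
    """Sum of projected monthly costs, compounding growth from the base PMPM cost."""
    return sum(base_pmpm * (1 + monthly_growth) ** m for m in range(1, months + 1))


# Group 2 inputs taken from Exhibits 2 and 3 (monthly trends expressed as decimals).
base = 128.82                                 # pre-period mean PMPM cost, intervention community
intervention_post = 0.0311                    # intervention community post-period trend
counterfactual = 0.0617 - (0.0355 - 0.0259)   # pre-period trend less the control communities' slope change

cost_intervention = projected_cost(base, intervention_post)    # roughly $3,141 per beneficiary over 18 months
cost_counterfactual = projected_cost(base, counterfactual)     # roughly $3,889 per beneficiary over 18 months
savings_pmpm = (cost_counterfactual - cost_intervention) / 18  # roughly $41.60 saved per member per month
```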
Results
We began by assessing the impact of the pilot on total ambulatory medical cost. In all intervention and control communities, ambulatory medical costs increased more slowly in the post-period than in the pre-period. When we analyzed the difference-in-differences of slope, however, ambulatory medical costs in two of the three intervention communities increased more slowly than in their control communities. In group 2, the pre-to-post change in slope in the intervention community was -3.06 percentage points compared to -0.95 percentage points in the control communities, suggesting an intervention effect of -1.98 percentage points (p<0.001; Exhibit 2). In group 3, the magnitude of the effect was smaller and did not achieve statistical significance (difference-in-differences of -0.91; p=0.34). In group 1, we found the opposite effect: ambulatory medical costs slowed more in the control communities than in the intervention community (difference-in-differences of 2.56 percentage points; p=0.005).
Exhibit 2. Summarized Longitudinal Model Results: Cost and Utilization Trajectories in the Pre- and Post-Implementation Periods.
| | Group 1 | | | Group 2 | | | Group 3 | | |
|---|---|---|---|---|---|---|---|---|---|
| | Monthly Pre-Period Trend | Monthly Post-Period Trend | Diff.-in-Diff. [95% CI] | Monthly Pre-Period Trend | Monthly Post-Period Trend | Diff.-in-Diff. [95% CI] | Monthly Pre-Period Trend | Monthly Post-Period Trend | Diff.-in-Diff. [95% CI] |
| COSTS | ||||||||||
| Ambulatory Medical Cost: Total | ||||||||||
| Intervention | 4.39% | 3.48% | 2.56** | 6.17% | 3.11% | -1.98*** | 3.74% | 1.07% | -0.91 | |
| Controls | 6.58% | 3.01% | [0.76,4.40] | 3.55% | 2.59% | [-3.08,-0.86] | 2.83% | 1.10% | [-2.75,0.96] | |
| Ambulatory Medical Cost: Laboratory | ||||||||||
| Intervention | 0.20% | 0.43% | 0.66 | 0.23% | 0.45% | 0.10 | -0.06% | 0.21% | 0.18 | |
| Controls | 0.63% | 0.19% | [-0.08,1.41] | 0.11% | 0.23% | [-0.35,0.55] | 0.02% | 0.11% | [-0.41,0.78] | |
| Ambulatory Medical Cost: Radiology | ||||||||||
| Intervention | 1.24% | 1.23% | 0.12 | 1.04% | 1.11% | -0.54 | 1.52% | 0.50% | -1.63* | |
| Controls | 0.97% | 0.84% | [-1.11,1.36] | 0.13% | 0.74% | [-1.31,0.24] | -0.03% | 0.60% | [-2.88,-0.36] | |
| VISITS | ||||||||||
| Ambulatory Visits | ||||||||||
| Intervention | 0.32% | 0.79% | 0.84** | 1.06% | 0.66% | -0.60** | 0.62% | 0.10% | -0.18 | |
| Controls | 0.93% | 0.55% | [0.25,1.44] | 0.34% | 0.54% | [-0.97,-0.22] | 0.48% | 0.14% | [-0.78,0.42] | |
| Evaluation and Management Visits | ||||||||||
| Intervention | 0.18% | 0.79% | 0.51* | 0.73% | 0.58% | -0.49** | 0.41% | 0.11% | -0.30 | |
| Controls | 0.35% | 0.45% | [0.03,1.00] | 0.15% | 0.48% | [-0.80,-0.18] | 0.16% | 0.17% | [-0.80,0.19] | |
* p<0.05, ** p<0.01, *** p<0.001
NOTE. Negative difference-in-differences reflect savings in intervention relative to control communities.
SOURCE: Authors' analysis.
To better understand what may be driving the ambulatory medical cost results, we first examined the two components most widely expected to be impacted by EHR use: laboratory and radiology cost. We did not find evidence in any of the three groups that the trajectory of laboratory costs differed for intervention compared to control communities in the pre-to-post period (Exhibit 2). Similarly, we did not find evidence that the trajectory of radiology costs drove observed differences in ambulatory medical costs. We did, however, find that in group 3 the intervention was associated with radiology savings. The pre-to-post difference in cost trajectory for the intervention community was -1.02% compared to a change of 0.63% among controls (difference-in-differences of -1.64 percentage points; p=0.012). However, radiology costs were such a small fraction of total ambulatory cost that this result did not produce a statistically significant difference-in-differences for total ambulatory cost.
We found more compelling evidence that observed changes in ambulatory costs were driven, at least in part, by changes in visit rates. For ambulatory visits, we found a positive difference-in-differences in group 1 (diff-in-diff=0.84, p=0.005) and negative difference-in-differences in groups 2 and 3 (diff-in-diff=-0.60, p=0.002, and diff-in-diff=-0.18, p=0.55, respectively; Exhibit 2). Results for evaluation and management visits were similar.
When we projected the financial impact for the results in which the pilot was significantly associated with changes in ambulatory medical cost, we found that the potential cost savings in group 2 were almost equal to the potential cost increases in group 1 (Exhibit 3). We estimated PMPM savings of $41.60 in group 2 and PMPM cost increases of $43.34 in group 1, representing approximately 30% of mean PMPM ambulatory costs.
Exhibit 3. Financial Impact Projection.
| | Ambulatory Medical Cost Group 1 | Ambulatory Medical Cost Group 2 |
|---|---|---|
| Pre-period mean PMPM cost in intervention community | $133.70 | $128.82 |
| Projected cost per beneficiary after 18 months | | |
| (1) Based on the “post”-period increase in the intervention community | $3,383.75 | $3,140.59 |
| (2) Based on the “post”-period increase in the control communities | $2,603.58 | $3,889.31 |
| Savings (cost increase) per beneficiary over 18 months, (2) − (1) | ($780.17) | $748.71 |
| Savings (cost increase) per beneficiary per month | ($43.34) | $41.60 |
| Percent savings or increase based on mean PMPM ambulatory medical cost | -29% | 28% |
SOURCE: Authors' analysis.
Discussion
We assessed whether one of the largest pilots of EHR adoption in the community setting impacted ambulatory medical costs and visits among Medicaid recipients in the 18 months after adoption. We found evidence that EHRs may impact ambulatory medical costs, driven at least in part by changes in visits, but the direction of the effect was not consistent across communities and the net effect was minimal. This suggests that EHRs, in and of themselves, can facilitate either increases or decreases in cost, and this likely depends on how they are used and the context in which they are used.
To the best of our knowledge, this is the first study to examine the impact of EHRs on healthcare costs specifically in the Medicaid population. This is an important population to study for several reasons. First, Medicaid recipients have particularly complex care needs, so the improved information management that EHRs enable may produce more notable effects than in commercially insured populations. While the magnitude of the impact on ambulatory medical cost was large, approximately 30% of mean PMPM cost, the fact that we found a cost increase of this magnitude in one pilot community and a cost decrease of this magnitude in another points to other factors that determine when EHR adoption leads to cost savings. Second, the Medicaid program has historically faced issues with access to care, which emphasizes the need to understand how the impact on costs and visits that we observed relates to other outcomes. Since it is possible that reduced costs and visits are an indicator of worse care, we hope that future work will examine the impact of EHRs on access and quality alongside costs in the Medicaid population. Finally, Medicaid is one of the government programs investing heavily in EHRs, designing and administering meaningful use incentives for providers who predominantly treat Medicaid patients. The Medicaid program, therefore, not only has a particular interest in ensuring that the program is successful, but also has the ability to shape it going forward. Our results suggest that state Medicaid programs may need to structure meaningful use in a way that specifically encourages providers to use EHRs to save money by reducing inappropriate utilization.
Our differential findings across the three communities mirror the conflicting evidence about the impact of EHRs on healthcare utilization and associated costs (Amarasingham, Plantinga, Diener-West, Gaskin, & Powe, 2009; DesRoches et al., 2010; Himmelstein, Wright, & Woolhandler, 2010; McCullough, Casey, Moscovice, & Prasad, 2010; McCormick et al., 2012). The literature also suggests potential mechanisms that may explain the differential findings across communities. First, the Kaiser Permanente system has reported success in using its EHR to substantially decrease ambulatory visits, shifting many to phone-based encounters (Liang, 2010). Since Kaiser has financial incentives aligned to promote this outcome, communities with a greater proportion of capitated care may be more likely to use EHRs in ways that result in savings. Second, the majority of studies tying EHR adoption to cost savings come from large delivery systems that have focused on implementing EHRs with robust decision support (Garg et al., 2005; Chaudhry et al., 2006). Therefore, communities with a greater proportion of large practices, which are more likely to have both the managerial and technical skills to promote widespread use of decision support, may realize greater savings from EHR adoption. Third, much of the projected cost savings from EHRs derives from health information exchange (HIE), in which systems are connected and data can electronically follow patients between delivery settings (Walker et al., 2005). Communities with more robust HIE may, therefore, realize greater savings compared to those in which providers use EHRs in silos.
While we were not able to evaluate these mechanisms in our study, when combined with our differential results across communities, they suggest that for the meaningful use program to consistently drive cost savings, criteria specifically targeting such savings will likely be needed. Based on the literature, the two most promising domains are clinical decision support and health information exchange. Since the recently released Stage 2 meaningful use criteria continue to be relatively light in these areas (Centers for Medicare & Medicaid Services, 2012), it will likely be important to ask more of providers. The HIT Policy Committee has foreshadowed this with discussions about making the Stage 3 criteria focus more explicitly on cost savings. Perhaps more importantly, meaningful use incentives are likely to have a substantially greater impact if they are coordinated with payment reform efforts, such as Accountable Care Organizations (Buntin, Jain, & Blumenthal, 2010).
There are important limitations that should be considered when interpreting our results. Perhaps most significant is whether control community beneficiaries were well-matched to intervention community beneficiaries. We employed several strategies to ensure this: (1) limiting control communities to those that applied to be pilot sites, (2) matching control and intervention communities on a broad range of characteristics, (3) narrowing providers within control communities to those with characteristics similar to intervention providers, and (4) adjusting for differences in beneficiary demographics. In addition, our difference-in-differences design removes biases introduced by secular trends or underlying, persistent differences between intervention and control groups. However, there could be temporal trends that disproportionately affect utilization in an intervention or control community that we were unable to address.
Our results capture the average effect of EHR adoption in the community and do not account for differences in EHR functionalities, such as what decision support was in place. In addition, the study only evaluates the effect of the pilot for beneficiaries with at least half their care in an intervention community in the 18-month period following the last implementation date, which may not capture the full effect of EHRs. Finally, it is possible that control communities made significant headway on EHR adoption on their own after they were not selected to be part of the pilot. If this occurred, we would be limited in our ability to detect an intervention effect.
In summary, we used claims data from the Massachusetts Medicaid program to assess whether ambulatory electronic health record adoption is associated with a change in ambulatory costs and visits. We examined whether the trajectory of outcomes for beneficiaries who received the majority of their ambulatory care in the three MAeHC pilot communities differed from beneficiaries in matched control communities over a four year period. We found evidence to suggest that the pilot may have impacted ambulatory medical cost, driven at least in part by changes in visits, but the direction of the effect was not consistent across communities. This may be explained by differences in financial incentives and the use of decision support and health information exchange, which available evidence suggests deliver most of the financial benefit from EHRs. If this is the case, more robust meaningful use criteria in these domains, as well as broader efforts to incentivize reductions in healthcare costs, will likely be essential if the EHR adoption resulting from the recent federal initiatives is to produce cost savings in the Medicaid population.
Supplementary Material
Appendix
Exhibit A-1. MAeHC Pilot Provider Self-Reported EHR Usage (2009).
| EHR Usage Measure | Percent reporting use “most or all of the time” | Related Stage 1 Meaningful Use Measure |
|---|---|---|
| Electronic problem list | 65% | Maintain up-to-date problem list of current and active diagnoses |
| Electronic medication lists of what each patient takes | 80% | Maintain active medication list |
| Document allergies in EHR | 94% | Maintain active medication allergy list |
| Transmit prescriptions to pharmacy electronically or via FAX | 76% | Generate and transmit permissible prescriptions electronically |
| Generate medication prescriptions: Computerized (with or without decision support) | 81% | Computer provider order entry (CPOE) for medication orders |
| Generate medication prescriptions: Computerized, with decision support (e.g., drug interaction/allergy alerts) | 60% | Implement drug–drug and drug–allergy interaction checks |
| Laboratory test results | 78% | N/A |
| Radiology test results | 74% | N/A |
SOURCE: Authors' analysis.
Footnotes
Disclosure: Funded by the Massachusetts eHealth Collaborative.
References
- Amarasingham R, Plantinga L, Diener-West M, Gaskin DJ, Powe NR. Clinical Information Technologies and Inpatient Outcomes: A Multiple Hospital Study. Archives of Internal Medicine. 2009;169(2):108–114. doi: 10.1001/archinternmed.2008.520.
- Blumenthal D, Tavenner M. The “Meaningful Use” Regulation for Electronic Health Records. The New England Journal of Medicine. 2010;363:501–504. doi: 10.1056/NEJMp1006114.
- Buntin MB, Burke MF, Hoaglin MC, Blumenthal D. The Benefits Of Health Information Technology: A Review Of The Recent Literature Shows Predominantly Positive Results. Health Affairs. 2011;30(3):464–471. doi: 10.1377/hlthaff.2011.0178.
- Buntin MB, Jain SH, Blumenthal D. Health Information Technology: Laying The Infrastructure For National Health Reform. Health Affairs. 2010;29(6):1214–1219. doi: 10.1377/hlthaff.2010.0503.
- Burt CW, Sisk JE. Which Physicians And Practices Are Using Electronic Medical Records? Health Affairs. 2005;24(5):1334–1343. doi: 10.1377/hlthaff.24.5.1334.
- Centers for Medicare & Medicaid Services. Stage 2 Eligible Professional (EP) Meaningful Use Core and Menu Measures. 2012. Retrieved from http://www.cms.gov/Regulations-and-Guidance/Legislation/EHRIncentivePrograms/Downloads/Stage2_MeaningfulUseSpecSheet_TableContents_EPs.pdf.
- Congressional Budget Office. Estimates of the effect on federal direct spending and revenues of the Health Information Technology for Economic and Clinical Health (HITECH) Act. 2008. Retrieved from http://www.cbo.gov/sites/default/files/cbofiles/ftpdocs/99xx/doc9966/hitechrangelltr.pdf.
- Chaudhry B, Wang J, Wu S, Maglione M, Mojica W, Roth E, Shekelle PG. Systematic Review: Impact of Health Information Technology on Quality, Efficiency, and Costs of Medical Care. Annals of Internal Medicine. 2006;144(10):742–752. doi: 10.7326/0003-4819-144-10-200605160-00125.
- Cheriff AD, Kapur AG, Qiu M, Cole CL. Physician productivity and the ambulatory EHR in a large academic multi-specialty physician group. International Journal of Medical Informatics. 2010;79(7):492–500. doi: 10.1016/j.ijmedinf.2010.04.006.
- DesRoches CM, Campbell EG, Rao SR, Donelan K, Ferris TG, Jha A, Blumenthal D. Electronic Health Records in Ambulatory Care—A National Survey of Physicians. The New England Journal of Medicine. 2008;359(1):50–60. doi: 10.1056/NEJMsa0802005.
- DesRoches CM, Campbell EG, Vogeli C, Zheng J, Rao SR, Shields AE, Jha AK. Electronic Health Records' Limited Successes Suggest More Targeted Uses. Health Affairs. 2010;29(4):639–646. doi: 10.1377/hlthaff.2009.1086.
- Garg AX, Adhikari NK, McDonald H, Rosas-Arellano MP, Devereaux PJ, Beyene J, Haynes RB. Effects of Computerized Clinical Decision Support Systems on Practitioner Performance and Patient Outcomes: A Systematic Review. Journal of the American Medical Association. 2005;293(10):1223–1238. doi: 10.1001/jama.293.10.1223.
- Garrido T, Jamieson L, Zhou Y, Wiesenthal A, Liang L. Effect of electronic health records in ambulatory care: retrospective, serial, cross sectional study. BMJ. 2005;330(7491):581. doi: 10.1136/bmj.330.7491.581.
- Hillestad R, Bigelow J, Bower A, Girosi F, Meili R, Scoville R. Can Electronic Medical Record Systems Transform Health Care? Potential Health Benefits, Savings, And Costs. Health Affairs. 2005;24(5):1103–1117. doi: 10.1377/hlthaff.24.5.1103.
- Himmelstein DU, Wright A, Woolhandler S. Hospital Computing and the Costs and Quality of Care: A National Study. The American Journal of Medicine. 2010;123(1):40–46. doi: 10.1016/j.amjmed.2009.09.004.
- Keehan SP, Sisko AM, Truffer CJ, Poisal JA, Cuckler GA, Madison AJ, Smith SD. National Health Spending Projections Through 2020: Economic Recovery And Reform Drive Faster Spending Growth. Health Affairs. 2011;30(8):1594–1605. doi: 10.1377/hlthaff.2011.0662.
- Liang L, editor. Connected for Health: Using Electronic Health Records to Transform Care Delivery. San Francisco: Jossey-Bass; 2010.
- McCormick D, Bor DH, Woolhandler S, Himmelstein DU. Giving Office-Based Physicians Electronic Access To Patients' Prior Imaging And Lab Results Did Not Deter Ordering Of Tests. Health Affairs. 2012;31(3):488–496. doi: 10.1377/hlthaff.2011.0876.
- McCullough JS, Casey M, Moscovice I, Prasad S. The Effect Of Health Information Technology On Quality In U.S. Hospitals. Health Affairs. 2010;29(4):647–654. doi: 10.1377/hlthaff.2010.0155.
- Menec V, Black C, Roos NP, Bogdanovic B, Reid R. Defining practice populations for primary care: methods and issues. Manitoba, Canada: The Manitoba Centre for Health Policy and Evaluation, University of Manitoba; 2000.
- Mostashari F, Tripathi M, Kendall M. A Tale Of Two Large Community Electronic Health Record Extension Projects. Health Affairs. 2009;28(2):345–356. doi: 10.1377/hlthaff.28.2.345.
- National Committee on Vital and Health Statistics. Hearings: Measuring Meaningful Use. 2009. Retrieved from http://www.ncvhs.hhs.gov/090429p10c.pdf.
- Simon SR, Kaushal R, Cleary PD, Jenter CA, Volk LA, Orav EJ, Bates DW. Physicians and Electronic Health Records: A Statewide Survey. Archives of Internal Medicine. 2007;167(5):507–512. doi: 10.1001/archinte.167.5.507.
- Steinwachs DM, Stuart ME, Scholle S, Starfield B, Fox MH, Weiner JP. A Comparison of Ambulatory Medicaid Claims to Medical Records: A Reliability Assessment. American Journal of Medical Quality. 1998;13(2):63–69. doi: 10.1177/106286069801300203.
- Walker J, Pan E, Johnston D, Adler-Milstein J, Bates DW, Middleton B. The Value of Health Care Information Exchange And Interoperability. Health Affairs. 2005. doi: 10.1377/hlthaff.w5.10. Retrieved from http://content.healthaffairs.org/content/early/2005/01/19/hlthaff.w5.10.full.pdf+html.