Abstract
Background:
Federal incentives for electronic health record (EHR) use typically require quality measure reporting over calendar year or 90-day periods. However, required reporting periods may not align with time frames of real-world quality improvement (QI) efforts. This study described primary care practices’ ability to obtain measures with reporting periods aligning with a large QI initiative.
Methods:
Researchers conducted a substudy of a randomized trial testing practice facilitation strategies for preventive cardiovascular care. Three quality measures (aspirin for ischemic vascular disease; blood pressure control for hypertension; smoking screening/cessation) were collected quarterly over one year. The primary outcome was a binary indicator of whether a practice facilitator obtained all three measures with “rolling 12-month” reporting periods (that is, the year preceding each study quarter).
Results:
The study included 107 practices, 63 (58.9%) of which met the primary outcome of obtaining all measures with rolling 12-month reporting periods. Smaller practices were less likely to meet the primary outcome (p < 0.001). Practices used 11 different EHRs, 3 of which were unable to consistently produce rolling 12-month measures; at 33 practices (30.8%) using these 3 EHRs, facilitators met a secondary outcome of obtaining prior calendar year and rolling 3-month measures. Facilitators reported barriers to data collection such as practices lacking optional EHR features, and EHRs’ inability to produce reporting periods across two calendar years.
Conclusion:
EHR vendors’ compliance with federal reporting requirements is not necessarily sufficient to support real-world QI work. Improvements are needed in the flexibility and usability of EHRs’ quality measurement functions, particularly for smaller practices.
Part of the motivation behind federal incentives for electronic health record (EHR) use is the assumption that collecting and measuring clinical performance from EHR data can lead to improved quality of care.1,2 However, EHR–based quality measurement does not necessarily aid quality improvement (QI) activities. Some practices lack the technical sophistication to effectively use their EHRs,3 while others elect to manually review charts for quality reporting due to a lack of trust in EHR–generated quality metrics.4 In addition, organizations sometimes need to customize EHR reporting features5 or pursue customized software solutions6,7 to obtain reliable and useful performance results for QI.
The time frame (reporting period) over which quality measures are calculated is particularly relevant to individual QI projects with distinct time lines. Federal incentive programs typically require clinicians—and EHR vendors—to generate quality measures over a calendar year or continuous 90-day period,8,9 but these time frames may be insufficient for measures subject to seasonal variation (such as immunizations) or omit patients who receive care infrequently. In addition, except for instances when health care organizations’ QI initiatives are implemented during 90-day periods or full calendar years, the time frames of most QI initiatives will not necessarily align with measurement periods for federal incentive programs. To our knowledge, no prior quantitative analyses have examined the extent to which EHRs can generate quality measures with reporting periods aligning with real-world QI efforts.
We undertook the current study to address this gap in the evidence. Our primary objective was to describe rates at which primary care practices with different EHRs obtained quality reports with reporting periods aligning with the time frame of a large QI initiative. Our secondary objective was to describe barriers to data collection in instances when requested quality reports were not obtained.
METHODS
The present analysis is a substudy of data from the Healthy Hearts in the Heartland (H3) trial, a comparative effectiveness trial of practice facilitation strategies for preventive cardiovascular care that was part of the Agency for Healthcare Research and Quality’s EvidenceNOW initiative.10 In 2016 our study team recruited and randomized four successive waves of primary care practices from Illinois, Indiana, and Wisconsin. Practices were randomized to one of two practice facilitation approaches: (1) point-of-care (POC) strategies focused on improving care delivered during office visits, or (2) POC plus population management strategies extending to patients who do not present for care or activities outside of routine office visits. Both facilitation approaches involved 12 months of practice facilitator support to implement evidence-based QI strategies.
Practice facilitators brought a range of skills and experiences to the current project. In a survey administered to the 16 facilitators employed by the project before QI activities began (at its height, the project eventually employed 17 facilitators), 11 (68.8%) reported prior practice facilitation experience, with 8 (50.0%) reporting at least three years of facilitation experience. Six (37.5%) facilitators reported a background in QI work, 10 (62.5%) reported a background in information technology, and 7 (43.8%) reported a clinical background (responses not mutually exclusive). Before and during the period of active QI work with practices, facilitators held biweekly calls, attended quarterly in-person meetings, and used an e-mail listserv to discuss evidence-based facilitation strategies and to share best practices.
During the study, practice facilitators obtained quality measure reports for each practice, either via the practice’s EHR (the primary approach) or via custom reports from external vendors (practices typically contract with these external vendors for data processing that supports quality reporting submissions and QI work). Facilitators were tasked with obtaining reports for three common quality measures: (1) aspirin therapy for ischemic vascular disease; (2) blood pressure control for hypertension; and (3) smoking screening and cessation. Facilitators were asked to obtain reports at trial baseline and after each intervention quarter.
Preferred Reporting Periods
Facilitators aimed to obtain measures with “rolling 12-month” reporting periods for the year immediately preceding each study quarter, with each measure denominator drawn from groups of patients who met eligibility criteria for the measure and had at least one visit to the practice during the measurement period. As depicted in Figure 1a, for the first wave of randomized practices (which initiated study-related QI activities in February 2016) the initial rolling 12-month reporting period spanned from February 2015 to January 2016. We used rolling 12-month reporting periods for several reasons, including their compatibility with our 12-month intervention period, ability to account for variation in study participation time lines over successive randomization waves with different start dates, avoidance of artifactual differences in measured quality due to differences in which calendar quarter a measure was applied (for example, more patients may make a measure-qualifying visit during cold and flu season than at other times of the year but be less likely to have their preventive cardiology needs addressed during an acute visit), and alignment with the measurement approach of other regional EvidenceNOW collaboratives.11 A study tracking database housed data on practice characteristics, and each practice completed a baseline survey.
Figure 1.

(a) This chart illustrates the preferred rolling 12-month reporting periods for the first wave of participating practices. (b) This chart illustrates the alternate rolling 3-month reporting periods for the first wave of participating practices.
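To make the window arithmetic concrete, the following sketch (a purely illustrative example in Python; the helper name and the use of dateutil are our own choices, not part of the study’s data collection tooling) derives the rolling 12-month reporting periods shown in Figure 1a for the first wave of practices.

```python
from datetime import date
from dateutil.relativedelta import relativedelta

def rolling_12_month_window(quarter_start: date) -> tuple[date, date]:
    """Return the (start, end) of the rolling 12-month reporting period
    ending the day before a study quarter begins (illustrative helper)."""
    end = quarter_start - relativedelta(days=1)       # last day before the quarter
    start = quarter_start - relativedelta(months=12)  # 12 months earlier
    return start, end

# Wave 1 began study-related QI activities in February 2016 (Figure 1a);
# quarters are assumed to advance in 3-month steps, baseline plus four quarters.
wave_start = date(2016, 2, 1)
for q in range(5):
    quarter_start = wave_start + relativedelta(months=3 * q)
    start, end = rolling_12_month_window(quarter_start)
    print(f"Quarter {q}: reporting period {start:%b %Y} through {end:%b %Y}")
    # Quarter 0 prints "Feb 2015 through Jan 2016", matching the text above.
```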
Alternate Reporting Periods and Barriers to Data Collection
Beginning at study baseline, facilitators frequently reported that four commercial EHRs were unable to produce any of the three quality measures with rolling 12-month reporting periods. We validated these reported limitations by consulting with multiple sources, including facilitators with expertise on individual EHRs, colleagues from another regional cooperative in the EvidenceNOW initiative, and EHR vendors. One of the four EHR vendors agreed to create customized reports with rolling 12-month reporting periods; for the other three EHRs, we instructed facilitators to obtain quality measures with two alternate reporting periods: both the prior calendar year and most recent 3-month period (that is, rolling 3-month data, which are visually depicted in Figure 1b).
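A similar sketch, again purely illustrative, derives the two alternate reporting periods depicted in Figure 1b, under the assumption that the rolling 3-month window ends the day before the study quarter begins.

```python
from datetime import date
from dateutil.relativedelta import relativedelta

def alternate_windows(quarter_start: date) -> dict[str, tuple[date, date]]:
    """Alternate reporting periods requested when rolling 12-month measures
    were unavailable: the prior calendar year plus the rolling 3 months."""
    prior_year = quarter_start.year - 1
    return {
        "prior_calendar_year": (date(prior_year, 1, 1), date(prior_year, 12, 31)),
        "rolling_3_month": (
            quarter_start - relativedelta(months=3),
            quarter_start - relativedelta(days=1),
        ),
    }

# For a study quarter beginning May 1, 2016: calendar year 2015,
# plus Feb 1, 2016 through Apr 30, 2016.
print(alternate_windows(date(2016, 5, 1)))
```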
Alternate reporting periods were not requested or accepted from practices using the eight EHRs that we expected to produce rolling 12-month measures. If a facilitator could not obtain measures with the requested reporting period (prior calendar year plus rolling 3 months for three EHRs described above; rolling 12 months for other EHRs), they were asked to complete a structured form describing barriers they encountered during data collection.
Inclusion Criteria
The current study’s practice-level analysis included practices from the trial’s first, second, and third randomization waves. We excluded practices from the trial’s fourth randomization wave, which were in two large health systems that did not use built-in EHR functionalities to generate quality measures. Unlike practices in the first three randomization waves, these two health systems exported diagnosis and encounter data from their EHRs, which were then used in quality measure calculations.12,13 Because these health systems did not even attempt to use EHR quality reporting functionalities for outcome measurement, they were excluded from the current assessment of primary care practices’ ability to generate quality reports with our preferred reporting period.
Included practices were required to use an EHR and report that their EHR was certified to meet federal Meaningful Use requirements. Although 149 practices were recruited for the first three waves of the H3 comparative effectiveness trial, not all of these practices actually engaged with practice facilitators for QI work and related data collection activities, which precluded facilitators from even attempting to obtain quality reports. As such, practices that did not engage with facilitators were excluded. Practices were also excluded if a facilitator reported insufficient access to generate quality reports from the practice’s EHR. Due to our desire to stratify results by EHR, we excluded practices if their EHR was used at fewer than 3 included practices.
Outcomes and Analysis
The primary outcome was a binary measure of whether a facilitator obtained all preferred quality measures for any quarterly data submission. This outcome was met if, for an individual quarterly data submission, all three quality measures of interest were produced for the preferred rolling 12-month reporting period. A secondary binary outcome was calculated for the subgroup of practices using EHRs deemed unable to produce rolling 12-month reporting periods. Among eligible practices, this secondary outcome was an indicator of a facilitator’s ability to obtain all three quality measures with alternate reporting periods—that is, both prior calendar year and rolling 3 months—for any quarterly submission.
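Restated programmatically, the outcome definitions reduce to the following sketch; the period labels and data structure are illustrative assumptions rather than the actual schema of the study tracking database.

```python
# Each quarterly submission: {reporting period label: set of measures obtained}.
MEASURES = {"aspirin_ivd", "bp_control", "smoking_screening_cessation"}

def met_primary_outcome(quarterly_submissions: list[dict[str, set[str]]]) -> bool:
    """Primary outcome: all three measures obtained with the preferred
    rolling 12-month period in at least one quarterly submission."""
    return any(
        MEASURES <= submission.get("rolling_12_month", set())
        for submission in quarterly_submissions
    )

def met_secondary_outcome(quarterly_submissions: list[dict[str, set[str]]]) -> bool:
    """Secondary outcome (only for practices whose EHR could not produce
    rolling 12-month measures): all three measures obtained with BOTH
    alternate periods (prior calendar year and rolling 3 months)
    in at least one quarterly submission."""
    return any(
        MEASURES <= submission.get("prior_calendar_year", set())
        and MEASURES <= submission.get("rolling_3_month", set())
        for submission in quarterly_submissions
    )

# Hypothetical practice that met the secondary but not the primary outcome.
example = [
    {"prior_calendar_year": MEASURES, "rolling_3_month": MEASURES},
    {"rolling_12_month": {"bp_control"}},
]
print(met_primary_outcome(example), met_secondary_outcome(example))  # False True
```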
Descriptive statistics were calculated for practice characteristics. Wilcoxon rank-sum tests tested for potential associations between ordinal practice size and (1) use of one of the eight EHRs that we expected to produce rolling 12-month measures and (2) achievement of the primary outcome.
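A minimal sketch of one such test is shown below, using the Mann-Whitney U implementation in SciPy (equivalent to the Wilcoxon rank-sum test) and simulated practice-size data in place of the study data.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Ordinal practice-size coding: 1 = solo, 2 = 2-5 clinicians,
# 3 = 6-10 clinicians, 4 = 11+ clinicians. The values below are simulated,
# not the study data; only the group sizes (63 vs. 44) mirror the sample.
rng = np.random.default_rng(0)
size_met = rng.choice([1, 2, 3, 4], size=63, p=[0.15, 0.55, 0.20, 0.10])
size_not_met = rng.choice([1, 2, 3, 4], size=44, p=[0.70, 0.25, 0.04, 0.01])

# Two-sided Wilcoxon rank-sum (Mann-Whitney U) test of whether the ordinal
# size distribution differs between outcome groups.
stat, p_value = mannwhitneyu(size_met, size_not_met, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.4f}")
```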
In outcome calculations, practices were stratified by EHR and grouped into three mutually exclusive categories: (1) primary outcome met (which precluded inclusion in secondary outcome calculations), (2) secondary outcome met, or (3) neither outcome met. Because we sought to demonstrate which data experienced facilitators could obtain—rather than make definitive conclusions about EHR capabilities—we elected to de-identify EHRs. Facilitators’ reported barriers were totaled without consideration of EHR.
RESULTS
A total of 107 practices met all inclusion criteria. Slightly more than half of practices in the final sample were in Illinois, and about a third were in Indiana, with the remainder in Wisconsin (Table 1). Most practices had five or fewer clinicians; 39 (36.4%) had only one clinician, and 48 (44.9%) had between two and five clinicians. According to practice report, about half of practices served a patient population in which less than 25% of patients were racial minorities. About 30% of practices reported participation in an accountable care organization.
Table 1.
Baseline Practice Characteristics
| Characteristic | No. (%) |
|---|---|
| Total | 107 |
| State | |
| Indiana | 35 (32.7) |
| Illinois | 56 (52.3) |
| Wisconsin | 16 (15.0) |
| Practice size | |
| Solo practice | 39 (36.4) |
| 2–5 clinicians | 48 (44.9) |
| 6–10 clinicians | 13 (12.1) |
| 11+ clinicians | 7 (6.5) |
| Practice ownership* | |
| Clinician-owned | 56 (60.9) |
| Hospital/health system | 14 (15.2) |
| FQHC/look-alike | 21 (22.8) |
| Rural Health Clinic/Indian Health Service | 1 (1.1) |
| Racial minority patients* | |
| < 25% of practice population | 51 (49.0) |
| 25% to < 50% of practice population | 23 (22.1) |
| ≥ 50% of practice population | 30 (28.8) |
| ACO participation | 32 (29.9) |
*N < 107 due to missing data for some items in the survey completed by practices.
FQHC, federally qualified health center; ACO, accountable care organization.
Included practices used a total of 11 different EHRs, which are presented in a de-identified list in Table 2; these de-identified EHRs represent some of the largest EHR vendors in the United States, and many are also sold internationally. The number of practices using each EHR ranged from 3 to 22. Rows A–H present data on the 8 EHRs expected to produce rolling 12-month quality measures; a total of 59 practices used these 8 EHRs. The 3 EHRs deemed unable to produce rolling 12-month measures were used by 48 practices (rows I–K).
Table 2.
Quality Measurement Reporting Periods at Included Practices, by De-identified EHR
| EHR | Practices Using EHR, No. | Rolling 12-Month, No. (Row %) | Alternate Reporting Periods,* No. (Row %) | Neither Rolling 12-Month Nor Alternate, No. (Row %) |
|---|---|---|---|---|
| A | 4 | 4 (100) | n/a | 0 (0) |
| B | 9 | 9 (100) | n/a | 0 (0) |
| C | 8 | 8 (100) | n/a | 0 (0) |
| D | 10 | 9 (90.0) | n/a | 1 (10) |
| E | 7 | 6 (85.7) | n/a | 1 (14.3) |
| F | 3 | 3 (100) | n/a | 0 (0) |
| G | 8 | 8 (100) | n/a | 0 (0) |
| H† | 10 | 9 (90.0) | n/a | 1 (10) |
| I‡ | 8 | 2 (25.0) | 3 (37.5) | 3 (37.5) |
| J‡ | 22 | 5 (22.7) | 14 (63.6) | 3 (13.6) |
| K§ | 18 | 0 (0) | 16 (88.9) | 2 (11.1) |
| Total | 107 | 63 (58.9%) | 33 (30.8%) | 11 (10.3%) |
*Both prior calendar year and rolling 3-month reporting periods.
†EHR typically unable to produce rolling 12-month measures, but vendor created customized reports with rolling 12-month reporting periods for this study.
‡EHR typically unable to produce rolling 12-month measures, but some practices used custom reporting functions to obtain rolling 12-month measures.
§EHR unable to produce rolling 12-month measures.
EHR, electronic health record; n/a, not applicable.
Practice size was highly predictive of practices’ use of an EHR that we expected to produce rolling 12-month measures (p < 0.001). Among the 48 practices using the 3 EHRs deemed unable to produce rolling 12-month measures, 31 (64.6%) were solo practices, and only 3 (6.3%) had six or more clinicians. In contrast, among the 59 practices using the 8 EHRs expected to produce rolling 12-month measures, only 8 (13.6%) were solo practices, while 17 (28.8%) had six or more clinicians.
Performance Measures with Preferred Reporting Period
Among practices using the 8 EHRs expected to produce rolling 12-month measures, facilitators obtained all quality measures with the preferred rolling 12-month reporting period from 56 of 59 practices (Table 2; rows A–H). At 7 of 48 practices using EHRs not expected to produce rolling 12-month measures, facilitators used custom reporting functions—from either EHRs or external vendors that practices had contracted with for data processing—to obtain rolling 12-month measures (rows I–K). Across all 107 included practices, 63 (58.9%) met the primary outcome of obtaining all measures with rolling 12-month reporting periods.
Practice size was also highly predictive of practices’ actual ability to produce rolling 12-month measures (p < 0.001). Only 9 of the 63 (14.3%) practices that produced rolling 12-month measures were solo practices, while 30 of the 44 (68.2%) practices that were unable to produce rolling 12-month measures were solo practices.
Performance Measures with Alternate Reporting Periods
Among practices using EHRs not expected to produce rolling 12-month measures, facilitators obtained all measures with alternate reporting periods from 33 practices (30.8% of the full study sample; Table 2—rows I–K). Facilitators were unable to obtain all measures with either preferred or alternate reporting periods—that is, neither the primary nor secondary outcome was met—at 11 of 107 included practices (10.3%).
Reported Barriers
Facilitators reported one or more barriers to data collection for 6 of 11 practices where neither the primary nor secondary outcome was met (Table 3). Reported barriers included the practice’s EHR version or license lacking optional features (for example, additional features to compile data and run quality measure reports, or to customize date ranges when running those reports), the EHR vendor not supporting requested quality reporting functions, the EHR vendor declining to provide technical support services to a practice facilitator, and the EHR being unable to produce reporting periods spanning two calendar years (for example, December 2016 to February 2017). Reported barriers occurred across multiple EHRs, without any single EHR being a concentrated source of reported barriers (detailed results by de-identified EHR available from authors upon request).
Table 3.
Practice Facilitators’ Reported Barriers to Obtaining Quality Measures with Requested Reporting Periods
| Barrier | No. (%) |
|---|---|
| Practice’s EHR version/license lacks optional features | 2 (18.2) |
| EHR vendor does not support requested quality reporting | 5 (45.5) |
| EHR vendor tech support unavailable to practice facilitator | 1 (9.1) |
| EHR cannot produce reporting periods across two different calendar years | 1 (9.1) |
| Total | 11 |
EHR, electronic health record.
DISCUSSION
In this assessment of quality reporting in a large primary care QI initiative, practice facilitators obtained measures with the preferred rolling 12-month reporting period from only 59% of included practices. Most practices where the primary outcome was not met used an EHR that was deemed unable to produce rolling 12-month measures. Notably, a disproportionate number of small practices used an EHR that was deemed unable to produce rolling 12-month measures. In turn, smaller practices were also less likely to achieve the primary outcome, with solo practices constituting more than two thirds of the practices that were unable to produce rolling 12-month measures.
When the less stringent secondary outcome was also counted, facilitators obtained measures with preferred or alternate reporting periods from about 90% of practices. Where facilitators were unable to obtain measures with either preferred or alternate reporting periods, they reported several barriers to data collection, such as practices lacking optional EHR features to compile quality measure reports or customize reports’ date ranges, and EHRs’ inability to produce a reporting period spanning two calendar years.
Although all included EHRs were certified for use in the federal Meaningful Use program,8,9 4 of 11 included EHRs could not typically produce quality measure reports with the rolling 12-month time frame of this QI initiative. This disconnect between EHRs’ measurement capabilities and the QI activities of our practice facilitation initiative is dispiriting, but nevertheless unsurprising in the context of the current evidence base. For example, in a prior federally funded practice facilitation initiative, stakeholders determined that reliable and useful measures of diabetes performance were not available in practices’ EHRs, which led them to develop a customized software system to promote QI work.6 In addition, the quantitative results observed here reflect published qualitative findings from the national EvidenceNOW Initiative,11 in which members of seven regional EvidenceNOW cooperatives (including the H3 trial conducted by our study team) reported challenges obtaining clinical quality reports from EHRs with time lines that aligned with QI project time lines.14
Including one EHR vendor that created customized reports with rolling 12-month reporting periods only upon request, the majority of practices under study here (58/107; 54.2%) used an EHR without built-in features to produce quality measures with rolling 12-month reporting periods (Table 2; rows H–K). Although these EHRs’ lack of measurement flexibility does not necessarily preclude successful implementation of QI projects, it may serve as an additional barrier to the already challenging endeavor of real-world QI work. Models for improving primary care delivery rely on tasks such as measurement embedded within QI work15,16 and continuous measurement to support process improvements.17 If EHRs lack flexibility in their quality measurement capabilities going forward, many primary care providers will likely continue to rely on—and need to pay for—external vendors to support data processing for quality measurement activities7 while maintaining negative opinions of EHRs.18,19 Some organizations may elect not to participate in QI activities, limiting progress toward practice transformation and national efforts to transition from volume-based to value-based care.
Our findings point to multiple opportunities to evaluate the extent to which EHR systems have been developed and deployed to support QI activity. Future studies should assess EHRs’ ability to report practice performance over short periods, such as weeks or months, which can allow practices to rapidly identify changes made during QI initiatives. There is also a need nationally for studies of practices’ reliance on non-EHR approaches to quality measurement—such as external data vendors, or manual chart review—and the extent to which these approaches support QI work or are used for other purposes such as payers’ quality reporting requirements. Perhaps most importantly, more research is needed on quality reporting tools available to small practices. As seen in this study, small and solo practices were disproportionately likely not to meet our primary outcome, apparently as a result of small practices’ increased likelihood of using EHRs with limited capabilities to define quality measures’ date ranges.
This study has several limitations. First, we did not collect data on which reporting tools facilitators used (EHR-based reports vs. external vendors), limiting our ability to make inference on individual EHRs’ capabilities. In combination with the fact that one EHR vendor (serving 10 included practices) that is typically unable to generate rolling 12-month measures created customized rolling 12-month reports for this study, our results therefore likely overestimate the ability of included practices’ EHRs to meet the study’s primary outcome. Second, the reporting capabilities we observed might differ from national trends, as the usage rates of individual EHRs in this study may not be representative of national EHR purchasing. Third, we cannot determine the extent to which within-EHR variability was attributable to EHR capabilities (for example, lack of optional features at individual practices), or other factors such as facilitator inexperience or low EHR usability. Fourth, although quality measures with rolling 12-month reporting periods were readily applicable to our one-year QI initiative, these measures were nevertheless subject to limitations such as long lag time (denominators including patients seen up to 12 months ago) that could limit practices’ ability to rapidly detect QI–related changes in quality measures. Fifth, although collection of repeated cross-sectional quality measures facilitated evaluation of practice performance over time, this measurement approach did not necessarily allow practices to identify (or exclude) patients who no longer obtained care at the practice, thereby limiting practices’ ability to evaluate longitudinal outcomes for their current patient population.
CONCLUSION
Practice facilitators were frequently unable to obtain quality measures that aligned with the QI activities and evaluation plan of a large QI initiative. Also, smaller practices were more likely to use EHRs with limited measurement capabilities. Our results indicate that EHR vendors’ compliance with federal reporting requirements is not necessarily sufficient to support real-world QI work. Going forward, there is a need for improvements in the flexibility and usability of EHRs’ quality measurement functions. These improvements can reduce barriers to conducting QI work and in turn lead to improved quality of care and patient outcomes.
Acknowledgments
The authors thank members of the study team who contributed to this manuscript. Dawid Lipiszko entered practice data into study databases, and Isabel Chung prepared data sets for analysis. We thank all practices and practice staff who participated in the Healthy Hearts in the Heartland (H3) study, as well as H3 practice facilitators who implemented quality improvement strategies with practices and collected quality measure reports. We also thank F. Daniel Duffy, MD; David C. Kendrick, MD MPH; and other members of the Healthy Hearts for Oklahoma (H2O) study team who helped validate findings on electronic health record quality measure reporting time frames.
Funding
This project was supported by the Agency for Healthcare Research and Quality (#R18 HS023921).
Footnotes
Conflicts of Interest
All authors report no conflicts of interest.
Contributor Information
David T. Liss, Division of General Internal Medicine and Geriatrics, Northwestern University Feinberg School of Medicine (NUFSM), Chicago.
Yaw A. Peprah, Division of General Internal Medicine and Geriatrics, NUFSM.
Tiffany Brown, Department of Preventive Medicine, NUFSM.
Jody D. Ciolino, NUFSM.
Kathryn Jackson, Center for Health Information Partnerships (CHiP), NUFSM.
Abel N. Kho, CHiP; NUFSM.
Linda Murakami, Quality Improvement, American Medical Association, Chicago.
Theresa L. Walunas, CHiP; NUFSM.
Stephen D. Persell, NUFSM.
References
- 1. Centers for Medicare & Medicaid Services. eCQMs 101: Introduction to eCQMs for Use in CMS Programs. Webinar. Sep 18, 2014.
- 2. Office of the National Coordinator for Health Information Technology. Clinical Quality and Safety. Jan 23, 2019. https://www.healthit.gov/topic/clinical-quality-and-safety.
- 3. Goetz Goldberg D, et al. EHRs in primary care practices: benefits, challenges, and successful strategies. Am J Manag Care. 2012 Feb 1;18:e48–e54.
- 4. Kanger C, et al. Evaluating the reliability of EHR-generated clinical outcomes reports: a case study. EGEMS (Wash DC). 2014 Oct 23;2:1102.
- 5. Brokel JM, Harrison MI. Redesigning care processes using an electronic health record: a system’s experience. Jt Comm J Qual Patient Saf. 2009;35:82–92.
- 6. Tennison J, et al. The Utah Beacon experience: integrating quality improvement, health information technology, and practice facilitation to improve diabetes outcomes in small health care facilities. EGEMS (Wash DC). 2014 Aug 20;2:1100.
- 7. Wang JJ, et al. Factors related to clinical quality improvement for small practices using an EHR. Health Serv Res. 2014;49:1729–1746.
- 8. Centers for Medicare & Medicaid Services. EHR Incentive Programs: 2015 Through 2017 (Modified Stage 2): Overview. 2014. Accessed Oct 16, 2019. https://www.cms.gov/Regulations-and-Guidance/Legislation/EHRIncentivePrograms/Downloads/2015_EHR2015_2017.pdf.
- 9. Centers for Medicare & Medicaid Services. 2017 Modified Stage 2 Program Requirements for Providers Attesting to Their State’s Medicaid EHR Incentive Program. (Updated: Apr 25, 2018.) Accessed Oct 16, 2019. https://www.cms.gov/Regulations-and-Guidance/Legislation/EHRIncentivePrograms/Stage2MedicaidModified_Require.html.
- 10. Ciolino JD, et al. Design of Healthy Hearts in the Heartland (H3): a practice-randomized, comparative effectiveness study. Contemp Clin Trials. 2018;71:47–54.
- 11. Cohen DJ, et al. A national evaluation of a dissemination and implementation initiative to enhance primary care practice capacity and improve cardiovascular disease care: the ESCALATES study protocol. Implement Sci. 2016 Jun 29;11:86.
- 12. OSEHRA. popHealth—An Open-Source Quality Measure Reference Implementation. Jun 2019. Accessed Oct 16, 2019. https://www.osehra.org/pophealth.
- 13. Cottington S. PopHealth primer. ONC funds open-source software to streamline clinical quality measures reporting for meaningful use program. J AHIMA. 2011;82:48–50.
- 14. Cohen DJ, et al. Primary care practices’ abilities and challenges in using electronic health record data for quality improvement. Health Aff (Millwood). 2018;37:635–643.
- 15. Wagner EH, et al. The changes involved in patient-centered medical home transformation. Prim Care. 2012;39:241–259.
- 16. Bodenheimer T, et al. The 10 building blocks of high-performing primary care. Ann Fam Med. 2014;12:166–171.
- 17. Institute for Healthcare Improvement (IHI). Going Lean in Health Care. Cambridge, MA: IHI, 2005. IHI Innovation Series white paper.
- 18. American Academy of Family Physicians. Moving from Meaningless to Meaningful Use. In the Trenches blog entry. Martin S, editor. Jul 7, 2015. Accessed Oct 16, 2019. https://www.aafp.org/news/blogs/inthetrenches/entry/moving_from_meaningless_to_meaningful.html.
- 19. RAND Corp. Factors Affecting Physician Professional Satisfaction and Their Implications for Patient Care, Health Systems, and Health Policy. Friedberg MW, et al. 2013. Accessed Oct 16, 2019. https://www.rand.org/content/dam/rand/pubs/research_reports/RR400/RR439/RAND_RR439.pdf.
