Abstract
Background:
Reducing hemoglobin A1c (HbA1c) is essential for patients with poorly controlled diabetes. However, delays in HbA1c testing are common, and incomplete electronic health record (EHR) data hinder identification of patients who are overdue. We sought to quantify how often an EHR report correctly identifies patients with HbA1c testing delays and to describe potential contributing factors.
Methods:
Using an EHR report, we identified adult patients who had an HbA1c > 9.0% between October 2017 and March 2018 and a suspected delay (e.g., another HbA1c had not resulted within 6 months). We performed a retrospective chart review of 200 randomly selected records to confirm delays in testing. Secondary measures were collected from 93 charts to evaluate associated factors.
Results:
We identified 685 patients with suspected delays. On chart review (N = 200), 82.0% were confirmed. In 9.0% of the sample, a timely repeat result existed but was not in a discrete field within the EHR; another 8.5% of patients were never expected to return to the health system. Among a subset of confirmed delays, patients often received lifestyle counseling, but less than half had documented discussions about repeat glycemic testing. Although 74.2% had a timely follow-up appointment scheduled, most of these patients (85.5%) missed at least one appointment.
Conclusion:
Most suspected delays in HbA1c testing were confirmed; however, a substantial minority were misclassified due to missing data or follow-up care outside the health system. Current solutions to improve data quality for HbA1c are labor intensive and highlight the need for better integration of health care data. Missed appointments were commonly noted among patients with delays in care and are a potential target for improvement.
Keywords: diabetes, hemoglobin A1c, delays in care, electronic health records, quality metrics, value-based payments, quality improvement, data quality, data integrity, incomplete records
In the United States, about 1 in 10 people have diabetes, and prevalence of the disease continues to increase annually.1,2 Patients with diabetes are at 60% higher risk of early death, incur twice the medical costs, and are at higher risk for morbidity, including blindness, vascular disease, and kidney failure.1 Reducing hemoglobin A1c (HbA1c) levels is critical for mitigating the risks of these complications. As such, patients with diabetes require frequent HbA1c monitoring, with American Diabetes Association (ADA) guidelines recommending testing every 3 to 6 months.3 Furthermore, health systems are held accountable for the quality of diabetes care through metrics that affect reimbursement, including a metric based on HbA1c (the percentage of patients aged 18 to 75 years with poorly controlled diabetes, defined by an HbA1c > 9.0% at year-end) that must be reported to the Centers for Medicare & Medicaid Services (CMS).4
Studies show that timely assessment of chronic diseases is not uniformly performed due to a variety of patient, provider, and system factors in health care, including missed appointments, limited adherence with recommended testing, physician workload, and onerous test result notification systems.5,6 To assist providers, electronic health record (EHR) reports can be used to extract data and populate clinical decision support (CDS) tools that inform providers when patients are due for services, such as HbA1c testing. However, incomplete electronic data, data entry errors, and lack of sophistication in electronic tools can limit the accuracy of EHR reports used to assess and implement quality improvement processes in primary care.7–11 Therefore, EHR reports assessing the frequency and timeliness of diabetes management based on HbA1c results may not accurately identify patients who actually have testing delays because of missing data and limited data integrity.
Misclassification of patients as having delays in care could lead to diversion of attention and resources away from the patients who truly have the highest need, and this can negatively impact reporting of quality metrics. Furthermore, understanding the factors that may contribute to a delay in HbA1c testing is critical for the design of targeted and cost-effective interventions to improve guideline concordance. In this study, we sought to first evaluate whether an EHR-generated report of HbA1c results accurately identifies patients with poorly controlled diabetes who are overdue for HbA1c testing and then to describe factors that may be contributing to confirmed delays in testing.
METHODS
Inclusion Criteria
Our population was derived from patients seen by primary care providers at ambulatory clinics using the Epic EHR (Epic Systems Corporation, Verona, WI) within a single health system in the Baltimore metropolitan area of Maryland. Eligible patients were adults (≥18 years old) who had an HbA1c > 9.0%, ordered by an eligible provider, that resulted between October 1, 2017 and March 30, 2018. Patient data were accessed by data analysts using Epic Clarity. HbA1c data included all values available in structured fields within Epic during the eligibility timeframe and for six months thereafter. From these data, the analysts generated an EHR report of patients with suspected delays in HbA1c testing. Patients were included in the report if they had a repeat HbA1c that resulted more than six months after the initial abnormal value or had no identified follow-up test. We cleaned, reviewed, and evaluated this report using Stata (StataCorp LLC, College Station, Texas).
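The report's inclusion logic can be summarized compactly. Below is a minimal sketch in Python with pandas, not the actual Clarity query; the file name and column names (patient_id, result_date, hba1c) are hypothetical stand-ins for the structured Epic extract.

```python
import pandas as pd

# Hypothetical flat extract of structured HbA1c results.
results = pd.read_csv("hba1c_results.csv", parse_dates=["result_date"])

WINDOW_START = pd.Timestamp("2017-10-01")
WINDOW_END = pd.Timestamp("2018-03-30")
SIX_MONTHS = pd.Timedelta(days=183)  # approximate six-month follow-up window

def has_suspected_delay(patient: pd.DataFrame) -> bool:
    """Flag a patient if an HbA1c > 9.0% resulted in the eligibility
    window and no repeat HbA1c resulted within six months of it."""
    patient = patient.sort_values("result_date")
    index_tests = patient[
        (patient["hba1c"] > 9.0)
        & patient["result_date"].between(WINDOW_START, WINDOW_END)
    ]
    if index_tests.empty:
        return False  # no qualifying abnormal result; not in the report
    index_date = index_tests["result_date"].iloc[0]
    repeats = patient.loc[patient["result_date"] > index_date, "result_date"]
    # Suspected delay: no repeat resulted, or the first repeat came > 6 months later.
    return repeats.empty or (repeats.iloc[0] - index_date) > SIX_MONTHS

flags = results.groupby("patient_id").apply(has_suspected_delay)
suspected_ids = flags[flags].index  # patients flagged for chart review
```

Note that a query of this kind sees only structured fields; results that exist solely in scanned PDFs or note text are invisible to it, which is the misclassification mechanism this study examines.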
Primary Outcome
Our primary outcome was the proportion of patients who were correctly identified as having a delay in care and included in the EHR report. After chart review, patients were considered “misclassified” if they either (1) had a repeat test result documented within the EHR (e.g., in unstructured data) within 6 months of the abnormal HbA1c test, or (2) were not expected to follow up within the health system following the abnormal test.
Secondary Measures
We evaluated three categories of contributors to confirmed delays in care: (1) communication of the abnormal HbA1c result, (2) management recommendations, and (3) follow-up visits. We defined communication as documented contact with the patient regarding the abnormal result. We also noted whether this occurred within 30 days and the method (e.g., in person, electronic message, telephone call, or letter) used to communicate the result. We extracted whether patients had active access to the patient portal, Epic MyChart, which allows patients to message with their providers and review their results. We defined management recommendations as the actions the provider took to treat the abnormal result or to discuss additional diagnostic testing. Treatment included starting a new medication, adjusting the dose of medications, reinforcing adherence to established medications or lifestyle modifications, or referring the patient to schedule evaluation with another provider (i.e., endocrinologist, diabetic educator, pharmacist for diabetes medication management, or nutritionist). Diagnostic testing included documenting or ordering a repeat HbA1c test or initiating or increasing self-monitoring of blood glucose levels at home. Finally, we evaluated follow-up by whether patients had an appointment scheduled during the six months after the abnormal test result, and if they had any missed appointments in this timeframe, including no shows or cancellations without rescheduling.
Sample Selection
For the primary outcome, we assumed the misclassified fraction would range from 0.05 to 0.20 and determined that a sample size of 200 patients would provide adequate precision for our point estimate (at N = 200, the half-width of a 95% confidence interval is at most approximately ±5.5 percentage points across this range). We selected a random sample of 200 patients who had a suspected delay in care in the report. This sample was stratified by test type (phlebotomy vs. point-of-care testing [POCT]) to match the distribution in the overall dataset. Among the first 100 patients in this sample, we performed additional medical record review for secondary measures if the patient had a confirmed delay in care (i.e., if the patient was not misclassified by the report). Sample selection is summarized in Figure 1.
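As an illustration, the stratified draw can be reproduced with proportional allocation. This is a sketch only, assuming the report is loaded as a pandas DataFrame named report with a test_type column; the file name, column name, and seed are illustrative.

```python
import pandas as pd

# Hypothetical load of the EHR report (one row per flagged patient).
report = pd.read_csv("suspected_delays.csv")

TARGET_N = 200
SEED = 2018  # arbitrary seed, for reproducibility

def draw_stratum(stratum: pd.DataFrame) -> pd.DataFrame:
    # Allocate review slots in proportion to the stratum's share of the
    # 685 flagged patients, so the sample mirrors the POCT/phlebotomy mix.
    # Rounding can leave the total off by one; adjust as needed.
    n = round(TARGET_N * len(stratum) / len(report))
    return stratum.sample(n=n, random_state=SEED)

sample = report.groupby("test_type", group_keys=False).apply(draw_stratum)
secondary_subset = sample.head(100)  # first 100 charts also reviewed for secondary measures
```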
Figure 1.
This flow chart shows the sample selection for patients with HbA1c > 9.0% with suspected delays in testing for diabetes control. HbA1c, hemoglobin A1c; POCT, point-of-care testing.
Retrospective Chart Review
We conducted a retrospective chart review (J.L.S., D.D.) to identify the variables for the primary and secondary measures, following a standardized electronic data abstraction protocol. First, we reviewed the results section in the EHR to identify the next available HbA1c value after the abnormal result and confirmed that it either did not exist or occurred more than six months later. Next, we reviewed all visit notes occurring within the six months after the elevated HbA1c for any documentation of repeat test results. Finally, we reviewed any relevant documents uploaded to the “media” tab of the patient’s EHR, where documents such as outside records are saved as PDF files. Through this review, we recorded the date and value of the earliest HbA1c available within the local EHR after the abnormal result.
Additionally, we identified any documentation (e.g., visit notes, telephone notes, patient messages) indicating that the patient was not expected to return to Johns Hopkins for care within the six-month period following the abnormal test. This included the patient moving to another state, leaving the practice, having a primary care provider (PCP) outside the health system, or being seen for a single visit with no indication that further follow-up was expected. Single visits typically occur in urgent care; the After Care Clinic, which provides follow-up after hospital discharge; or the Executive Health Program, which serves patients visiting Baltimore from out of town or abroad.
We identified communication of the abnormal result to the patient by reviewing the following: comments embedded in lab results, letters, patient portal messages, and relevant encounter documentation, including visit notes and after-visit instructions, telephone notes, and order-only encounters. We recorded up to three methods of communication, including date of outreach, and extracted all management recommendations (including treatment and diagnostic testing) provided during communication attempts.
We double-reviewed 30% of the sample (J.L.S., D.D.), having two separate extractors review the same charts, to determine interrater reliability (IRR) for classification of the primary outcome. Discrepancies between raters were adjudicated by the principal investigator (S.I.P.).
Statistical Analysis
All analyses were performed using Stata, with statistical significance set at an alpha of 0.05 for two-sided t-tests. IRR was calculated as crude percent agreement and Cohen’s kappa. This study was approved by the Johns Hopkins Institutional Review Board, and informed consent was waived.
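For reference, the IRR calculation corresponds to the following Python sketch (the analyses themselves were run in Stata); the reviewer arrays are illustrative stand-ins for the two extractors' chart classifications.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Illustrative classifications for the double-reviewed charts:
# 1 = confirmed delay, 0 = misclassified by the report.
rater_a = np.array([1, 1, 1, 0, 1, 1, 0, 1, 1, 1])
rater_b = np.array([1, 1, 1, 0, 0, 1, 0, 1, 1, 1])

percent_agreement = np.mean(rater_a == rater_b)  # crude percent agreement
kappa = cohen_kappa_score(rater_a, rater_b)      # chance-corrected agreement
print(f"agreement = {percent_agreement:.1%}, kappa = {kappa:.2f}")
```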
RESULTS
We identified 1923 patients with 2955 HbA1c results > 9.0% ordered by 173 PCPs during the study period. Among them, 685 patients (35.6%) had a suspected delay in testing based on the EHR report (Table 1). On average, patients with a suspected delay were 55 years old and had an HbA1c of 10.7%, with 60.6% of tests completed as POCT. The randomly selected study population appeared representative of the overall population.
Table 1.
Characteristics of Patients with HbA1c > 9% Without Repeat Testing Within 6 Months (i.e., Suspected to Have a Delay in Care)*
| | All patients with suspected delay in care (N = 685) | Study population† (N = 200) |
|---|---|---|
| Patient characteristics | | |
| Age [mean (SD)] | 54.4 (13.1) | 54.7 (12.4) |
| Sex | | |
| Male | 355 (51.8) | 101 (50.5) |
| Female | 330 (48.2) | 99 (49.5) |
| Race | | |
| White or Caucasian | 286 (41.8) | 88 (44.0) |
| Black or African American | 310 (45.3) | 81 (40.5) |
| Other | 89 (13.0) | 31 (15.5) |
| Test characteristics | | |
| HbA1c % [mean (SD)]‡ | 10.7 (1.4) | 10.7 (1.4) |
| Test type§ | | |
| Point of care | 415 (60.6) | 121 (60.5) |
| Blood test | 270 (39.4) | 79 (39.5) |
| Unique ordering providers | 156 | 101 |

*Patients identified by EHR report in an ambulatory care setting. Values are counts (%) unless otherwise indicated.
†Study population was randomly generated from all patients who had a suspected delay in care.
‡Initial HbA1c result that was > 9%, triggering inclusion in the report.
§Sample was stratified by test type before randomization to ensure representative distribution of test type.
Primary Outcome
On chart review (N = 200), 82.0% of patients (N = 164; 95% CI, 76.0% to 87.1%) were correctly identified by the EHR report as having a delay. Among the misclassified patients (N = 36, 18.0%), 50.0% (N = 18) had a repeat test within 6 months, 47.2% (N = 17) were not expected to follow up, and one patient (2.8%) was misclassified because of a data entry error (a hemoglobin value was erroneously entered as the HbA1c) that was not otherwise captured (Table 2). Timely results that were available in the EHR were primarily located in unstructured data from outside records uploaded as PDFs. IRR for the primary outcome was 91.7% (κ = 0.71).
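The reported interval is consistent with an exact (Clopper-Pearson) binomial confidence interval for 164 of 200 confirmed delays, as in the sketch below; the interval method shown is an assumption, since the specific routine used is not detailed here.

```python
from scipy.stats import binomtest

# Exact binomial CI for 164/200 confirmed delays; the method is assumed.
ci = binomtest(k=164, n=200).proportion_ci(confidence_level=0.95, method="exact")
print(f"{ci.low:.3f} to {ci.high:.3f}")  # ~0.760 to 0.871, matching 76.0% to 87.1%
```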
Table 2.
Accuracy of an EHR-generated Report to Identify Patients with Uncontrolled Diabetes (HbA1c >9%) Who Had Delays in Repeat Testing, and Cause for Misclassifications Identified on Chart Review*
| | Study population (N = 200) |
|---|---|
| Confirmed delay in care | 82 (75–87) |
| Misclassifications | 18 (13–25) |
| Timely result available in EHR | 9 (5–14) |
| Data entry error | 1 (0–3) |
| Follow up not expected | 8.5 (5–13) |
| Single visit† | 3 |
| Moved out of state‡ | 2.5 |
| Left practice‡ | 2 |
| PCP outside of JHHS | 1 |

*Results are listed as % of the study population with (95% CI) where appropriate.
†Patient was seen in urgent care, the After Care Clinic (following hospital admission), or Executive Health (utilized by patients visiting Baltimore from out of town or abroad). These patients were not expected to continue follow-up for primary care.
‡Within 6 months of elevated HbA1c test.
EHR, electronic health record; PCP, primary care physician; JHHS, Johns Hopkins Health System.
Two additional data errors were identified during the analysis. The first was noted during data cleaning; a patient had an HbA1c result that was not physiologically possible, and the record was removed before randomization. The second instance was noted during chart review; the date of the patient’s repeat HbA1c was entered incorrectly, making it appear that their repeat test was delayed when in fact it was done within 6 months. This record is included among the misclassified patients reported for the primary outcome.
Secondary Measures
From the first 100 randomly sampled charts, 93 patients had a confirmed delay in care (Table 3). In 95.7% of these cases (N = 89), communication of the abnormal HbA1c result was attempted within the first 30 days after the result. Results were primarily communicated during in-person encounters (N = 66, 72.5%), driven by POCT. Letters were the second most common modality (N = 13, 14.3%). Sixty percent of all patients in the sample (N = 60) had active patient portal accounts with Epic MyChart. At the time of result communication, providers made management recommendations for 83.9% of patients (N = 78), most commonly lifestyle modification (N = 47, 60.3%), followed by initiation of a new medication (N = 37, 47.4%). Reinforcement of medication adherence was infrequently documented (N = 15, 19.2%). Finally, 29.5% of the providers who made recommendations placed at least one new referral or suggested return to an established provider for follow-up (N = 23). These referrals included endocrinology (N = 11, 47.8%), diabetic educators (N = 1, 4.3%), pharmacists (N = 9, 39.1%), and nutritionists (N = 3, 13.0%).
Table 3.
Descriptive Review of Potential Factors Contributing to Confirmed Delays in HbA1c Testing for Patients with Uncontrolled Diabetes (N = 93)
| | N (%) |
|---|---|
| Communication of result (HbA1c > 9%)* | |
| Any communication attempted | 91 (98) |
| Attempted within 30 days | 89 (96) |
| Method of communication (N = 91)* | |
| In-person | 66 (73) |
| Electronic message | 7 (8) |
| Telephone call | 5 (5) |
| Letter | 13 (14) |
| Follow-up* | |
| Appointment scheduled | 69 (74) |
| Missed appointment (N = 69)† | 59 (86) |
| No show‡ | 30 (44) |
| Patient cancellation‡ | 37 (54) |
| Recommendations* | |
| Any management recommendations | 78 (84) |
| Treatment recommendations (N = 78) | |
| New medications | 37 (47) |
| Medication dose change | 30 (38) |
| Medication adherence | 15 (19) |
| Lifestyle modification | 47 (60) |
| Referral to specialist | 23 (29) |
| Any recommendations for diagnostic testing | 45 (48) |
| Diagnostic testing recommendations (N = 45) | |
| Repeat HbA1c | 9 (20) |
| Self-monitoring | 42 (93) |

*Within 6 months of test result.
†At least one instance without successful rescheduling within 6 months of test result.
‡A single patient could have both no shows and cancellations.
Providers recommended further diagnostic testing to 45 patients (48.4%), primarily self-monitoring of blood glucose (N = 42); a plan for repeat HbA1c testing, however, was infrequently documented or ordered (N = 9). During the 6-month period after the abnormal HbA1c, 74.2% (N = 69) of patients had a follow-up appointment scheduled. However, the majority missed at least one scheduled appointment (N = 59, 85.5%).
DISCUSSION
Major Findings
Among patients with poorly controlled diabetes and a suspected delay in HbA1c testing exceeding 6 months, 82.0% of delays were confirmed on medical record review. Of the remainder, 9.0% of patients had timely test results completed outside our hospital system that were stored in media files or documented in notes rather than in structured data, and 8.5% were misclassified because they were never expected to follow up within our health system but were still captured in the report. Among a subset of patients who had a confirmed delay in care, recommendations for follow-up testing were documented in about half of all cases; many had a follow-up appointment scheduled (74.2%), but the majority of these patients (85.5%) missed their appointments or cancelled without rescheduling.
Our study exemplifies the difficulty of extracting accurate information from the EHR about delays in diabetes care. Our chart review suggests opportunities to reduce classification error. For example, we could redefine the locations captured by the report (e.g., eliminate urgent care settings). While this would decrease misclassification in our sample by approximately 3%, many providers work in multiple settings or see patients in their regular context for acute visits without expectation of follow-up. Alternatively, we could limit the report to patients who are active on a PCP panel to exclude patients who have moved or receive continuity care outside our health care system. However, PCP attribution is not uniformly updated as patients navigate into or out of clinics; without manual updates, information about which patients have established care or left the practice may remain in unstructured data only.

Our results also demonstrate how EHR reports can be prone to classification errors due to missingness in structured fields. Since diabetes care is inherently collaborative, patients often see providers at outside locations or institutions. Even when these locations use the same EHR and data may be shared between systems (e.g., via Care Everywhere for regional Epic users12), queries from the local EHR will not include these results. In the absence of an integrated electronic health information exchange (HIE), these data, if obtained, must be transferred manually into structured fields within the local EHR to be captured by a report and to be included when evaluating quality metrics. This task requires a standardized process, supported by staff or data entry personnel, and is subject to error. In our sample, failure to transfer data from outside records into structured fields drove misclassification.
Data integrity is a critical concern for health information technology.11 Though it has been shown that using an EHR generally improves quality of care delivered,13 data errors and incomplete records can increase the risk of patient harm, undermine providers’ trust in the EHR, and potentially lead to increased health care costs.11,14,15 Our findings are consistent with several studies commenting on the difficulty of capturing and measuring metrics using EHR data for internal and external quality reporting, both generally and for diabetes care specifically.7,16–22 Our study adds to this literature by quantifying the impact of incomplete data for HbA1c testing on evaluation of delays in care for patients with poorly controlled diabetes. Systematic misclassification, as demonstrated in our study, can impact the usefulness of EHR data to inform quality improvement efforts for this vulnerable population and our ability to equitably allocate resources.
Misclassification will also affect merit-based reimbursement for health care services, where payments are based on outcomes for specific quality metrics designed to incentivize high-value care.23 For example, hospitals and practices receive higher payments if patients with diabetes achieve certain criteria for the HbA1c metric.4 By 2016, CMS had tied 30% of all payments, or more than $117 billion, to quality of care through value-based payments.22,24 However, studies have shown that many performance metrics do not meet standards of measurement validity,25 while individual practices spend thousands of dollars and hundreds of hours a year calculating and reporting quality outcomes.26
Unfortunately, incomplete EHR data, such as that noted here, can lead to inaccurate reporting. A study comparing use of data from a single EHR to the corresponding HIE noted a 15% difference in calculated metrics.19 For the HbA1c poor-control metric in particular, measured adherence to testing increased by nearly 11% when the two data sources were combined.19 This further demonstrates the importance of using several data sources (e.g., the local EHR in combination with an HIE) for data completeness. However, across health organizations, use of various EHRs with differing methods for recording data has led to variability in reporting performance measures for diabetes care and limited ability to link systems.27 Use of standardized naming and coding conventions for laboratory tests across institutions, such as the Logical Observation Identifiers Names and Codes (LOINC) system, is intended to improve interoperability and data sharing, which would extend to HbA1c testing.28 A recent systematic review noted, however, that implementation of the LOINC system varies across laboratories and requires intensive management to select the correct codes for a given test, creating multiple barriers to integration of laboratory results across health systems.28 POCT for HbA1c, while critical for rapid individual decision making during clinical encounters, may present an additional challenge to data completeness because tests that are not laboratory based may not be reported to an HIE. These issues represent remaining barriers to generating complete reports that identify delays in HbA1c testing for patients with poorly controlled diabetes.
Secondary Findings
This analysis identified a high frequency of missed appointments among patients with poorly controlled diabetes and delays in care. Providers recommended follow-up testing in less than half of cases, though this may reflect documentation practices rather than intended management. This may be especially true for POCT, where it is anticipated that the patient will have repeat testing done when they return to the clinic. Patients who miss appointments are known to have poor diabetic outcomes, including worse glycemic control29–32 and increased mortality.30 In one study, a missed appointment rate of > 20% over a three-year period was associated with worse HbA1c results compared with patients who missed fewer visits, suggesting a dose-response relationship with the rate of missed appointments.29 In our sample, the number of patients who missed their follow-up appointment during the six-month interval after their abnormal results was strikingly high, suggesting a primary target for interventions to decrease delays in testing specifically.
Despite limitations of these data, our health care organization uses EHR data extensively for quality improvement in primary care. To address errors in attribution of patients, providers are encouraged to review their panels at the beginning of each calendar year to identify patients who are no longer under their care. To address missingness of data, practices are provided with lists of patients who have claims for tests that are not in the EHR, including HbA1c. When data are identified through this process or through provider review of Care Everywhere or our state-wide HIE, there is a standard workflow for data entry into structured locations in the EHR. Practices are also provided with lists of patients who have missing HbA1c data so staff can conduct outreach and schedule appropriate follow up visits and/or laboratory testing. To address barriers to care, the health system is developing a systematic process to screen for social needs and refer patients to appropriate resources (e.g., social work, case management, pharmacy). Through the Baltimore Metropolitan Diabetes Regional Partnership, a collaboration between Johns Hopkins University, the University of Maryland, and community partners, patients with diabetes will have increased access to diabetes self-management training and wraparound services to improve engagement with chronic care.33 Altogether, these interventions are designed to address system- and patient-level needs to improve diabetes care delivery. However, these potential solutions are also labor intensive, particularly transfer of data into structured fields, and require dedicated personnel. This highlights the need for multi-level system changes, beyond the institutional level, to improve interoperability broadly through more efficient means.
Our results suggest that delays in HbA1c testing may be an intervenable step in the diabetes care pathway for patients with poorly controlled diabetes, who are at especially high risk for morbidity and mortality. Interventions targeting follow-up visits and rescheduling of missed appointments may be a good strategy to limit delays in care for this at-risk population. Future studies examining how patients with and without delays in care access specialty services in our population could help to maximize referrals to services that are associated with improved outcomes (e.g., pharmacy, diabetes education).
Limitations
This was a retrospective chart review from a single EHR and may not be generalizable to settings that employ a different practice model or have a dissimilar patient mix. For example, a closed system, such as an integrated managed care consortium (e.g., Kaiser Permanente), may have less difficulty properly assigning patients to PCPs and will have automated sharing of results for HbA1c testing. Additionally, locations with fewer options for care (e.g., rural areas with decreased access to endocrinologists34) may see less movement between providers and systems than our urban-suburban practices. These systems likely have unique barriers and facilitators to care that are not reflected here.
To ensure a rigorous design, we used standardized data extraction methods and double-reviewed charts for our primary outcome to optimize validity. We also stratified our study population by HbA1c assay type (phlebotomy vs. POCT) to account for confounding and had a balanced representation of patients who identified as Black or African American, allowing our results to be applied across a spectrum of populations. It is possible that patients who were identified as having delays based on our review had appropriate follow-up testing that was not captured in our local EHR (for example, reported in our linked HIE, which is restricted from inclusion in research activities). However, if this were common, our data would in fact underestimate the rate of misclassification. Our EHR report only included patients who had a suspected delay in HbA1c testing, so we were unable to compare patients with versus without delays. Future studies comparing factors between patients with and without testing delays for type 2 diabetes, including testing type and timing of result notification, use of specialty services, and rates of missed appointments, may identify how to leverage communication and follow-up as strategies to improve patient engagement. Additionally, understanding how recent policy changes to expand access to telemedicine have impacted HbA1c testing will be important to address any gaps in care that may emerge due to this technology. Our evaluation of contributing factors was limited to written documentation and may not be representative of clinical intent or recommendations made verbally during patient interactions. We did, however, review orders for activity related to diabetes care (e.g., HbA1c orders or specialty referrals) to ensure we captured the greatest detail within the limitations of the EHR.
CONCLUSION
We developed an EHR report to capture delays in HbA1c testing for patients with poorly controlled diabetes. Most suspected delays in HbA1c testing were confirmed; however, a substantial minority were misclassified due to missing data or follow-up care outside the health system. Current solutions to improve data quality for HbA1c are labor intensive and highlight the need for better integration of health care data. Missed appointments were commonly noted among patients with delays in care and are a potential target for improvement.
Acknowledgements
J.L.S. and D.D. were supported by the National Heart, Lung, and Blood Institute (5T32HL007180, PI: Hill-Briggs and 5T32HL110952, PI: Alan Pack, respectively). Neither institute was involved in conducting this research or in preparation of this article. S.I.P. received support from the Johns Hopkins Institute for Clinical and Translational Research (ICTR), which is funded in part by Grant Number KL2TR001077 from the National Center for Advancing Translational Sciences (NCATS), a component of the National Institutes of Health (NIH), and the NIH Roadmap for Medical Research. The contents of this article are solely the responsibility of the authors and do not necessarily represent the official view of the Johns Hopkins ICTR, NCATS, or NIH. Jamie Briddell, RN and Maura Mohan, RN contributed to development of the standardized operating procedure for data extraction, and Jamie Briddell, RN performed secondary chart review for interrater reliability.
Conflicts of Interest
S.I.P is an Assistant Editor for The Joint Commission Journal on Quality and Patient Safety. N.M.M is the co-inventor of a virtual diabetes prevention program. Under a license agreement between Johns Hopkins HealthCare Solutions and the Johns Hopkins University, N.M.M. and the University are entitled to royalty distributions related to this technology. This arrangement has been reviewed and approved by the Johns Hopkins University in accordance with its conflict-of-interest policies. This technology is not described in this study. JLS is a co-investigator on a research project funded by NovoNordisk Inc. The primary aim of the project is to create and pilot a clinical decision support tool to assist clinicians when talking to their patients about weight and obesity treatment. This project is not addressed or referenced in this publication.
References
1. CDC. National Diabetes Statistics Report, 2020. Centers for Disease Control and Prevention, US Department of Health and Human Services; 2020. https://www.cdc.gov/diabetes/pdfs/data/statistics/national-diabetes-statistics-report.pdf. Accessed October 8, 2021.
2. Gallup-Sharecare. State of American Well-Being: 2017 State and Community Rankings for the Prevalence of Diabetes; 2018:12. https://wellbeingindex.sharecare.com/wp-content/uploads/2018/11/Gallup-Sharecare-State-of-American-Well-Being_2017-Diabetes-Rankings_vFINAL.pdf. Accessed October 8, 2021.
3. American Diabetes Association. 6. Glycemic targets: Standards of Medical Care in Diabetes-2021. Diabetes Care 2021;44(Suppl 1):S73–S84.
4. CMS-QPP. Quality ID #1 (NQF 0059): Diabetes: Hemoglobin A1c poor control; 2019. https://qpp.cms.gov/docs/QPP_quality_measure_specifications/CQM-Measures/2019_Measure_001_MIPSCQM.pdf. Accessed October 8, 2021.
5. Hysong SJ, Sawhney MK, Wilson L, et al. Understanding the management of electronic test result notifications in the outpatient setting. BMC Med Inform Decis Mak 2011;11:22.
6. Singh H, Thomas EJ, Sittig DF, et al. Notification of abnormal lab test results in an electronic medical record: do any safety concerns remain? Am J Med 2010;123(3):238–244.
7. Cohen DJ, Dorr DA, Knierim K, et al. Primary care practices' abilities and challenges in using electronic health record data for quality improvement. Health Aff (Millwood) 2018;37(4):635–643.
8. Hogan WR, Wagner MM. Accuracy of data in computer-based patient records. J Am Med Inform Assoc 1997;4(5):342–355.
9. Smith PC, Araya-Guerra R, Bublitz C, et al. Missing clinical information during primary care visits. JAMA 2005;293(5):565–571.
10. Weiskopf NG, Weng C. Methods and dimensions of electronic health record data quality assessment: enabling reuse for clinical research. J Am Med Inform Assoc 2013;20(1):144–151.
11. Bowman S. Impact of electronic health record systems on information integrity: quality and safety implications. Perspect Health Inf Manag 2013;10:1c.
12. Winden TJ, Boland LL, Frey NG, Satterlee PA, Hokanson JS. Care Everywhere, a point-to-point HIE tool: utilization and impact on patient care in the ED. Appl Clin Inform 2014;5(2):388–401.
13. Kern LM, Kaushal R. Electronic health records and ambulatory quality. The authors' reply. J Gen Intern Med 2013;28(9):1133.
14. Rathert C, Porter TH, Mittler JN, Fleig-Palmer M. Seven years after Meaningful Use: physicians' and nurses' experiences with electronic health records. Health Care Manage Rev 2019;44(1):30–40.
15. Chase DA, Ash JS, Cohen DJ, Hall J, Olson GM, Dorr DA. The EHR's roles in collaboration between providers: a qualitative study. AMIA Annu Symp Proc 2014;2014:1718–1727.
16. Benkert R, Dennehy P, White J, Hamilton A, Tanner C, Pohl JM. Diabetes and hypertension quality measurement in four safety-net sites: lessons learned after implementation of the same commercial electronic health record. Appl Clin Inform 2014;5(3):757–772.
17. Hersey CL, Tant E, Berzin OKG, Trisolini MG, West SL. Moving from quality measurement to quality improvement: applying Meaningful Use lessons to the Quality Payment Program. Perspect Health Inf Manag 2019;16(Fall):1b.
18. Chan KS, Fowles JB, Weiner JP. Review: electronic health records and the reliability and validity of quality measures: a review of the literature. Med Care Res Rev 2010;67(5):503–527.
19. D'Amore JD, McCrary LK, Denson J, et al. Clinical data sharing improves quality measurement and patient safety. J Am Med Inform Assoc 2021;28(7):1534–1542.
20. Johnson SG, Speedie S, Simon G, Kumar V, Westra BL. Quantifying the effect of data quality on the validity of an eMeasure. Appl Clin Inform 2017;8(4):1012–1021.
21. Liss DT, Peprah YA, Brown T, et al. Using electronic health records to measure quality improvement efforts: findings from a large practice facilitation initiative. Jt Comm J Qual Patient Saf 2020;46(1):11–17.
22. Tamang SR, Hernandez-Boussard T, Ross EG, Gaskin G, Patel MI, Shah NH. Enhanced quality measurement event detection: an application to physician reporting. EGEMS (Wash DC) 2017;5(1):5.
23. CMS. Alternative payment models and the Quality Payment Program. https://innovation.cms.gov/innovation-models/qpp-information. Published 2021. Accessed October 14, 2021.
24. HHS reaches goal of tying 30 percent of Medicare payments to quality ahead of schedule [press release]. HHS.gov; 2016.
25. MacLean CH, Kerr EA, Qaseem A. Time out - charting a path for improving performance measurement. N Engl J Med 2018;378(19):1757–1761.
26. Casalino LP, Gans D, Weber R, et al. US physician practices spend more than $15.4 billion annually to report quality measures. Health Aff (Millwood) 2016;35(3):401–406.
27. Selvin E, Narayan KMV, Huang ES. Quality of care in people with diabetes. In: Cowie CC, Casagrande SS, et al., eds. Diabetes in America. 3rd ed. Bethesda, MD; 2018.
28. Stram M, Gigliotti T, Hartman D, et al. Logical Observation Identifiers Names and Codes for laboratorians. Arch Pathol Lab Med 2020;144(2):229–239.
29. Schectman JM, Schorling JB, Voss JD. Appointment adherence and disparities in outcomes among patients with diabetes. J Gen Intern Med 2008;23(10):1685–1687.
30. Currie CJ, Peyrot M, Morgan CL, et al. The impact of treatment noncompliance on mortality in people with type 2 diabetes. Diabetes Care 2012;35(6):1279–1284.
31. Karter AJ, Parker MM, Moffet HH, et al. Missed appointments and poor glycemic control: an opportunity to identify high-risk diabetic patients. Med Care 2004;42(2):110–115.
32. Nguyen DL, Dejesus RS, Wieland ML. Missed appointments in resident continuity clinic: patient characteristics and health care outcomes. J Grad Med Educ 2011;3(3):350–355.
33. Maryland.gov. Regional Partnership Catalyst Program. https://hscrc.maryland.gov/Pages/regional-partnerships.aspx. Published 2021. Accessed March 5, 2022.
34. Lu H, Holt JB, Cheng YJ, Zhang X, Onufrak S, Croft JB. Population-based geographic access to endocrinologists in the United States, 2012. BMC Health Serv Res 2015;15:541.