Ophthalmology Science. 2023 Oct 6;4(1):100409. doi: 10.1016/j.xops.2023.100409

The Impact of Documentation Workflow on the Accuracy of the Coded Diagnoses in the Electronic Health Record

Thomas S Hwang,1 Merina Thomas,1 Michelle Hribar,1,2 Aiyin Chen,1 Elizabeth White1
PMCID: PMC10694743  PMID: 38054107

Abstract

Objective

To determine the impact of documentation workflow on the accuracy of coded diagnoses in electronic health records (EHRs).

Design

Cross-sectional study.

Participants

All patients who completed visits at the Casey Eye Institute Retina Division faculty clinic between April 7, 2022, and April 13, 2022.

Main Outcome Measures

Agreement between coded diagnoses and clinical notes.

Methods

We assessed the rate of agreement between the diagnoses in the clinical notes and the coded diagnoses in the EHR by manual review and examined the impact of the documentation workflow on the rate of agreement in an academic retina practice.

Results

Of 202 visits completed by 8 physicians, 78% (range by physician, 22%–100%) showed agreement between the coded diagnoses and the clinical notes. When physicians integrated diagnosis code entry with note composition, the agreement rate was 87.9% (range, 62%–100%). For physicians who entered the diagnosis codes separately from writing notes, the agreement rate was 44.4% (range, 22%–50%; P < 0.0001).

Conclusion

The visit-specific agreement between the coded diagnoses and the progress note can vary widely by workflow. The workflow and EHR design may be an important part of understanding and improving the quality of EHR data.

Financial Disclosure(s)

Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.

Keywords: Data quality, Electronic health records, Problem-oriented charting


The use of large-scale, real-world data holds promise to improve our understanding of a wide range of conditions, both rare and common.1 However, concerns about the quality of data from electronic health records (EHRs) may limit the validity and acceptance of these studies.2 For example, a recent paper highlighted how institutions spend significant resources manually verifying and adjusting coded diagnoses for quality measurement and reimbursement to avoid possible financial penalties.3 If EHR data, as recorded, are not reliable enough to support accurate inferences about quality outcomes, they may not be reliable enough for research.

The accuracy of coded diagnoses in the EHR is of fundamental importance in big data research,1,4 and there is a need to understand the factors that influence it. Studies of EHR data quality have typically assessed the usefulness of the data for specific use cases, such as disease surveillance5 and predictive models for specific outcomes.6 Some studies have assessed the accuracy of coded data,7 but because this often requires manual review,8 few studies have assessed the concordance between the coded data and the clinical documentation or the factors that influence it, such as physician workflow and EHR design.9 These factors, if associated with improved data quality, may help inform best practices for EHR use and design. This study examines the encounter-specific agreement between the International Classification of Diseases, 10th Revision (ICD-10) diagnoses in the EHR and the diagnoses found in the progress notes of an academic retina clinic, and its relationship to the documentation workflow.

Methods

This study adhered to the Declaration of Helsinki, and the Oregon Health and Science University’s Institutional Review Board approved the study. A requirement for participants' consent was waived by the Institutional Review Board. A reviewer (T.S.H.) extracted the diagnoses from the progress notes of visits to the faculty retina clinic at the Casey Eye Institute between April 7, 2022, and April 13, 2022, and compared them with the ICD-10 diagnosis codes recorded in the Epic EHR system for the same visits. In addition to agreement with the progress note, the diagnosis codes were assessed for appropriate specificity, severity, and laterality according to the Centers for Medicare and Medicaid Services (CMS) guidelines,10 which stipulate that (1) coded diagnoses are to reflect documented diagnoses or the reason for visit; (2) if a diagnosis exists that explains the reason for visit, it should be recorded instead of the symptom; (3) although nonspecific codes, such as retinal edema, are allowed when there is clinical uncertainty, the most specific clinically documented code should be used when possible; and (4) laterality and severity should be specified when clinically known and available for the specific code.

The reviewer then categorized the agreement between the progress note and the diagnosis codes as TRUE or FALSE. The reasons for disagreement were categorized as follows: (1) wrong severity (e.g., severe nonproliferative diabetic retinopathy coded as mild nonproliferative diabetic retinopathy), (2) wrong or missing laterality, (3) incorrect codes unrelated to any diagnosis in the note, (4) missing codes (a documented diagnosis was not coded), and (5) nonspecific codes, in which a nonspecific code, such as H35.81 (retinal edema), was used when a more specific diagnosis, such as branch retinal vein occlusion with macular edema, was evident in the progress note.
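
To make the review rubric concrete, the following is a minimal sketch of how such visit-level review results could be tabulated. The per-encounter record, field names, and example values are illustrative assumptions, not the authors' actual review instrument; only the category labels mirror the rubric above.

```python
from enum import Enum
from collections import Counter

class MismatchReason(Enum):
    WRONG_SEVERITY = "wrong severity"
    WRONG_OR_MISSING_LATERALITY = "wrong or missing laterality"
    INCORRECT_CODE = "code unrelated to any documented diagnosis"
    MISSING_CODE = "documented diagnosis not coded"
    NONSPECIFIC_CODE = "nonspecific code despite a more specific documented diagnosis"

# One record per encounter: whether the coded diagnoses agreed with the note
# and, if not, the reason assigned by the reviewer (hypothetical example data).
encounters = [
    {"agrees": True, "reason": None},
    {"agrees": False, "reason": MismatchReason.WRONG_SEVERITY},
    {"agrees": False, "reason": MismatchReason.MISSING_CODE},
]

agreement_rate = sum(e["agrees"] for e in encounters) / len(encounters)
mismatch_counts = Counter(e["reason"] for e in encounters if not e["agrees"])

print(f"Agreement rate: {agreement_rate:.1%}")
for reason, n in mismatch_counts.items():
    print(f"  {reason.value}: {n}")
```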

The documentation workflows were categorized into integrated charting, in which the physician used the coded diagnoses to create the narrative section of the note, and independent charting, in which the physician entered the diagnosis codes separately from composing the note. Integrated charting includes problem-based charting, which creates the progress note by compiling narrative notes attached to specific diagnoses in the problem list section, and charting that uses a function within Epic called “SmartLink,” which pulls the diagnoses selected for the encounter into the note as the starting point for the assessment and plan. We compared the rate of agreement between integrated and independent charting using generalized estimating equation (GEE) logistic regression models with exchangeable correlation matrices.
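
As a rough illustration of this analysis, a minimal sketch using Python and statsmodels follows, assuming a visit-level table with one row per encounter. The column names (agreement, workflow, physician_id) and the example data are assumptions for illustration, not the authors' actual code or data.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical visit-level data: one row per encounter, clustered by physician.
df = pd.DataFrame({
    "agreement": [1, 1, 0, 1, 0, 0, 1, 1],          # 1 = codes match the note
    "workflow": ["integrated"] * 4 + ["independent"] * 4,
    "physician_id": [1, 1, 2, 2, 3, 3, 4, 4],
})

# GEE logistic regression with an exchangeable working correlation structure
# accounts for the clustering of encounters within physicians.
model = smf.gee(
    "agreement ~ workflow",
    groups="physician_id",
    data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
result = model.fit()
print(result.summary())
```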

Results

The study included 202 encounters with 8 physicians. The physicians were on the same EHR system for an average of 12.1 years (range, 6–16 years). The encounters included 388 unique ICD10 codes. Age-related macular degeneration-related codes (H35.30-H35.32XX) were the most frequent (n = 77), followed by diabetic retinopathy-related codes (E10.3XXX and E11.3XXX, n = 50).

Of these encounters, 158 (78.2%) had ICD-10 codes that accurately reflected the diagnoses listed in the progress note with specificity that met the CMS standard. Of the 44 encounters with mismatches, 15 (34.1%) were because of wrong disease severity, 15 (34.1%) because of missing codes, 5 (11.4%) because of incorrect codes, 5 (11.4%) because of the use of nonspecific codes, and 4 (9.1%) because of wrong laterality.

Per physician (Fig 1), the rate of agreement ranged from 22.2% to 100%. Of the 8 physicians, 6 used integrated charting, of whom 5 used the problem-based charting function and 1 used a SmartLink that pulled the coded visit diagnoses; the other 2 physicians used an independent workflow. Those who used integrated charting had an agreement rate of 87% (range, 62%–97%), and those who used independent charting had an agreement rate of 44% (range, 22%–50%; P < 0.0001).
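
For readers who want to reproduce this kind of summary from visit-level review data, the following is a small sketch of the aggregation behind per-physician rates (as in Fig 1) and per-workflow rates. The table columns and values are hypothetical, not the study data.

```python
import pandas as pd

# Hypothetical visit-level review results (columns are illustrative).
df = pd.DataFrame({
    "agreement": [1, 1, 0, 1, 0, 0, 1, 1],
    "workflow": ["integrated"] * 4 + ["independent"] * 4,
    "physician_id": [1, 1, 2, 2, 3, 3, 4, 4],
})

# Per-physician agreement rates, analogous to the bars in Figure 1.
print(df.groupby("physician_id")["agreement"].mean())

# Per-workflow agreement rates and encounter counts.
print(df.groupby("workflow")["agreement"].agg(["mean", "count"]))
```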

Figure 1. Agreement rate per physician. Physicians 1–5 used problem-oriented charting (green), physician 6 used the visit-diagnosis SmartLink (blue), and physicians 7 and 8 used independent charting (orange).

Discussion

Our study found that the quality of coded diagnosis data, assessed against the CMS standard, varied widely from physician to physician. The visit-level agreement between coded diagnoses and the progress notes was significantly better when physicians used integrated charting, that is, when they used the coded diagnoses to compose the progress note. This finding suggests that EHR coding errors may be distributed in a nonrandom fashion and that understanding the workflow and EHR design may be important in evaluating data quality.

To our knowledge, this study is the first to suggest that a particular EHR design or documentation workflow could affect data quality in the outpatient setting. Li et al9 reported that problem-based charting can improve the quality of coded data in the EHR in the inpatient setting. Electronic health record systems offer variable levels of integration between structured data entry and note composition. Epic, the system studied here, offers a variety of workflows with different levels of documentation constraint, including problem-oriented charting, SmartLink-based charting, and free-form progress notes. Other systems, such as the Computerized Patient Record System used by Veterans Affairs hospitals, rely mostly on free-text progress notes. In contrast, NextGen’s EHR, by default, requires integrated charting, in which coded diagnoses are the required headers for the assessment and plan section. These user-interface design choices could affect the quality of data in different systems and should be considered when assessing data quality.

We found that an integrated workflow was associated with a higher rate of agreement but did not ensure it. Mismatches with the integrated workflow occurred when physicians documented additional or contradictory diagnoses in the narrative sections of the note. This reflects the reality of individual physician practices and the nature of the integrated workflow, which requires more “clicks” to enter discrete diagnoses than simply typing in the narrative. However, there are times when the available diagnosis codes do not sufficiently capture clinical uncertainty or the interplay between different conditions. Some flexibility may be necessary for good clinical documentation, even if it comes at the cost of data quality.11

One reason for the heterogeneity of workflows is the lack of defined best-practice patterns for EHR documentation and coding. Outside of critiques of recognized undesirable practices, such as copy forward12 and note bloat,13 few EHR best-practice guidelines exist. Although Weed’s14 classic discourse on problem-oriented charting outlines the ideal of “medical records that guide and teach,” it is not universally accepted as best practice because there are disagreements about how to address its shortcomings, such as working with multiple related problems and diagnostic uncertainty.15 In addition, problem-oriented charting per se does not necessarily address data quality for secondary use: in this study, even the physicians who wrote free-form progress notes with separate coding practices followed the principles of problem-oriented charting. Rather, the specific workflow that integrates diagnosis code entry with composition of the note appeared essential to improving data quality.16 Furthermore, other goals of EHR best practices, such as physician wellness,17 regulatory compliance, efficiency, and communication with patients and other providers,18 may not be met with this approach. Evidence-based EHR best-practice guidelines that balance these goals are needed.

Another possible reason for disagreement between the coded diagnoses and the progress note is the use of copy forward. It is common practice to copy forward the assessment and plan section from visit to visit and use it as a “parking lot” for relevant historical information, even if not all of the issues on the list are addressed at each visit. The physician may code only the diagnoses addressed at the visit but may not make clear in the note which diagnoses were not addressed, resulting in a discrepancy. However, even with a longitudinal review of the notes, it may not be possible to determine the physician’s intent unless the note was specifically edited to indicate which of the copy-forwarded diagnoses were addressed at the visit. This, in addition to avoiding the propagation of incorrect information, may be another good reason to discourage the practice.

This study is limited by its small size and nonrandomized, retrospective design. There may also have been individual physician-specific factors unrelated to workflow, such as familiarity with and commitment to coding requirements and facility with the system, that influenced the results. However, all of the physicians’ clinical practices had a similar patient mix and complexity, and all of the physicians had used the same system for ≥ 6 years, minimizing the effect of experience and training. The study also relied on a single reviewer to determine the accuracy of the diagnoses; although the rubric was straightforward, it nevertheless required some judgment, which may have made the evaluations more subjective than in a study performed with multiple reviewers. We also did not examine whether the integrated workflow, which is more structured, could have had a negative impact on physician efficiency and wellness.19 In addition, the study was not designed to detect any differences in patient outcomes. Finally, this study was conducted in a single EHR system in a single subspecialty clinic at an academic center. A study examining the effect of workflow on data quality across EHR systems and subspecialties is necessary to generalize our findings.

In conclusion, there is considerable variability in the agreement between the progress note and the coded diagnoses. Physicians who used an integrated documentation workflow had a significantly higher rate of agreement between the notes and the coded diagnoses. The EHR workflow and design may be an important part of understanding and improving the quality of EHR data.

Manuscript no. XOPS-D-23-00193R1.

Footnotes

Disclosures:

All authors have completed and submitted the ICMJE disclosures form.

The authors made the following disclosures:

A.C.: Grant – National Eye Institute K12 Grant; Support – Casey Eye Institute academic allowance.

E.S.W.: Support – National Institutes of Health (NIH), Grant from Research to Prevent Blindness to Casey Eye Institute.

T.S.H.: Support – National Institutes of Health, Research to Prevent Blindness.

M.H.: Grants – NIH R21LM013645, NIH R01LM013426; Payment – Moran Eye Center, University of Utah.

The other author has no proprietary or commercial interest in any materials discussed in this article.

Supported by the National Institutes of Health (grant no.: P30EY010572) and unrestricted departmental funding from Research to Prevent Blindness (New York, NY). The funding source had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.

Dr. Michelle Hribar, PhD, an editor of this journal, was recused from the peer-review process of this article and had no access to information regarding its peer review.

HUMAN SUBJECTS: Human subjects were included in this study.

This study adhered to the Declaration of Helsinki, and Oregon Health and Science University’s Institutional Review Board approved the study. A requirement for participants' consent was waived by the Institutional Review Board.

No animal subjects were used in this study.

Author Contributions:

Conception and design: Hwang.

Data collection: Hwang.

Analysis and interpretation: Hwang, Hribar, Chen, Thomas, White.

Obtained funding: N/A

Overall responsibility: Hwang, Hribar, Chen, Thomas, White.

References

1. Cheng C.Y., Soh Z.D., Majithia S., et al. Big data in ophthalmology. Asia Pac J Ophthalmol (Phila). 2020;9:291–298. doi: 10.1097/APO.0000000000000304.
2. Kotecha D., Asselbergs F.W., Achenbach S., et al. CODE-EHR best-practice framework for the use of structured electronic health-care records in clinical research. Lancet Digit Health. 2022;4:e757–e764. doi: 10.1016/S2589-7500(22)00151-0.
3. Saraswathula A., Merck S.J., Bai G., et al. The volume and cost of quality metric reporting. JAMA. 2023;329:1840–1847. doi: 10.1001/jama.2023.7271.
4. Kahn M.G., Callahan T.J., Barnard J., et al. A harmonized data quality assessment terminology and framework for the secondary use of electronic health record data. EGEMS (Wash DC). 2016;4:1244. doi: 10.13063/2327-9214.1244.
5. Wittenborn J.S., Lee A.Y., Lundeen E.A., et al. Validity of administrative claims and electronic health registry data from a single practice for eye health surveillance. JAMA Ophthalmol. 2023;141:534–541. doi: 10.1001/jamaophthalmol.2023.1263.
6. Mahmoudi E., Kamdar N., Kim N., et al. Use of electronic medical records in development and validation of risk prediction models of hospital readmission: systematic review. BMJ. 2020;369:m958. doi: 10.1136/bmj.m958.
7. Ashfaq H.A., Lester C.A., Ballouz D., et al. Medication accuracy in electronic health records for microbial keratitis. JAMA Ophthalmol. 2019;137:929–931. doi: 10.1001/jamaophthalmol.2019.1444.
8. Boland M.V. Assessing the quality of big data is critical as the stakes increase. JAMA Ophthalmol. 2023;141:541–542. doi: 10.1001/jamaophthalmol.2023.1561.
9. Li R.C., Garg T., Cun T., et al. Impact of problem-based charting on the utilization and accuracy of the electronic problem list. J Am Med Inform Assoc. 2018;25:548–554. doi: 10.1093/jamia/ocx154.
10. ICD-10-CM Official Guidelines for Coding and Reporting FY 2022, updated April 1, 2022 (October 1, 2021–September 30, 2022).
11. Rosenbloom S.T., Crow A.N., Blackford J.U., Johnson K.B. Cognitive factors influencing perceptions of clinical documentation tools. J Biomed Inform. 2007;40:106–113. doi: 10.1016/j.jbi.2006.06.006.
12. Weis J.M., Levy P.C. Copy, paste, and cloned notes in electronic health records. Chest. 2014;145:632–638. doi: 10.1378/chest.13-0886.
13. Apathy N.C., Hare A.J., Fendrich S., Cross D.A. I had not time to make it shorter: an exploratory analysis of how physicians reduce note length and time in notes. J Am Med Inform Assoc. 2023;30:355–360. doi: 10.1093/jamia/ocac211.
14. Weed L.L. Medical records that guide and teach. N Engl J Med. 1968;278:593–600. doi: 10.1056/NEJM196803142781105.
15. Chowdhry S.M., Mishuris R.G., Mann D. Problem-oriented charting: a review. Int J Med Inform. 2017;103:95–102. doi: 10.1016/j.ijmedinf.2017.04.016.
16. Institute of Medicine, Committee on the Learning Health Care System in America. Best Care at Lower Cost: The Path to Continuously Learning Health Care in America. Washington, DC: National Academies Press; 2013.
17. Kroth P.J., Morioka-Douglas N., Veres S., et al. Association of electronic health record design and use factors with clinician stress and burnout. JAMA Netw Open. 2019;2. doi: 10.1001/jamanetworkopen.2019.9609.
18. Wright E., Darer J., Tang X., et al. Sharing physician notes through an electronic portal is associated with improved medication adherence: quasi-experimental study. J Med Internet Res. 2015;17:e226. doi: 10.2196/jmir.4872.
19. Joukes E., Abu-Hanna A., Cornet R., de Keizer N.F. Time spent on dedicated patient care and documentation tasks before and after the introduction of a structured and standardized electronic health record. Appl Clin Inform. 2018;9:46–53. doi: 10.1055/s-0037-1615747.
