J Am Med Inform Assoc. 2016 Jun 6;24(1):123–129. doi: 10.1093/jamia/ocw064

An electronic documentation system improves the quality of admission notes: a randomized trial

Trevor Jamieson 1,2,3, Jonathan Ailon 1,3, Vince Chien 1,3, Ophyr Mourad 1,3
PMCID: PMC7654086  PMID: 27274016

Abstract

Objective: There are concerns that structured electronic documentation systems can limit expressivity and encourage long and unreadable notes. We assessed the impact of an electronic clinical documentation system on the quality of admission notes for patients admitted to a general medical unit.

Methods: This was a prospective randomized crossover study comparing handwritten paper notes to electronic notes on different patients by the same author, generated using a semistructured electronic admission documentation system over a 2-month period in 2014. The setting was a 4-team, 80-bed general internal medicine clinical teaching unit at a large urban academic hospital. The quality of clinical documentation was assessed using the QNOTE instrument (best possible score = 100), and word counts were assessed for free-text sections of notes.

Results: Twenty-one electronic-paper note pairs (42 notes) written by 21 authors were randomly drawn from a pool of 303 eligible notes. Overall note quality was significantly higher in electronic vs paper notes (mean 90 vs 69, P < .0001). The quality of free-text subsections (History of Present Illness and Impression and Plan) was significantly higher in the electronic vs paper notes (mean 93 vs 78, P < .0001; and 89 vs 77, P = .001, respectively). The History of Present Illness subsection was significantly longer in electronic vs paper notes (mean 172.4 vs 92.4 words, P = .0001).

Conclusions: An electronic admission documentation system improved both the quality of free-text content and the overall quality of admission notes. Authors wrote more in the free-text sections of electronic documents as compared to paper versions.

Keywords: electronic health records, health care quality, QNOTE, clinical documentation, note quality

BACKGROUND AND SIGNIFICANCE

Electronic health records (EHRs) are viewed as key components of high-quality and effective health care systems of the future.1–5 A quality improvement advantage of EHRs over traditional paper-based systems is the automated generation of data for secondary analyses,2,5,6 an advantage that has prompted many builders and purchasers of electronic systems to emphasize structured documentation over unstructured narratives. However, there are concerns that structured data entry results in documentation systems that limit expressivity, interfere with workflow, and generate excessively long and unreadable notes.6–9 Recent policy statements from the American Medical Informatics Association and the American College of Physicians echo these shortcomings of EHRs, noting that improving the quality and communicative potential of documentation is key to realizing the potential of future electronic systems.6,10

While many assessments of note quality have focused primarily on the presence or absence of specific data in the documentation,1,11–14 the QNOTE tool allows for an assessment that balances the existence of relevant data and how it is presented.15 QNOTE is a scoring rubric for the quality of documentation derived through a multistakeholder process that places strong emphasis on reader-centric characteristics like clarity, completeness, and organization.16 It was validated in a population of ambulatory patients with type 2 diabetes.15

Preprinted and structured paper-based admission documentation has been found to increase the recording of relevant clinical details while also creating shorter notes.17 To the best of our knowledge, however, no prospective evaluations of the impact of electronic systems on the quality of hospital admission documentation have been conducted. In particular, no evaluations have used an assessment metric like QNOTE that accounts for nuances of narrative, such as author-to-author expressivity.

OBJECTIVE

The goal of this randomized crossover study was to determine if the use of a semistructured electronic documentation system, compared to handwritten paper documentation, had an impact on the quality of admission notes written by the notes’ primary authors on a general internal medicine service at an academic hospital. Documentation was assessed using QNOTE.15

METHODS

Study design

This study used a prospective randomized crossover design to compare the quality of paper admission notes to that of admission notes generated by the same author, on different patients, using an electronic documentation system. An assessment of the reliability of the scoring rubric, QNOTE, in the inpatient setting was also done across 8 independent physician raters.

Setting

The study was conducted in an 80-bed inpatient academic general internal medicine (GIM) unit in Toronto, Ontario. The GIM unit had 4 teams that admitted patients from 12:00 noon until 8:00 the following morning on weekdays and all day on weekends. Each admitting team consisted of 1 senior resident who supervised a group of junior residents and medical students; the junior residents and medical students had primary responsibility for completing new patient admissions. Each junior resident or medical student was on call approximately every fourth day and admitted 1–3 patients per call.

Randomization

During each 4-week resident block, 2 admitting teams were randomly selected to complete paper admission notes for 2 weeks, followed by electronic admission notes for 2 weeks. The other 2 admitting teams were randomly selected to complete electronic admission notes for 2 weeks, followed by paper admission notes for 2 weeks.

Admission notes were eligible for inclusion in the study if the author of the note had completed at least 1 paper note and at least 1 electronic note during the same 4-week block. For each such author, 1 paper note and 1 electronic note were randomly selected using a random number generator (Excel for Mac 2011).
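As an illustration, a minimal sketch of this per-author selection step (the study used the random number generator in Excel for Mac 2011; the Python below, including its field names and seed, is a hypothetical reconstruction, not the actual procedure):

```python
import random

def select_note_pairs(notes, seed=2014):
    """For each author with at least 1 paper and 1 electronic note in
    the same block, randomly pick one note of each type.

    `notes` is a list of dicts with hypothetical keys 'author' and
    'medium' ('paper' or 'electronic').
    """
    rng = random.Random(seed)
    by_author = {}
    for note in notes:
        groups = by_author.setdefault(note["author"],
                                      {"paper": [], "electronic": []})
        groups[note["medium"]].append(note)

    pairs = []
    for groups in by_author.values():
        if groups["paper"] and groups["electronic"]:  # eligibility rule
            pairs.append((rng.choice(groups["paper"]),
                          rng.choice(groups["electronic"])))
    return pairs
```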

Population

Admission notes generated by primary authors on the 4 teaching teams were eligible for inclusion in this study. These notes were the primary authors’ work alone and had not yet been annotated or modified by supervising senior trainees or attending physicians. Admission notes generated by physicians on a fifth nonteaching team that admitted patients from 8:00 to 12:00 on weekdays were excluded. At times, the nonteaching service would be full or unavailable; notes completed between 8:00 and 12:00 by a teaching service in these circumstances were still included.

In addition, admission notes were excluded from the analysis if they were written by an individual who did not receive training on the electronic system, by triaging senior residents in the emergency department, by attending physicians, during an electronic system downtime, or on patients transferred to GIM from another service, such as the intensive care unit. Admission notes written by senior triaging residents or attending physicians at our institution are rare and typically written during periods of extreme workload. During the trial period, triaging senior residents and attending physicians were given the option of continuing to use paper temporarily while they indirectly became acquainted with the electronic system.

Residents rotated on GIM in 4-week blocks, and medical students rotated in 8-week blocks. The study was completed over an 8-week period between May 5, 2014, and June 29, 2014, which comprised 2 consecutive 4-week resident rotation blocks.

Residents and medical students who rotated on GIM during the study period were asked to complete written informed consent; notes written by those who consented were included in the review and analysis. The St Michael’s Hospital Research Ethics Board approved the study.

Intervention

At the beginning of the 2014 academic cycle, the hospital implemented a fully electronic admission documentation system on the GIM unit. Admission notes were semistructured, with typed data input. Two sections were entirely free text: History of Present Illness (HPI) and Impression and Plan (I&P). The other sections were semistructured, allowing for structured input through pick-lists and other clickable options combined with free-text annotation. The most highly structured section was the medication list, designed to act concurrently as an admission medication reconciliation document.

Outcome assessment

QNOTE

QNOTE is a scoring rubric for the quality of clinical documentation.15 An assessor judges a note on the attributes of clarity, completeness, brevity, currency, organization, prioritization, and sufficiency of information. A reviewer assesses a subset of these attributes for each of the note’s 12 primary sections (Chief Complaint, History of Present Illness, Problem List, Past Medical History, Medications, Allergies, Social and Family History, Review of Systems, Physical Examination, Assessment, Plan of Care, and Follow-up) as absent, not acceptable, partially acceptable, or fully acceptable. These 4 outcomes are scored as 0 points, 0 points, 50 points, or 100 points, respectively, with the overall score of a section being the average of the attribute scores for that section. The overall score of the note is the average score of the 12 required sections, scaled to a maximum value of 100.
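The scoring arithmetic lends itself to a short sketch (a minimal illustration of the scheme just described; the function names and the example ratings are ours, not part of the QNOTE instrument):

```python
from statistics import mean

# Point values for the 4 possible attribute ratings, as described above.
POINTS = {"absent": 0, "not acceptable": 0,
          "partially acceptable": 50, "fully acceptable": 100}

def section_score(attribute_ratings):
    """Average of the attribute scores for one section (0-100)."""
    return mean(POINTS[r] for r in attribute_ratings)

def overall_score(sections):
    """Average of the section scores; with all 12 required sections
    present, this is already on a 0-100 scale."""
    return mean(section_score(ratings) for ratings in sections.values())

# Hypothetical ratings of one section's attributes by one assessor.
hpi = ["fully acceptable", "partially acceptable", "fully acceptable"]
print(section_score(hpi))  # 83.3...
```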

The QNOTE instrument’s scoring components were initially derived through a rigorous multistakeholder process that involved clinicians, nursing staff, administrators, and patients from 2 tertiary medical centers, 2 community hospitals, and 2 ambulatory clinics.16 These parties participated in focus groups and interviews, with purposeful sampling to achieve maximal diversity of participants as well as saturation of codes and themes. The goal was a general-purpose scoring rubric with applicability across ambulatory and inpatient settings and diverse health care conditions. The tool was validated in a study of 300 clinical notes in an ambulatory population of patients with type 2 diabetes against a global score.15 However, neither QNOTE nor the global rating instrument (a simple “unacceptable,” “partially acceptable,” or “fully acceptable” scale) was disease- or setting-specific. Because the tool was derived in large part in a setting similar to ours, and because the validation was against a similarly general tool, we are confident that it is directly applicable to our population.

A key advantage of QNOTE is that it scores subsections individually, allowing for the differential assessment of free-text sections apart from the overall note. This is not possible with global scoring rubrics, such as the PDQI-9.18

Note transcription

To reduce potential reviewer bias due to handwriting or layout, all notes were transcribed into a common word-processing format, with headings written in all caps and text formatted in 11-point Arial font. This reformatting served to (1) de-identify both the authors and the patients and (2) remove potential rater-associated biases arising from poor handwriting or layout choices in the documents.

All notes were de-identified as to author, date and time of admission, and all patient-identifying data, including the dates of procedures and names of treating physicians and facilities. All notes were transcribed using consistent white space rules that included double line spacing after headings, subheadings, and paragraphs.

For the electronic notes, computer-generated headers were removed and highly structured parts of the note were converted into phrases using a prespecified schema: eg, a radio button providing a present/absent option for “ascites” would be converted into either “No ascites” or “Ascites present.”
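For illustration only, a minimal sketch of such a schema, limited to the ascites example given in the text (the data structure and names are hypothetical):

```python
# Hypothetical prespecified schema: each structured element maps its
# selected value to the prose phrase used in the transcript.
SCHEMA = {
    "ascites": {"present": "Ascites present", "absent": "No ascites"},
}

def render(field, value):
    """Convert one structured input into its transcription phrase."""
    return SCHEMA[field][value]

assert render("ascites", "absent") == "No ascites"
```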

For the paper notes, hand-drawn symbols that could not be created with a maximum 2-key combination were replaced with the most appropriate word in context: eg, “↑’ing” could be replaced by “increasing.” Paper notes were ordered assuming a top-to-bottom, left-column-to-right-column order unless, through the use of dividers, the author had implied another order.

Investigations that involved a symbolic representation of laboratory values were recreated using MS PowerPoint for Mac 2011, without any annotation or explanation.

No attempt was made to reorder sections or rename section headings. Bullets, abbreviations, and other structural marks were maintained as written. Other than the replacement of symbols as above, no author-generated content was modified in any way.

A single researcher transcribed the notes. During the transcription process, 23 phrases from the paper notes were extremely difficult to read. These phrases were examined independently by a second researcher, resulting in a consensus reading for 22 of the 23 phrases. One phrase could not be deciphered and was removed from the note, but this deletion was not felt to change the overall meaning of the note.

At the time of transcription, information available in the notes was used to calculate the patients’ Charlson and age-adjusted Charlson comorbidity indices19 so that the severity of illness of patients in the respective groups could be compared. The total number of admissions on the same day as the studied note was captured as a measure of workload. Basic demographic information on the patients (age, gender) and on the notes (weekend vs weekday, time of day) was gathered to look for systematic differences.

Outcomes

The primary outcome of the study was the quality of the documentation as measured by the QNOTE instrument on a 100-point scale. Secondary outcomes included the quality (a subjective measurement) and length (an objective measurement) of the 2 free-text sections of the notes: HPI and I&P. Beyond being more objective, length also reveals how authors behave with an electronic vs paper medium when given the freedom to be either more concise, which should allow for faster documentation, or more expansive, which should allow for more comprehensive documentation.

In the paper notes, the HPI was defined as any text included after a heading obviously marking that section up to and excluding the next heading. The I&P section was variably named, but included all text, not including headings, related to the summation of the case and any differential diagnosis and plans. The quality of the HPI was defined as the average score for QNOTE section 2 (“History of Present Illness (HPI)”). The quality of the I&P was defined as the average of the average scores of QNOTE sections 10 through 12 (“Assessment,” “Plan of Care,” and “Follow-up Information”). Both scores were again out of 100 points.

Section lengths were measured using the word-count function in MS Word for Mac 2011.

Evaluation and analysis

Eight physician raters (3 general internists and 5 internal medicine subspecialists) reviewed all notes. Each rater received the notes in a random order and graded them using the QNOTE instrument. As in the original QNOTE study, no training on the use of the instrument was given to the raters.

All quality measures and section lengths were analyzed using paired t-tests. The P-value required for statistical significance was set at a level of 0.01 (Bonferroni correction for 1 primary and 4 secondary outcomes). Interrater reliability was measured using the average intraclass correlation (ICC) for consistency. Statistical analyses were completed in Stata 12.1 for Mac.
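A minimal sketch of one such comparison (the study used Stata 12.1 for Mac; the Python below, with hypothetical score arrays, illustrates the same paired test judged against the corrected threshold):

```python
from scipy import stats

ALPHA = 0.01  # Bonferroni-corrected threshold for 5 outcomes

# Hypothetical paired QNOTE scores: one paper and one electronic note
# per author, listed in the same author order.
paper = [72, 65, 58, 81, 70, 66, 74, 60, 69, 77, 63, 71]
electronic = [88, 91, 85, 95, 89, 84, 92, 87, 90, 93, 86, 94]

res = stats.ttest_rel(electronic, paper)
print(res.statistic, res.pvalue, res.pvalue < ALPHA)
```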

Using the test characteristics of the QNOTE instrument as defined in the study by Burke et al,15 we set a lower threshold of 12 pairs of notes, or 24 notes, for our study to be adequately powered to detect an increase of 10 points on the QNOTE instrument (population mean 65 points, alternate mean 75 points, standard deviation 8.84, 2-sided alpha 0.01, power 0.90, single-sample power calculation). There is no reference standard for a clinically meaningful difference on the QNOTE scale; 10 points, or 10% of the overall possible score, was considered a reasonable difference that would correspond to a true difference in quality to a reader.
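The stated threshold of 12 pairs can be reproduced with a normal-approximation single-sample calculation (a sketch under the parameters quoted above; the study’s exact method may have differed):

```python
from math import ceil
from scipy.stats import norm

def single_sample_n(mu0=65, mu1=75, sd=8.84, alpha=0.01, power=0.90):
    """Two-sided, single-sample normal-approximation sample size."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return ceil((z * sd / abs(mu1 - mu0)) ** 2)

print(single_sample_n())  # 12 pairs of notes
```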

RESULTS

A total of 564 admissions to GIM were identified during the study period, of which 303 notes by 21 authors were available for analysis (Figure 1). The authors consisted of 14 first-year residents and 7 third-year medical students. For each author, 1 paper note and 1 electronic note were selected, yielding a total of 42 transcribed and evaluated notes.

Figure 1. Summary of Assessed Notes and Reasons for Exclusion.

There were no significant differences between the electronic and paper notes in the day of the week of the admission, the time of day of the admission, or the total number of admissions on the day the note was written (Table 1). There was also no significant difference in the percentage of authors whose first note was paper vs electronic, nor any difference in major patient characteristics, including gender, age, and comorbidity as assessed by the Charlson and age-adjusted Charlson comorbidity indices.19

Table 1.

Baseline characteristics of analyzed notes (all confidence intervals 95%)

Paper Electronic P-value
Note Characteristics
Day of week (# notes)
 Monday to Friday 12 13 .75
 Saturday to Sunday 9 8
Time of Admission (# notes)
 0800–1600 3 2 .77
 1600–2400 12 11
 2400–0800 6 8
Admissions to GIM same day (#) 11 (9–12) 11 (10–12) .72
Author’s first note (%) 43 (22–64) 57 (36–78) .36
Patient Demographics
Sex (% male) 52 (31–73) 43 (22–64) .56
Age (years) 67.3 (59.0–75.7) 63.8 (54.6–73.0) .56
Charlson comorbidity index (CCI)19 2.4 (1.5–3.3) 2.9 (2.1–3.7) .47
Age-adjusted CCI19 4.8 (3.3–6.2) 5.1 (3.8–6.4) .72

Local Reliability of QNOTE

The average ICC was 0.93 for the overall quality measure (average ICC, 2-way random effects, consistency of agreement, 42 targets, 8 judges, 95% CI, 0.90-0.96), indicating a very high level of agreement across the 8 independent raters. Average ICC for the HPI subsection quality was 0.78 (0.66-0.87) and for the I&P was 0.78 (0.67-0.87).
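For reference, one standard formulation of the average-measures consistency ICC described here is McGraw and Wong’s ICC(C,k); the study computed it in Stata, so this Python sketch is illustrative only:

```python
import numpy as np

def icc_consistency_avg(scores):
    """ICC(C,k): average-measures, consistency, two-way model, for an
    (n notes x k raters) matrix of QNOTE scores."""
    x = np.asarray(scores, dtype=float)
    n, k = x.shape
    grand = x.mean()
    ss_total = ((x - grand) ** 2).sum()
    ss_notes = k * ((x.mean(axis=1) - grand) ** 2).sum()
    ss_raters = n * ((x.mean(axis=0) - grand) ** 2).sum()
    ms_notes = ss_notes / (n - 1)
    ms_error = (ss_total - ss_notes - ss_raters) / ((n - 1) * (k - 1))
    return (ms_notes - ms_error) / ms_notes

# In this study: n = 42 notes (targets) rated by k = 8 judges.
```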

The ICCs for our paper and electronic notes tended to be lower than those in the Burke study,15 though with overlapping confidence intervals in all instances (Table 2). The one exception was the overall score for paper notes, for which the ICC in our study was higher (Table 2).

Table 2.

Comparison of ICCs for the HPI, I&P, and overall scores in this study as compared to the prior validation of QNOTE by Burke15

QNOTE Section Paper (this study) Paper (Burke et al.)15 Electronic (this study) Electronic (Burke et al.)15
HPI 0.72 (0.50-0.87) 0.78 (0.69-0.87) 0.61 (0.30-0.82) 0.87 (0.78-0.96)
I&P 0.76 (0.56-0.89) N/A 0.74 (0.53-0.88) N/A
Assessment 0.71 (0.47-0.86) 0.81 (0.72-0.91) 0.49 (0.084-0.76) 0.79 (0.62-0.96)
Plan of Care 0.72 (0.50-0.87) 0.86 (0.78-0.94) 0.62 (0.32-0.82) 0.81 (0.65-0.96)
Follow-up Information 0.52 (0.12-0.77) 0.75 (0.65-0.85) 0.72 (0.48-0.87) 0.74 (0.48-1.00)
Overall 0.92 (0.86-0.96) 0.79 (0.75-0.84) 0.68 (0.43-0.85) 0.80 (0.75-0.86)

Quality and length of clinical documentation

Electronic notes were judged to be of significantly higher quality than their paper counterparts [90 (86-93) vs 69 (61-77) points; 99% CIs; P < .0001] (Table 3). The HPI was also judged to be of significantly higher quality [93 (89-98) vs 78 (70-85) points; 99% CIs; P < .0001] and was 1.9 times longer [172.4 (122.7-222.0) vs 92.4 (69.6-115.2) words; 99% CIs; P = .0001] (Table 3). The I&P was likewise of significantly higher quality [89 (84-94) vs 77 (70-84) points; 99% CIs; P = .0012] and was 1.3 times longer [140.4 (114.7-166.1) vs 105.5 (77.2-133.7) words; 99% CIs; P = .037], although this latter length finding did not meet our significance threshold corrected for 5 measured outcomes.

Table 3.

Analysis of overall, HPI, and I&P quality and length of HPI and I&P

Paper Note, Mean (99% CI) Electronic Note, Mean (99% CI) P-value
Primary Outcome
Overall Quality (QNOTE) 69 (61-77) 90 (86-93) <.0001
Secondary Outcomes
HPI Quality (QNOTE) 78 (70-85) 93 (89-98) <.0001
I&P Quality (QNOTE) 77 (70-84) 89 (84-94) .0012
HPI Length (words) 92.4 (69.6-115.2) 172.4 (122.7-222.0) .0001
I&P Length (words) 105.5 (77.2-133.7) 140.4 (114.7-166.1) .037

A full section-by-section summary of the quality assessments shows significantly better quality across most sections of the note, except the chief complaint, the problem list, and the follow-up plan (Table 4).

Table 4.

Full section-by-section quality assessment (QNOTE score) of paper and electronic notes

QNOTE Section Paper Electronic P-value
1 Chief Complaint(s) 57 73 .053
2 HPI 78 93 <.0001
3 Problem (List) 87 91 .17
4 Past Medical History 85 93 .0002
5 Medications (List) 60 97 .0002
6 Adverse Drug Reactions and Allergies 66 87 .0025
7 Social and Family History 71 94 .0019
8 Review of Systems 33 88 <.0001
9 Physical Findings 77 96 <.0001
10 Assessment 80 95 .0001
11 Plan of Care 83 93 .0016
12 Follow-up Information 69 78 .0346
10-12 I&P 77 89 .0012
Overall 69 90 <.0001

Regression analyses relating HPI length to HPI quality, I&P length to I&P quality, and the combination of HPI and I&P lengths with overall quality show length to be significantly positively associated with quality. However, the absolute impact on quality scores is modest (Table 5).

Table 5.

Regression analysis of free-text section length as a correlate to free-text section and overall quality

Dependent Variable Independent Variable(s) Coefficient Standard Error P > |t| Adjusted R2
HPI Quality HPI Length 0.073 0.025 .005 0.16
I&P Quality I&P Length 0.16 0.031 <.001 0.38
Overall Quality HPI Length 0.068 0.025 .009 0.41
I&P Length 0.14 0.039 .001
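For illustration, a sketch of the kind of ordinary least squares regression summarized in Table 5 (the data below are synthetic stand-ins loosely shaped like the reported relationship; the study’s actual per-note measurements are not reproduced here):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Synthetic stand-ins for 42 notes: free-text lengths in words and an
# overall quality score built around the reported coefficients.
hpi_len = rng.normal(130, 50, 42)
ip_len = rng.normal(120, 40, 42)
quality = 55 + 0.07 * hpi_len + 0.14 * ip_len + rng.normal(0, 8, 42)

# Overall quality regressed on both free-text lengths (Table 5, row 3).
X = sm.add_constant(np.column_stack([hpi_len, ip_len]))
fit = sm.OLS(quality, X).fit()
print(fit.params)        # intercept and the two length coefficients
print(fit.rsquared_adj)  # compare with the reported adjusted R2
```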

DISCUSSION

This prospective randomized trial demonstrates an improvement, as measured by the QNOTE tool, in the quality of admission notes when authors use a computer-based documentation system compared to writing paper notes. Most sections of the note, including the unstructured free-text sections HPI and I&P, improved in quality when compared to paper notes by the same author. A significant increase in word count in the electronic free-text sections was also observed, although this related only modestly to the overall improvement in the quality of those sections.

Our study confirms and augments the retrospective findings of Burke and colleagues,20 whose 2015 study, also using the QNOTE instrument, found that electronic health records improved clinical note quality.

Using eye-tracking technology, a recent study found that readers systematically ignored most sections of electronic documentation, with the exception of the Impression and Plan, and that more than 90% of the content in verbal patient handoffs came from this same section.21 Our electronic documentation system improved the quality of this important section, as well as the History of Present Illness, while allowing the author complete freedom over structure and content. Thus, the imposition of structure is not a prerequisite for improvement in quality. Future studies should specifically assess the quality of structured vs free-text HPI and impression sections.

It is surprising that the same authors wrote more when an unrestricted free-text section gave them the freedom to write less. We had assumed, given the previously noted workflow challenges of structured documentation,6,10 that authors would use this freedom to write less and make up for temporal inefficiencies elsewhere in the note. Our free-text documentation had no features, such as macros allowing the import or prepopulation of text, that would increase the efficiency with which authors produce such text, so tooling does not account for the discrepancy. Multiple studies of the compositional process and the impact of word-processing technology on essay composition, however, confirm the tendency of authors to write more when typing than when writing by hand.22–25 This may in part be because typed prose tends to appear shorter than its handwritten counterpart.26

This increase in the quantity of documentation in narrative sections was associated with higher quality of that documentation, which may relate, in part, to the increased clarity and completeness that these extra words and phrases bring to a history. However, while our analysis shows a strong correlation between length and quality, it does not show a large absolute effect of length on quality: nearly doubling the HPI from 92 to 172 words, given a regression coefficient of 0.073, would increase the perceived quality of that section by less than 6 points out of 100 on average (80 words × 0.073 ≈ 5.8 points). Therefore, factors other than length account for the higher quality of electronic notes.

Authors of electronic documentation approach a writing task differently when they are typing vs writing by hand.27 In general, the composition of an admission note, which is comparable to the writing of an essay, can be viewed as a recursive activity with multiple processes: prewriting, writing, and revising.28,29 These processes are moderated by the individual and by the task environment, of which the composing medium is a core feature. Due to the real-time recursive nature of typed vs written text, authors tend to revise at a higher level (paragraph/overall structure vs word/phrase) when typing.22,30 As clinicians are often concurrently synthesizing and writing, a greater capacity for high-level revisions of a clinical document would be expected to lead to a higher-quality document. While essay writers often accommodate for the inflexibility of the handwriting medium by spending an increased amount of time on the thinking and planning (prewriting) phase,30 this approach is not feasible in the highly interrupted and busy clinical environment.31,32 Thus, the electronic entry of documentation likely has an impact not only on clinical workflow but also on the cognitive processes underlying that documentation and ultimately the resulting quality of the documentation. This is a key area for future study.

Limitations

This study has a number of limitations.

The improvements in the free-text sections of our documents could be a consequence of systematic bias across all raters, but the objective increase in the word count written by the same authors in the free-text sections counters this notion: at a minimum, authors approached the 2 tasks differently.

Although raters were not aware of whether a note was produced by handwritten or electronic means, the obvious structural similarities among the electronic notes make it essentially impossible to completely blind raters to the treatment. We mitigated this by having each rater assess the notes in a random order, and we believe that a high ICC across independent raters reduces, but does not eliminate, the possibility of systematic bias in favor of the electronic versions.

The QNOTE tool has not previously been used in an inpatient setting, its only prior validation being in an ambulatory population. However, the tool was developed for more general usage. The majority of participants used in the derivation of QNOTE were drawn from a general population of tertiary care and community hospitals,16 providing strong face validity for the generalization to inpatient general medical units.

The reliability of QNOTE in our study trended slightly lower than that seen in the validation study by Burke and colleagues.15 Averaging ratings across 8 raters significantly mitigates this issue. More than 50% of our raters were specialists, whereas in the original validation the raters were exclusively generalists (general internists and family physicians). It is possible that specialists have different expectations of the content of a note, especially if the note pertains to their domain of practice.

There may be significant quality lapses that our instrument did not capture. For example, advance directives or “code status,” or even process-oriented features like contingency plans, could be considered core features of any admission assessment for a GIM population, but are not captured in QNOTE. It is important that assessments of documentation quality account for both the “communication-based” quality as is captured in this study and, for items that are considered necessities, a more structural “present/absent” assessment of quality.

Our study does not address whether documentation quality correlates with quality of care. There are studies assessing the impact of paper- and computer-based documentation processes on care quality, but these studies do not evaluate the quality of the documentation as an intermediate process in that care.12,33

This study does not generalize well to notes in which the bulk of narrative documentation has been automatically generated by macros, imported electronically, or copied and pasted from other sources. In particular, one cannot assume that our finding that length was positively associated with quality, or even that electronic versions are superior to paper versions, would hold in an environment where this type of facilitated documentation may lead to “note bloat.”6 In such settings, it would be worth studying how the use of these tools impacts documentation quality specifically.

Our study does not address any differences that might exist between dictated and electronic data entry. A 2012 study comparing documentation methods to care outcomes found that physicians who dictated were not superior, on any of the studied quality metrics, to those who used either structured electronic or free-text typed notes.34 However, that study similarly did not evaluate the quality of the documentation as an intermediary.

Our study was unable to capture the process of documentation and how an electronic system creates new opportunities or barriers. A 2014 study in pediatric ICUs found that note quality was significantly worse when the author was caring for sicker patients or writing more notes on any given day.35 Additionally, whether such a system increases the time to complete documentation is not known, as accurate time-stamp data for paper notes are unavailable. Eighty-four percent of our admissions occurred between 16:00 and 8:00. More advanced time-and-motion or other workflow analyses, including how variance in workflows impacts the quality of documentation, are an area for further study.

Finally, our study results were obtained in learners and may not be generalizable to the broader population of more experienced clinicians or to nonacademic sites.

CONCLUSIONS

This prospective randomized study confirms that an electronic clinical documentation system improves note quality compared to handwritten documentation. Both the overall quality of documentation and the quality of its free-text components improved significantly. A significant increase in the quantity of documentation, while not completely explaining the increase in quality, confirms that authors approach free-text documentation differently when handwriting vs typing. These findings suggest that gains in quality associated with electronic medical records are not entirely the result of the automated imposition of structure. How an author cognitively approaches such a task in the 2 media, and how the external environment and clinical workflow affect the quality of clinical documentation, are key areas for further study.

ACKNOWLEDGEMENTS

Kaveh Shojania, Chaim Bell and Stephen Hwang provided advice on earlier drafts of this paper. Kathlyn Babaran-Henfrey and James Kitchens helped gather informed consent. Special thanks to Anne Trafford, Frank Garcea, Michael Freeman, Corinne Arnott, Grace Zuo and the many others in the St. Michael's Hospital Information Technology team who helped to make this application a reality.

CONTRIBUTIONS

All authors contributed to writing and reviewing the manuscript.

FUNDING

None.

COMPETING INTEREST

None.

REFERENCES

1. Soto CM, Kleinman KP, Simon SR. Quality and correlates of medical record documentation in the ambulatory care setting. BMC Health Serv Res. 2002;2(22):22–29.
2. Johnson KB, Ravich WJ, Cowan JA Jr. Brainstorming about next-generation computer-based documentation: an AMIA clinical working group survey. Int J Med Inform. 2004;73(9-10):665–674.
3. Institute of Medicine Committee on Data Standards for Patient Safety. Key capabilities of an electronic health record system. National Academies Press; 2003. www.nap.edu/catalog/10781/key-capabilities-of-an-electronic-health-record-system-letter-report. Accessed October 13, 2015.
4. Institute of Medicine Committee on Quality of Healthcare in America. To Err Is Human: Building a Safer Health System. National Academies Press; 2000. www.nap.edu/read/9728/chapter/1. Accessed October 13, 2015.
5. Miller RH, Sim I. Physicians' use of electronic medical records: barriers and solutions. Health Aff (Millwood). 2004;23(2):116–126.
6. Kuhn T, Basch P, Barr M, et al.; Medical Informatics Committee of the American College of Physicians. Clinical documentation in the 21st century: executive summary of a policy position paper from the American College of Physicians. Ann Intern Med. 2015;162(4):301–303.
7. Embi PJ, Weir C, Efthimiadis EN, et al. Computerized provider documentation: findings and implications of a multisite study of clinicians and administrators. J Am Med Inform Assoc. 2013;20(4):718–726.
8. Cimino JJ. Improving the electronic health record—are clinicians getting what they wished for? JAMA. 2013;309(10):991–992.
9. Rosenbloom ST, Denny JC, Xu H, et al. Data from clinical notes: a perspective on the tension between structure and flexible documentation. J Am Med Inform Assoc. 2011;18(2):181–186.
10. Cusack CM, Hripcsak G, Bloomrosen M, et al. The future state of clinical data capture and documentation: a report from AMIA's 2011 Policy Meeting. J Am Med Inform Assoc. 2013;20(1):134–140.
11. Stengel D, Bauwens K, Walter M, et al. Comparison of handheld computer-assisted and conventional paper chart documentation of medical records: a randomized, controlled trial. J Bone Joint Surg Am. 2004;86-A(3):553–560.
12. Callen J, McIntosh J, Li J. Accuracy of medication documentation in hospital discharge summaries: a retrospective analysis of medication transcription errors in manual and electronic discharge summaries. Int J Med Inform. 2010;79(1):58–64.
13. Gunningberg L, Fogelberg-Dahm M, Ehrenberg A. Improved quality and comprehensiveness in nursing documentation of pressure ulcers after implementing an electronic health record in hospital care. J Clin Nurs. 2009;18(11):1557–1564.
14. Schiff GD, Bates DW. Can electronic clinical documentation help prevent diagnostic errors? N Engl J Med. 2010;362(12):1066–1069.
15. Burke HB, Hoang A, Becher D, et al. QNOTE: an instrument for measuring the quality of EHR clinical notes. J Am Med Inform Assoc. 2014;21(5):910–916.
16. Hanson JL, Stephens MB, Pangaro LN, et al. Quality of outpatient clinical notes: a stakeholder definition derived through qualitative research. BMC Health Serv Res. 2012;12:407.
17. Goodyear HM, Lloyd BW. Can admission notes be improved by using preprinted assessment sheets? Qual Health Care. 1995;4(3):190–193.
18. Stetson PD, Bakken S, Wrenn JO, Siegler EL. Assessing electronic note quality using the physician documentation quality instrument (PDQI-9). Appl Clin Inform. 2012;3(2):164–174.
19. Charlson M, Szatrowski TP, Peterson J, et al. Validation of a combined comorbidity index. J Clin Epidemiol. 1994;47(11):1245–1251.
20. Burke HB, Sessums LL, Hoang A, et al. Electronic health records improve clinical note quality. J Am Med Inform Assoc. 2015;22(1):199–205.
21. Brown PJ, Marquard JL, Amster B, et al. What do physicians read (and ignore) in electronic progress notes? Appl Clin Inform. 2014;5(2):430–444.
22. Lee Y-J. A comparison of composing processes and written products in timed-essay tests across paper-and-pencil and computer modes. Assessing Writing. 2002;8(2):135–157.
23. Wolfe E, Bolton S, Feltovich B, et al. A comparison of word-processed and handwritten essays from a standardized writing assessment. ACT Research Report Series 93-8. 1993. www.act.org/research/researchers/reports/pdf/ACT_RR93-08.pdf. Accessed October 13, 2015.
24. Wolfe EW, Bolton S, Feltovich B, et al. The influence of student experience with word processors on the quality of essays written for a direct writing assessment. Assessing Writing. 1996;3(2):123–147.
25. Russell M, Haney W. Testing writing on computers. Education Policy Analysis Archives. 1997;5:3. www.epaa.asu.edu/ojs/article/view/604. Accessed October 13, 2015.
26. Sweedler-Brown CO. Computers and assessment: the effect of typing versus handwriting on the holistic scoring of essays. Res Teaching Dev Educ. 1991;8(1):5–14. www.jstor.org/stable/42801814. Accessed October 13, 2015.
27. Mamykina L, Vawdrey DK, Stetson PD, et al. Clinical documentation: composition or synthesis? J Am Med Inform Assoc. 2012;19(6):1025–1031.
28. Sommers N. Revision strategies of student writers and experienced adult writers. Coll Compos Commun. 1980;31(4):378–387.
29. Flower L, Hayes JR. A cognitive process theory of writing. Coll Compos Commun. 1981;32(4):365–387.
30. Cheung YL. Critical review of recent studies investigating effects of word processing-assisted writing and pen-and-paper writing on the quality of writing and higher level revisions. Procedia Soc Behav Sci. 2012;46:1047–1050.
31. Weigl M, Müller A, Vincent C, et al. The association of workflow interruptions and hospital doctors' workload: a prospective observational study. BMJ Qual Saf. 2012;21(5):399–407.
32. Westbrook JI, Coiera E, Dunsmuir WT, et al. The impact of interruptions on clinical task completion. Qual Saf Health Care. 2010;19:284–289.
33. Linder JA, Ma J, Bates DW, et al. Electronic health record use and the quality of ambulatory care in the United States. Arch Intern Med. 2007;167(13):1400–1405.
34. Linder JA, Schnipper JL, Middleton B. Method of electronic health record documentation and quality of primary care. J Am Med Inform Assoc. 2012;19(6):1019–1024.
35. Daphtary K. Computerized clinical documentation in the pediatric intensive care unit: quality of notes and factors that affect the quality. Scholar Archive. 2014. www.digitalcommons.ohsu.edu/etd/3591. Accessed October 13, 2015.
