Introduction
Medical record documentation is increasingly used to meet complex and prescriptive medico-legal, regulatory and reimbursement requirements. Although much attention has been given to problem-oriented documentation since Lawrence Weed first described it in 1968,1 appropriate documentation at all levels of the physician-patient encounter is essential and, at the same time, challenging for the busy healthcare provider. Electronic medical records (EMR) have been found to improve quality and efficiency in healthcare, enhancing monitoring of medication errors and adverse drug events.2 Medical research involving retrospective data review frequently uses EMR as a primary source, and hospital infection surveillance based on variables extracted from EMR has demonstrated excellent utility.3 In this regard, identifying cases of urinary tract infection (UTI) using EMR data requires not only objective but also subjective clinical data (i.e., signs or symptoms).4,5 The diagnostic criteria for UTI, one of the most common bacterial infections, require a positive urine culture and a compatible clinical picture. For example, the probability of bladder infection exceeds 90% in women who experience dysuria and frequency without concurrent vaginal discharge or irritation.6 For these reasons, the evaluation of urinary tract symptoms, as well as their proper documentation, is crucial.
However, research in other diseases has found varying levels of agreement between symptom documentation in medical records and patient self-report.7–14 Usually, the healthcare provider will document fewer symptoms than the patient reports. To our knowledge, the correlation between medical record documentation and patient self-reporting of UTI symptoms is currently unknown. This study’s objective was to assess the level of agreement between patients’ self-reported UTI symptoms and those documented in their medical records by three distinct groups of healthcare providers.
Methods
We prospectively enrolled adult, hospitalized patients with Escherichia coli bacteriuria of greater than 50,000 colony-forming units per milliliter of urine diagnosed during routine medical care (either present upon, or developing after, hospital admission) between April 1, 2012 and February 28, 2013 at Barnes-Jewish Hospital, a 1250-bed tertiary care teaching hospital in St. Louis, Missouri. At the time of this study, admitting and treating physicians entered information on paper charts, and only the history and physical (H&P) and the discharge summary were subsequently dictated and transcribed into the EMR; in contrast, emergency department personnel and nursing staff entered information directly into the EMR. During the study period, daily information on positive urine cultures for E. coli and corresponding patient lists were obtained via automated query of microbiology laboratory data. Patient charts were reviewed for the following exclusion criteria: 1) age less than 18 years, 2) gross hematuria, 3) history of urologic malignancy or prostate cancer, 4) pregnancy, and 5) presence of a urinary catheter. Some of the exclusion criteria were chosen to remove non-infectious etiologies for urinary tract symptoms (e.g., gross hematuria may indicate an alternate etiology of symptoms, such as cancer or nephrolithiasis).

Within 24 hours of reported bacteriuria, we consented patients and conducted a self-report (SR) interview, using lay terminology, for the following signs and symptoms: fever, dysuria, frequency, retention, suprapubic pain, flank pain, chills, weakness, fatigue, dizziness, malodorous urine and confusion. Of these, we considered the first six to be primary UTI symptoms. We reviewed the EMR for documentation of UTI symptoms by three groups of healthcare providers: admitting/treating inpatient physicians (IP), inpatient nursing staff (RN) and, for patients admitted through the emergency room, emergency department physicians (ED). In addition, we reviewed IP paper documentation that was not transcribed into the EMR.

To test for differences in the mean number of symptoms documented per source, we used the Wilcoxon matched-pairs signed-rank test for non-parametric data. Positive and negative agreement were calculated between groups for each symptom, defined as the percentage of patients in whom self-report and the EMR agreed on the presence (positive agreement) or absence (negative agreement) of the symptom. The level of agreement between groups was assessed using Cohen’s kappa, a coefficient of agreement that corrects for chance. As a general guideline, a kappa of less than 0 indicates poor agreement, 0 – 0.2 slight agreement, 0.2 – 0.4 fair agreement, 0.4 – 0.6 moderate agreement, 0.6 – 0.8 substantial agreement, and 0.8 – 1.00 almost perfect agreement.9,15 Hospital-acquired UTI was defined as a UTI developing 48 hours or more after hospital admission. Data analysis was performed using SPSS 18 (SPSS Inc., Chicago, IL). The Human Research Protection Office at Washington University approved this study.
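To make the agreement measures concrete, the sketch below shows how positive agreement, negative agreement and Cohen’s kappa can be computed for a single symptom from paired present/absent indicators. This is an illustration only: the study’s analysis was performed in SPSS, not with this code, and the patient data in the example are hypothetical.

```python
# Minimal sketch of the agreement measures described above, for one symptom
# compared between two sources (e.g., self-report vs. inpatient physician notes).
# Positive/negative agreement are taken, per the definition above, as the
# proportion of all patients in whom both sources record the symptom as
# present/absent. This is not the study's SPSS analysis, only an illustration.

def agreement_measures(self_report, documented):
    """Positive agreement, negative agreement and Cohen's kappa for paired booleans."""
    n = len(self_report)
    both_present = sum(s and d for s, d in zip(self_report, documented))
    both_absent = sum((not s) and (not d) for s, d in zip(self_report, documented))
    positive_agreement = both_present / n
    negative_agreement = both_absent / n

    # Cohen's kappa: observed agreement corrected for agreement expected by chance.
    p_observed = (both_present + both_absent) / n
    p_sr = sum(self_report) / n      # prevalence of "present" in self-report
    p_doc = sum(documented) / n      # prevalence of "present" in documentation
    p_expected = p_sr * p_doc + (1 - p_sr) * (1 - p_doc)
    kappa = (p_observed - p_expected) / (1 - p_expected) if p_expected < 1 else 1.0
    return positive_agreement, negative_agreement, kappa

# Hypothetical example: 10 patients; symptom self-reported by 4, documented for 2.
sr  = [True, True, True, True, False, False, False, False, False, False]
doc = [True, False, False, True, False, False, False, False, False, False]
print(agreement_measures(sr, doc))  # (0.2, 0.6, ~0.55), i.e., "moderate" kappa
```

On real data, each list would hold one entry per enrolled patient for a given symptom and provider group; a standard implementation such as sklearn.metrics.cohen_kappa_score could equally be substituted for the kappa calculation.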
Results
A total of 43 patients were enrolled in the study. The median age at hospital admission was 61 years (range 22 – 93). Thirty-five (81%) were female and 27 (63%) were white. Twenty-seven patients (63%) were admitted through the emergency department, and there were five hospital-acquired UTIs (11.6%). A diagnostic ICD-9 code for UTI was entered in the medical record of 35 (81.3%) patients and for pyelonephritis in 4 (9%). Thirty-four patients (79%) self-reported at least one of the six primary symptoms. The most common self-reported symptoms were urinary frequency (23 cases; 53.5%), retention (18 cases; 41.9%), flank pain, suprapubic pain and fatigue (16 cases each; 37.2%), followed by dysuria (13 cases; 30.2%), as shown in Table 1. For the nine patients who self-reported none of the six primary symptoms, IP and ED records matched 100%, with no symptoms documented in these patients’ charts; despite the lack of symptoms, an ICD-9 code for UTI was entered for all nine patients upon discharge. Across all 12 symptoms captured, patients reported significantly more symptoms than were recorded in the medical record by IP, RN or ED [mean 2.5 (SD ±2.0) versus 0.7 (SD ±1.3) for IP, 0.2 (SD ±0.45) for RN, and 1.0 (SD ±1.4) for ED, respectively; Wilcoxon signed-rank test, p < 0.001]. Among the five cases of hospital-acquired UTI (all symptomatic by SR), four had no symptoms documented in the medical record by IP or RN, yet three of these four were assigned ICD-9 codes for UTI upon discharge from the hospital.
Table 1. Self-reported symptoms (SR) and medical record documentation of symptoms by healthcare providers (IP, ED, RN), with positive agreement, negative agreement and κ for each provider group versus self-report.

| Symptom | SR n=43 (%) | IP n=43 (%) | Positive agreement | Negative agreement | κ | ED n=27 (%) | Positive agreement | Negative agreement | κ | RN n=43 (%) | Positive agreement | Negative agreement | κ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Fever | 8 (18.6) | 4 (7) | 0.07 | 0.79 | 0.4* | 3 (11.1) | 0.03 | 0.81 | 0.5* | 2 (4.7) | 0.04 | 0.81 | 0.3* |
| Dysuria | 13 (30.2) | 6 (10.5) | 0.11 | 0.67 | 0.4* | 8 (29.6) | 0.18 | 0.55 | 0.4* | 1 (2.3) | 0.02 | 0.69 | 0.1 |
| Frequency | 23 (53.5) | 5 (8.8) | 0.11 | 0.45 | 0.2* | 5 (18.5) | 0.18 | 0.37 | 0.2* | 4 (9.3) | 0.02 | 0.39 | −0.1 |
| Retention¥ | 18 (41.9) | 1 (1.8) | 0.02 | 0.58 | 0.06 | 1 (3.7) | 0.03 | 0.59 | 0.1 | 0 | 0 | 0.58 | |
| Suprapubic pain¥ | 16 (37.2) | 3 (5.3) | 0.09 | 0.6 | 0.2* | 1 (3.7) | 0.03 | 0.63 | 0.1 | 0 | 0 | 0.62 | |
| Flank pain | 16 (37.2) | 5 (8.8) | 0.09 | 0.6 | 0.2* | 7 (25.9) | 0.14 | 0.48 | 0.2 | 1 (2.3) | 0.02 | 0.62 | 0.07 |
| Chills¥ | 12 (27.9) | 4 (7) | 0.07 | 0.69 | 0.2* | 2 (7.4) | 0.07 | 0.77 | 0.4* | 0 | 0 | 0.72 | |
| Weakness | 12 (27.9) | 2 (3.5) | 0.02 | 0.69 | 0.07 | 6 (22.2) | 0.07 | 0.59 | 0.09 | 2 (4.7) | 0.02 | 0.69 | 0.07 |
| Fatigue¥ | 16 (37.2) | 3 (5.3) | 0.04 | 0.6 | 0.1 | 2 (7.4) | 0.07 | 0.63 | 0.2* | 0 | 0 | 0.62 | |
| Dizziness¥ | 9 (20.9) | 3 (5.3) | 0.04 | 0.76 | 0.2* | 5 (18.5) | 0.11 | 0.7 | 0.4* | 0 | 0 | 0.79 | |
| Malodorous urine¥ | 9 (20.9) | 1 (1.8) | 0.02 | 0.34 | 0.1* | 1 (3.7) | 0.03 | 0.77 | 0.2* | 0 | 0 | 0.79 | |
| Confusion¥ | 4 (9.3) | 2 (3.5) | 0.02 | 0.88 | 0.2* | 3 (11.1) | 0.03 | 0.88 | 0.4* | 0 | 0 | 0.9 | |

SR: self-report; IP: inpatient physician; RN: nursing staff; ED: emergency department physician.
* p < 0.05.
¥ No measures of correlation (κ) were possible between SR and RN.
Symptom agreement
Agreement between self-report and the three groups of healthcare providers varied. In general, negative agreement was considerably higher than positive agreement. Negative agreement ranged from 0.47 to 0.88 between SR and IP, from 0.4 to 0.9 between SR and RN, and from 0.48 to 0.89 between SR and ED. Positive agreement, although low overall, was highest for dysuria and frequency between SR and ED (both 0.18) and between SR and IP (both 0.11). Table 1 shows the numbers and proportions of symptoms for each group (SR, IP, RN and ED) as well as the κ coefficients. There was slight to fair agreement between SR and medical record documentation by IP, RN or ED. Due to a lack of documentation, correlation between SR and RN could not be calculated for several symptoms (retention, suprapubic pain, chills, fatigue, dizziness, malodorous urine and confusion).
Discussion
In this study, we found low correlation between self-reported urinary symptoms and their documentation in medical records by three healthcare provider groups. Even when looking at the symptoms most specific for UTI, correlation between SR and IP (except for urinary retention) and between SR and ED (except for urinary retention and suprapubic pain) was disappointing. In all healthcare provider groups, low positive agreement and high negative agreement indicate an overall trend towards underdocumentation of symptoms in the medical record. The symptoms with relatively higher positive agreement were dysuria and frequency (0.11 with IP and 0.18 with ED for both symptoms), suggesting that physicians ask about these symptoms more frequently or are more likely to document them when evaluating patients for UTI. Indeed, it is well established that dysuria and frequency have good diagnostic utility, especially when combined with a positive dipstick result.16 Because symptoms other than dysuria and frequency lack specificity, there may be a tendency not to document them in the medical record. An example is the non-specific symptom fatigue, which was reported by 37.2% of our patients but documented in only 4% of the IP notes and 7% of the ED notes. Other studies that included fatigue in conditions for which it lacks specificity also reported poor correlation between self-report and medical records for this symptom.7,11,12
Our results can be interpreted in different ways. Busy healthcare provider schedules (focusing patient interviews on only the most relevant UTI symptoms), recall errors at the time of medical record documentation (due to delays between patient encounters and documentation, with possible interim encounters), dismissal of symptoms considered less relevant by providers, or over-reporting of symptoms by patients could all have contributed to the low correlations between self-report and corresponding documentation. Recall bias among patients may lead to fewer self-reported symptoms (when interviews occur long after the initial patient-provider encounter) or to more self-reported symptoms (resulting from a thorough, dedicated interview in the research setting). Interviewing patients within 24 hours of diagnosis was intended to mitigate this time factor. Finally, patients’ health literacy, as well as the particular circumstances of the provider-patient encounter, may affect symptom reporting, appropriate documentation, or both. For nursing staff documentation, low correlation was expected because this group of healthcare providers is not required to document specific symptoms in the medical record and may do so only episodically. However, the comparison across groups revealed how low agreement and correlation were even for physician documentation, although physicians are expected to provide proper and complete documentation in the EMR.
For patients with hospital-acquired UTIs, a small subgroup in this study, we found that in most cases the treating physicians did not document symptoms. Although this study did not include patients with catheter-related UTI, more research is required in that population as EMR data are increasingly used for surveillance. In previous studies, electronic surveillance algorithms for UTI frequently used fever as the only sign of infection in addition to laboratory data (i.e., urine culture).4,5,17 The current CDC National Healthcare Safety Network surveillance definitions of healthcare-associated symptomatic UTI require at least one sign or symptom (temperature >38.0°C, suprapubic tenderness, costovertebral angle pain or tenderness, urinary urgency, urinary frequency or dysuria).18 In this study, fever was not a predominant symptom in the medical records of patients with UTI, and the correlation for fever, as for other symptoms, was low. Therefore, electronic surveillance algorithms using fever alone or in combination with other urinary tract symptoms may be of limited value when documentation is poor. Improved understanding of these issues is necessary as electronic surveillance becomes more common. Specifically, when developing new EMR-based electronic surveillance for UTI, a pilot study to assess the frequency of UTI signs and symptoms and the agreement between self-report and medical record documentation may be required.
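As a purely illustrative sketch of how such an EMR-based rule might be encoded, the snippet below applies a simplified NHSN-style criterion (positive urine culture plus at least one documented sign or symptom). The field names and data layout are hypothetical, and the 50,000 CFU/mL threshold is borrowed from our inclusion criterion; this is not a validated surveillance algorithm.

```python
# Illustrative, simplified NHSN-style rule for flagging symptomatic UTI from EMR
# data: positive urine culture plus at least one documented sign or symptom.
# Field names and the data layout are hypothetical; this is not a validated tool.

UTI_SIGNS_SYMPTOMS = {
    "suprapubic_tenderness",
    "costovertebral_angle_pain_or_tenderness",
    "urinary_urgency",
    "urinary_frequency",
    "dysuria",
}

def flag_symptomatic_uti(record: dict) -> bool:
    """Return True if the (hypothetical) EMR record meets the illustrative rule."""
    positive_culture = record.get("urine_culture_cfu_per_ml", 0) >= 50_000
    febrile = record.get("max_temperature_c", 0.0) > 38.0
    documented = set(record.get("documented_symptoms", []))
    has_sign_or_symptom = febrile or bool(documented & UTI_SIGNS_SYMPTOMS)
    return positive_culture and has_sign_or_symptom

# A positive culture with no documented symptoms is not flagged, even if the
# patient in fact experienced symptoms that simply were not written down.
example = {
    "urine_culture_cfu_per_ml": 100_000,
    "max_temperature_c": 37.2,
    "documented_symptoms": [],
}
print(flag_symptomatic_uti(example))  # False
```

Under any rule of this form, symptomatic infections whose symptoms go undocumented are missed entirely, which is precisely the limitation suggested by the low positive agreement observed in this study.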
Our study has limitations beyond its small sample size. It is a single-center study at a large tertiary care center with mixed use of paper and electronic records, and it included only patients with E. coli bacteriuria; the results are therefore not generalizable. We did not distinguish between documentation in physicians’ paper charts and the EMR when assessing agreement between SR and IP; however, it is likely that either source alone would have shown even lower agreement.
In conclusion, there was only low to fair correlation between self-report and healthcare providers’ documentation of UTI symptoms. As medical records are a vital source of information for clinicians and researchers, and clinical data from EMR are increasingly being used for infection surveillance, strategies to improve documentation are needed.
Acknowledgments
Financial Support: JM was supported by the NIH CTSA (UL1RR024992) and was the recipient of a KL2 Career Development Grant (KL2RR024994); he is currently supported by a BIRCWH KL2 career development award (5K12HD001459-13). He is also the section leader for a subproject of the CDC Prevention Epicenters Program (grant CU54 CK 000162; PI: Fraser). In addition, JM is funded by the Barnes-Jewish Hospital Patient Safety and Quality Fellowship Program and received a joint research grant from the Barnes-Jewish Hospital Foundation and Washington University’s Institute of Clinical and Translational Sciences. JPH was supported by a Burroughs-Wellcome Career Award for Medical Scientists and R01DK099534.
Footnotes
Conflicts of interest: All authors report no conflicts of interest relevant to this article.
References
- 1. Weed L. Medical records that guide and teach. N Engl J Med. 1968;278:593–600. doi: 10.1056/NEJM196803142781105.
- 2. Chaudhry B, Wang J, Wu S, Maglione M, Mojica W, Roth E, et al. Systematic review: impact of health information technology on quality, efficiency, and costs of medical care. Ann Intern Med. 2006;144:742–752. doi: 10.7326/0003-4819-144-10-200605160-00125.
- 3. Leal J, Laupland KB. Validity of electronic surveillance systems: a systematic review. J Hosp Infect. 2008;69(3):220–9. doi: 10.1016/j.jhin.2008.04.030.
- 4. Lo Y-S, Lee W-S, Liu C-T. Utilization of electronic medical records to build a detection model for surveillance of healthcare-associated urinary tract infections. J Med Syst. 2013;37(2):9923. doi: 10.1007/s10916-012-9923-2.
- 5. Landers T, Apte M, Hyman S, Furuya Y, Glied S, Larson E. A comparison of methods to detect urinary tract infections using electronic data. Jt Comm J Qual Patient Saf. 2010;36(9):411–417. doi: 10.1016/s1553-7250(10)36060-0.
- 6. Bent S, Simel DL, Fihn SD. Does this woman have an acute uncomplicated urinary tract infection? JAMA. 2002;287(20):2701–2710. doi: 10.1001/jama.287.20.2701.
- 7. Chen Y, Li H, Li Y, Xie D, Wang Z, Yang F, et al. Resemblance of symptoms for major depression assessed at interview versus from hospital record review. PLoS One. 2012;7(1):e28734. doi: 10.1371/journal.pone.0028734.
- 8. Barbara AM, Loeb M, Dolovich L, Brazil K, Russell M. Agreement between self-report and medical records on signs and symptoms of respiratory illness. Prim Care Respir J. 2012;21(2):145–152. doi: 10.4104/pcrj.2011.00098.
- 9. Xu J, Schwartz K, Monsur J, Northrup J, Neale AV. Patient-clinician agreement on signs and symptoms of “strep throat”: a MetroNet study. Fam Pract. 2004;21(6):599–604. doi: 10.1093/fampra/cmh604.
- 10. Pakhomov S, Jacobsen SJ, Chute CG, Roger VL. Agreement between patient-reported symptoms and their documentation in the medical record. Am J Manag Care. 2008;14(8):530–539.
- 11. DeVon HA, Ryan CJ, Zerwic JJ. Is the medical record an accurate reflection of patients’ symptoms during acute myocardial infarction? West J Nurs Res. 2004;26(5):547–60. doi: 10.1177/0193945904265452.
- 12. Fromme EK, Eilers KM, Mori M, Hsieh Y-C, Beer TM. How accurate is clinician reporting of chemotherapy adverse effects? A comparison with patient-reported symptoms from the Quality-of-Life Questionnaire C30. J Clin Oncol. 2004;22(17):3485–90. doi: 10.1200/JCO.2004.03.025.
- 13. Strömgren A, Groenvold M, Pedersen L, Olsen A, Spile M, Sjøgren P. Does the medical record cover the symptoms experienced by cancer patients receiving palliative care? A comparison of the record and patient self-rating. J Pain Symptom Manage. 2001;21(3):189–96. doi: 10.1016/s0885-3924(01)00264-0.
- 14. Fanous AH, Amdur RL, O’Neill AF, Walsh D, Kendler KS. Concordance between chart review and structured interview assessments of schizophrenic symptoms. Compr Psychiatry. 2012;53(3):275–9. doi: 10.1016/j.comppsych.2011.04.006.
- 15. Cohen J. A coefficient of agreement for nominal scales. Educ Psychol Meas. 1960;20(1):37–46.
- 16. Giesen LGM, Cousins G, Dimitrov BD, van de Laar FA, Fahey T. Predicting acute uncomplicated urinary tract infection in women: a systematic review of the diagnostic accuracy of symptoms and signs. BMC Fam Pract. 2010;11(1):78. doi: 10.1186/1471-2296-11-78.
- 17. Choudhuri JA, Pergamit RF, Chan JD, Schreuder A, McNamara E, Lynch J, et al. An electronic catheter-associated urinary tract infection surveillance tool. Infect Control Hosp Epidemiol. 2011;32(8):757–62. doi: 10.1086/661103.
- 18. CDC. Urinary tract infection (catheter-associated urinary tract infection [CAUTI] and non-catheter-associated urinary tract infection [UTI]) and other urinary system infection [USI] events. In: Device-associated Module. Atlanta, GA: CDC; 2014. p. 7-1 to 7-15.