Abstract
Hospital managers and researchers increasingly use electronic databases to study the utilization, effectiveness, and outcomes of healthcare provision. Although several studies have examined the accuracy of electronic databases developed for general administrative purposes, few have examined databases created to document the care provided by individual hospitals. In this study, we assessed how accurately an electronic database at a major teaching hospital in Eastern Province, Saudi Arabia, documented the 17 comorbidities constituting the Charlson index as recorded in paper charts by care providers. Using the hospital electronic database, we randomly selected the records of 1,019 patients admitted to the hospital and compared them for accuracy with the corresponding paper charts. The electronic database did not differ significantly from the paper charts in prevalence for 9 conditions but differed significantly for 8 conditions. Kappa (K) values of agreement ranged from a high of 0.91 to a low of 0.09. Of the 17 comorbidities, 10 showed substantial or excellent agreement between the electronic database and the paper chart data, and only one showed poor agreement. Sensitivity ranged from a high of 100.0 percent to a low of 6.0 percent, and specificity exceeded 93 percent for all comorbidities. These results suggest that the hospital electronic database agrees reasonably well with patient chart data and can play a role in healthcare planning and research. The analysis conducted in this study could be replicated in individual institutions to assess the accuracy of an electronic database before deciding on its utility for planning or research.
Key words: accuracy, agreement, Charlson index, hospital electronic database, Saudi Arabia
Introduction
Traditionally, a detailed manual chart review has been used to abstract data from medical records when patient information is needed after discharge. Increasingly, however, hospital electronic databases are being used by policy makers and planners to measure healthcare demands and needs and to plan the provision of care. These databases are also used by researchers to study healthcare quality, outcomes, and the utilization of healthcare services.1 Factors contributing to the increasing use of electronic databases include their ready availability for analysis, their relatively low cost, and the large quantities of clinical information they offer about the care provided during patient contact with the healthcare system.2,3,4
The presence of comorbid conditions has a major influence on utilization and outcomes of care. As a result, researchers have developed scoring systems to account for the number of comorbidities, which can be used to adjust for the patient mix when measuring outcomes or care utilization.5,6 One of the most widely used and validated scoring systems was developed by Charlson and colleagues.7 This system was originally developed to predict one-year mortality in a cohort of medical patients while taking into consideration the number and severity of comorbid diseases.
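To make the index concrete, the sketch below shows how a Charlson score could be computed for one patient: each comorbidity present contributes a fixed weight, and the weights are summed. This is a minimal illustration, assuming the commonly cited weights from the original Charlson et al. index; the function and condition names are hypothetical and are not taken from the study's database.

```python
# Illustrative Charlson score: each condition present contributes a fixed
# weight, and the weights are summed. Weights are the commonly cited values
# from Charlson et al. (1987); names here are hypothetical.
CHARLSON_WEIGHTS = {
    "myocardial_infarction": 1,
    "congestive_heart_failure": 1,
    "peripheral_vascular_disease": 1,
    "cerebrovascular_disease": 1,
    "dementia": 1,
    "chronic_pulmonary_disease": 1,
    "rheumatic_disease": 1,
    "peptic_ulcer_disease": 1,
    "mild_liver_disease": 1,
    "diabetes_without_complication": 1,
    "diabetes_with_complication": 2,
    "hemiplegia_paraplegia": 2,
    "renal_disease": 2,
    "any_malignancy": 2,
    "moderate_severe_liver_disease": 3,
    "metastatic_solid_tumor": 6,
    "aids_hiv": 6,
}

def charlson_score(comorbidities: set[str]) -> int:
    """Sum the weights of the comorbidities recorded for one patient."""
    return sum(CHARLSON_WEIGHTS[c] for c in comorbidities)

# Example: diabetes with complications plus renal disease scores 2 + 2 = 4.
print(charlson_score({"diabetes_with_complication", "renal_disease"}))
```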
National surveys in the United States have shown that the level of information technology (IT) adoption, including that of electronic medical records (EMRs), is still limited in most clinical settings.8 The Healthcare Information and Management Systems Society (HIMSS) analytic database indicated that about 65 percent of hospitals are at stage 3 of EMR implementation or below, and only 1.8 percent have adopted the complete use of EMRs.9 As a result, many hospitals still use combined paper and EMR systems. Some information about patient care (such as laboratory orders and results, medications received, and procedures performed) is entered into the electronic system during the patient's hospital stay. Other information about the episode of care, such as details of patient diagnoses and comorbidities, is transferred from the paper chart to the electronic database after the patient's discharge; these diagnoses are usually coded using the International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM)10 and, in some countries, now the International Classification of Diseases, Tenth Revision (ICD-10).11,12
With the wider use of electronic databases in healthcare research, the accuracy and completeness of such data have become important concerns, given the potential for error in coding diagnoses for entry into these databases. Studies from North America, Europe, and Australia have investigated the accuracy of diagnostic coding of the Charlson comorbidity conditions in administrative databases compared with diagnoses obtained from paper medical records.13,14,15 However, studies from other countries or from other types of administrative databases are still lacking. Because the quality of administrative data varies across hospitals, regions, and countries, more studies from different countries are needed to exchange information, to allow the development of analytic tools that could be standardized and adopted across countries, and to help in understanding the strengths and weaknesses of various healthcare systems.16
In Saudi Arabia, despite a strong government push and a large budget, hospitals have lagged behind their US counterparts in the use of EMR systems.17 Meanwhile, in a process similar to the upcoming US transition to the International Classification of Diseases, Tenth Revision, Clinical Modification (ICD-10-CM) and the International Classification of Diseases, Tenth Revision, Procedure Coding System (ICD-10-PCS), Saudi Arabia is transitioning to ICD-10-CM/PCS; ICD-9-CM remains the predominant coding system in both countries. In the United States, the Department of Health and Human Services (HHS) recently announced a final rule that delays the required ICD-10-CM/PCS compliance date from October 1, 2013, to October 1, 2014.18
We conducted this study to assess the accuracy of the hospital electronic database in a university hospital in Saudi Arabia. To achieve that aim, we evaluated the extent to which ICD-9-CM diagnostic codes constituting the Charlson index used in the hospital electronic database accurately reflected the patients’ comorbid conditions as documented in the patients’ paper medical records.
Methods
Study Design
This cross-sectional study assessed the agreement between comorbidities for the same patient obtained from two data sources: paper medical charts and the hospital electronic database. Both data sources contained records for all patients discharged from the hospital, with all diagnoses and procedures recorded for each patient. The study was approved and funded by the Deanship of Scientific Research at the University of Dammam.
Study Setting and Population
The study was conducted using the records of patients admitted from January 1, 2008, to December 31, 2010, to the Department of General Medicine at the teaching hospital affiliated with the University of Dammam in Eastern Province, Saudi Arabia. In 1998, the hospital introduced a QuadraMed system integrated with a clinical decision support system.19 Since the implementation of this system, all physicians have been required to enter medication orders and review laboratory results electronically; some information, however, is still entered manually in paper charts.
Data Sources
We identified the electronic medical records of patients discharged from the Department of General Medicine during the three-year period from January 1, 2008, to December 31, 2010. To ensure at least 1,000 participants in the final analysis, a random number generator was used to select 1,050 patients from the electronic database, compensating for the anticipated unavailability of some paper charts. The abstracted information included patient demographic characteristics (such as date of birth and nationality), discharge status (including in-hospital death), dates of admission and discharge, and all ICD-9-CM diagnosis and procedure codes. Patients younger than 18 years were excluded. Medical conditions and procedures had been coded into the hospital electronic database using ICD-9-CM by experienced coders who read through the paper charts. The corresponding paper charts were requested from the hospital's medical record department. All diagnoses in both the paper charts and the electronic database were abstracted and included in the analysis.
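As an illustration of the sampling step, the following minimal sketch draws an oversampled random selection so that roughly 1,000 usable records remain after some paper charts prove unavailable. The pool size and record identifiers are assumptions for illustration, not the hospital's actual data.

```python
# Hypothetical sketch of the oversampled random selection: draw 1,050
# records without replacement so that at least ~1,000 remain after some
# paper charts prove unavailable. Pool size and IDs are illustrative.
import random

random.seed(2012)                 # fixed seed so the draw is reproducible
eligible_ids = range(1, 25_001)   # assumed pool of eligible discharge records
sampled_ids = random.sample(eligible_ids, k=1_050)
print(len(sampled_ids))           # 1050
```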
Chart Review
Two health information management graduates with experience in medical record review and data extraction were assigned to review the selected medical charts in their entirety. For each patient, the principal diagnosis and all 17 comorbidities included in the Charlson index (see Table 1) were searched for and collected from the chart. The reviewers abstracted information from the cover page, discharge summary, physician notes, consultation reports, laboratory results, and physician orders. Other patient information, such as demographic characteristics and occurrence of death during hospitalization, was also abstracted. To ensure consistency in the abstraction process, 20 charts other than those included in the final study analysis were cross-abstracted by both reviewers and were checked by the author for agreement (K = 0.82). The chart reviewers were blinded to the contents of the electronic database for the study patients.
Table 1. ICD-9-CM Codes Defining the 17 Comorbidities of the Charlson Index
Comorbidity | Codes |
---|---|
Myocardial infarction | 410.x, 412.x |
Congestive heart failure | 398.91, 402.01, 402.11, 402.91, 404.01, 404.03, 404.11, 404.13, 404.91, 404.93, 425.4–425.9, 428.x |
Peripheral vascular disease | 093.0, 437.3, 440.x, 441.x, 443.1–443.9, 447.1, 557.1, 557.9, V43.4 |
Cerebrovascular disease | 362.34, 430.x–438.x |
Dementia | 290.x, 294.1, 331.2 |
Chronic pulmonary disease | 416.8, 416.9, 490.x–505.x, 506.4, 508.1, 508.8 |
Hemiplegia/paraplegia | 334.1, 342.x, 343.x, 344.0–344.6, 344.9 |
Rheumatic disease | 446.5, 710.0–710.4, 714.0–714.2, 714.8, 725.x |
Peptic ulcer disease | 531.x–534.x |
Mild liver disease | 070.22, 070.23, 070.32, 070.33, 070.44, 070.54, 070.6, 070.9, 570.x, 571.x, 573.3, 573.4, 573.8, 573.9, V42.7 |
Diabetes without chronic complication | 250.0–250.3, 250.8, 250.9 |
Diabetes with chronic complication | 250.4–250.7 |
Renal disease | 403.01, 403.11, 403.91, 404.02, 404.03, 404.12, 404.13, 404.92, 404.93, 582.x, 583.0–583.7, 585.x, 586.x, 588.0, V42.0, V45.1, V56.x |
Any malignancy, including lymphoma and leukemia, except malignant neoplasm of skin | 140.x–172.x, 174.x–195.8, 200.x–208.x, 238.6 |
Moderate or severe liver disease | 456.0–456.2, 572.2–572.8 |
Metastatic solid tumor | 196.x–199.x |
AIDS/HIV | 042.x–044.x |
Source: Quan, H., V. Sundararajan, P. Halfon, et al. “Coding Algorithms for Defining Comorbidities in ICD-9-CM and ICD-10 Administrative Data.” Medical Care 43 (2005): 1130–39.
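Programmatically, the mapping in Table 1 amounts to flagging a comorbidity whenever one of a patient's recorded ICD-9-CM codes falls under a listed prefix or range. Below is a minimal sketch for two of the comorbidities only; the dictionary, function name, and storage format (codes kept as decimal strings) are assumptions for illustration, not the hospital's actual schema.

```python
# Minimal sketch of applying Table 1's coding algorithm: a comorbidity is
# flagged if any of a patient's ICD-9-CM codes starts with one of its listed
# prefixes (e.g., "410.x" becomes the prefix "410."). Only two comorbidities
# are shown; real use would cover all 17 plus exact-code and range matching.
COMORBIDITY_PREFIXES = {
    "myocardial_infarction": ("410.", "412."),
    "dementia": ("290.", "294.1", "331.2"),
}

def flag_comorbidities(icd9_codes: list[str]) -> set[str]:
    """Return the comorbidities whose code prefixes match any recorded code."""
    return {
        name
        for name, prefixes in COMORBIDITY_PREFIXES.items()
        if any(code.startswith(prefixes) for code in icd9_codes)
    }

print(flag_comorbidities(["410.71", "250.00"]))  # {'myocardial_infarction'}
```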
Statistical Analysis
Descriptive statistics were used to calculate the prevalence of 17 comorbidities in both the electronic database and the paper chart data, and the results were then compared using McNemar's test. To assess the accuracy of the electronic database in reproducing the chart data, sensitivity, specificity, positive predictive value, and negative predictive value were calculated using the chart data as the gold standard. Sensitivity was calculated as a measure of the accuracy of recording comorbidities in the electronic database when they were present in the paper chart. Specificity was calculated to determine the accuracy of reporting the absence of the condition in the electronic database when it was also absent from the paper chart. Positive predictive values and negative predictive values were also calculated to determine the extent to which a comorbidity present in or absent from the electronic database was also present in or absent from, respectively, the paper chart. Furthermore, to test the agreement between the two databases, we calculated kappa (K) statistics for individual comorbidities. To interpret the extent of agreement greater than chance, kappa values were categorized into five categories according to Landis and Koch's method:20 ≤0.20 (poor agreement), 0.21–0.40 (fair agreement), 0.41–0.60 (moderate agreement), 0.61–0.80 (substantial agreement), and 0.81–1.00 (excellent agreement). Analysis was conducted using Stata software (Stata Corporation, College Station, Texas). A p-value less than or equal to .05 was considered to be statistically significant.
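For clarity, the sketch below computes the five indices described above from a single comorbidity's 2×2 table, with the chart data as the gold standard. The cell counts are made-up illustrative numbers, and scipy is assumed to be available only for the McNemar p-value.

```python
# Accuracy indices from one comorbidity's 2x2 table (chart = gold standard).
# Cell counts a-d are illustrative, not study data.
from scipy.stats import chi2

a, b, c, d = 120, 10, 30, 859   # a=both yes, b=EHR only, c=chart only, d=both no
n = a + b + c + d

sensitivity = a / (a + c)   # comorbidity in the chart also in the database
specificity = d / (b + d)   # absence in the chart also absent in the database
ppv = a / (a + b)           # database-positive confirmed by the chart
npv = d / (c + d)           # database-negative confirmed by the chart

# Cohen's kappa: observed agreement beyond agreement expected by chance.
p_obs = (a + d) / n
p_exp = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2
kappa = (p_obs - p_exp) / (1 - p_exp)

# McNemar's test compares the discordant cells (b vs. c); the statistic
# (b - c)^2 / (b + c) is referred to a chi-square distribution with 1 df.
mcnemar_stat = (b - c) ** 2 / (b + c)
p_value = chi2.sf(mcnemar_stat, df=1)

print(f"sens={sensitivity:.2f} spec={specificity:.2f} "
      f"ppv={ppv:.2f} npv={npv:.2f} kappa={kappa:.2f} p={p_value:.4f}")
```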
Results
Of the 1,050 randomly selected charts, 31 were not available in the medical record department because the patients were hospitalized at the time of the request. The remaining 1,019 paper charts were manually reviewed, successfully linked to the corresponding records in the electronic database, and included in the analysis.
Table 2 shows the characteristics of the 1,019 study participants. The average patient age was 49.5 years, and about 64 percent were men. The majority of patients were Saudi (81 percent), followed by non-Arab (10 percent) and Arab non-Saudi (9 percent) patients. Five percent died during their hospital stay, 88 percent were discharged alive, and 7 percent were discharged against medical advice (DAMA). The largest group of patients (36 percent) had 5 to 10 comorbidities; only 10 percent had more than 10.
Table 2. Characteristics of the Study Participants (N = 1,019)
Characteristic | Mean ± SD or No. (%) |
---|---|
Age, years | 49.5 ± 18.7 |
Sex | |
Male | 650 (63.8) |
Female | 369 (36.2) |
Nationality | |
Saudi | 826 (81.1) |
Arab non-Saudi | 87 (8.5) |
Non-Arab | 106 (10.4) |
Discharge type | |
Alive | 901 (88.4) |
In-hospital death | 52 (5.1) |
DAMA | 66 (6.5) |
Discharge year | |
2008 | 295 (29.0) |
2009 | 412 (40.4) |
2010 | 312 (30.6) |
Number of diseases | |
1–2 | 248 (24.4) |
3–4 | 299 (29.3) |
5–10 | 369 (36.2) |
>10 | 103 (10.1) |
Abbreviations: DAMA, discharged against medical advice; SD, standard deviation.
Table 3 shows the prevalence of the 17 comorbidities included in the Charlson index according to data source (electronic database and paper patient chart). The prevalence of 9 of the 17 comorbidities did not differ significantly between the two data sources (p > .05). The electronic database underreported the prevalence of six conditions (myocardial infarction, 12.8 percent vs. 40.4 percent; hemiplegia/paraplegia, 1.5 percent vs. 5.2 percent; diabetes, 32.8 percent vs. 35.5 percent; diabetes with chronic complication, 7.3 percent vs. 19.3 percent; mild liver disease, 0.9 percent vs. 4.9 percent; and renal disease, 8.9 percent vs. 10.5 percent; all p ≤ .02) and overreported the prevalence of two conditions (cerebrovascular disease, 14.3 percent vs. 12.0 percent; rheumatologic disease, 1.7 percent vs. 0.7 percent; both p < .001).
Table 3. Prevalence of the 17 Charlson Comorbidities by Data Source
Condition | Chart Data, No. (%) | Administrative (Electronic) Data, No. (%) | Difference, Percentage Points (Chart Minus Administrative) | P-value |
---|---|---|---|---|
Myocardial infarction | 412 (40.4) | 130 (12.8) | 27.6 | <.001 |
Congestive heart failure | 65 (6.4) | 57 (5.6) | 0.8 | .32 |
Peripheral vascular disease | 8 (0.8) | 6 (0.6) | 0.2 | .48 |
Cerebrovascular disease | 122 (12.0) | 146 (14.3) | −2.3 | <.001 |
Dementia | 11 (1.1) | 10 (0.9) | 0.2 | .71 |
Chronic pulmonary disease | 127 (12.5) | 125 (12.3) | 0.2 | .77 |
Hemiplegia/paraplegia | 53 (5.2) | 15 (1.5) | 3.7 | <.001 |
Rheumatologic disease | 7 (0.7) | 17 (1.7) | −1.0 | <.001 |
Peptic ulcer disease | 30 (2.9) | 36 (3.5) | −0.6 | .22 |
Diabetes | 362 (35.5) | 334 (32.8) | 2.7 | .01 |
Diabetes with chronic complication | 197 (19.3) | 74 (7.3) | 12.0 | <.001 |
Mild liver disease | 50 (4.9) | 9 (0.9) | 4.0 | <.001 |
Moderate liver disease | 17 (1.7) | 19 (1.9) | −0.2 | .65 |
Renal disease | 107 (10.5) | 91 (8.9) | 1.6 | .02 |
Any malignancy | 46 (4.5) | 41 (4.0) | 0.5 | .35 |
Metastatic solid tumor | 12 (1.2) | 11 (1.1) | 0.1 | .56 |
AIDS/HIV | 6 (0.6) | 5 (0.5) | 0.1 | .32 |
Table 4 presents five quantitative indices of the accuracy of the electronic database in reproducing the comorbidities recorded in the paper charts. The kappa values indicated excellent agreement (K = 0.81–1.00) between the electronic data and the paper charts for three conditions (cerebrovascular disease, metastatic solid tumor, and AIDS/HIV), substantial agreement (K = 0.61–0.80) for seven comorbidities, moderate agreement (K = 0.41–0.60) for three, and fair agreement (K = 0.21–0.40) for three. Only mild liver disease showed poor agreement (K ≤ 0.20). Sensitivity also varied by comorbidity, from a high of 100 percent for rheumatologic disease to a low of 6 percent for mild liver disease. Of the 17 comorbidities included in the Charlson index, six had sensitivity above 80 percent as recorded in the electronic data (cerebrovascular disease, chronic pulmonary disease, rheumatologic disease, diabetes, metastatic solid tumor, and AIDS/HIV), whereas six had sensitivity below 50 percent (myocardial infarction, peripheral vascular disease, hemiplegia/paraplegia, diabetes with chronic complication, mild liver disease, and moderate liver disease). Specificity exceeded 93.0 percent for all 17 comorbidities, indicating that the electronic database performed very accurately when a condition was absent from the paper chart.
Table 4. Accuracy of the Electronic Database Relative to Paper Chart Data (Gold Standard)
Condition | Kappa Value | Sensitivity (%) | Specificity (%) | Positive Predictive Value (%) | Negative Predictive Value (%) |
---|---|---|---|---|---|
Myocardial infarction | 0.34 | 30.6 | 99.3 | 96.9 | 67.8 |
Congestive heart failure | 0.61 | 58.5 | 98.0 | 66.7 | 97.2 |
Peripheral vascular disease | 0.42 | 37.5 | 99.7 | 50.0 | 99.5 |
Cerebrovascular disease | 0.83 | 93.5 | 96.4 | 78.1 | 99.1 |
Dementia | 0.66 | 63.6 | 99.7 | 70.0 | 99.6 |
Chronic pulmonary disease | 0.78 | 80.3 | 97.4 | 81.6 | 97.2 |
Hemiplegia/paraplegia | 0.22 | 15.1 | 99.3 | 53.3 | 95.5 |
Rheumatologic disease | 0.58 | 100.0 | 99.0 | 41.2 | 100.0 |
Peptic ulcer disease | 0.62 | 70.0 | 98.5 | 58.3 | 99.1 |
Diabetes | 0.75 | 80.1 | 93.3 | 86.8 | 89.5 |
Diabetes with chronic complication | 0.34 | 27.9 | 97.7 | 74.3 | 85.0 |
Mild liver disease | 0.09 | 6.0 | 99.4 | 33.3 | 95.3 |
Moderate liver disease | 0.43 | 47.1 | 98.9 | 42.1 | 99.1 |
Renal disease | 0.75 | 72.0 | 98.5 | 84.6 | 96.8 |
Any malignancy | 0.77 | 73.9 | 99.3 | 82.9 | 98.8 |
Metastatic solid tumor | 0.87 | 83.3 | 99.9 | 90.9 | 99.8 |
AIDS/HIV | 0.91 | 83.3 | 100.0 | 100.0 | 99.9 |
Table 4 also presents positive and negative predictive values, which indicate the extent to which a comorbidity present in or absent from the electronic database was, respectively, also present in or absent from the paper chart. Positive predictive values were low (≤50 percent) for four comorbidities (peripheral vascular disease, rheumatologic disease, mild liver disease, and moderate liver disease) but were 80 percent or greater for seven comorbidities (myocardial infarction, chronic pulmonary disease, diabetes, renal disease, any malignancy, metastatic solid tumor, and AIDS/HIV). All conditions except myocardial infarction had a high negative predictive value (≥85.0 percent), indicating that absence from the electronic database reliably reflected absence from the paper chart.
When we compared our study findings with those reported by Quan et al.21 and Kieszak et al.,22 we found that the kappa values for four conditions (myocardial infarction, hemiplegia/paraplegia, diabetes with chronic complication, and mild liver disease) were in higher kappa categories in the study by Quan et al. than in ours (see Table 5). On the other hand, the kappa values of five other conditions (peripheral vascular disease, cerebrovascular disease, dementia, renal disease, and AIDS/HIV) were in higher kappa categories in our study than in that of Quan et al. The kappa values calculated for all conditions in the study by Kieszak et al. were lower than the corresponding kappa values in our study, although three were in the same kappa value category as our study.
Table 5. Kappa Values in This Study Compared with the Quan and Kieszak Studies
Condition | This Study | Quan Study (a) | Kieszak Study (b) |
---|---|---|---|
Myocardial infarction | 0.34 | 0.59 | 0.22 |
Congestive heart failure | 0.61 | 0.80 | 0.38 |
Peripheral vascular disease | 0.42 | 0.34 | 0.22 |
Cerebrovascular disease | 0.83 | 0.50 | – |
Dementia | 0.66 | 0.42 | 0.26 |
Chronic pulmonary disease | 0.78 | 0.72 | 0.64 |
Hemiplegia/paraplegia | 0.22 | 0.55 | – |
Rheumatologic disease | 0.58 | 0.57 | 0.20 |
Peptic ulcer disease | 0.62 | 0.63 | 0.12 |
Diabetes | 0.75 | 0.74 | 0.68 |
Diabetes with chronic complication | 0.34 | 0.58 | 0.16 |
Mild liver disease | 0.09 | 0.53 | – |
Moderate liver disease | 0.43 | 0.47 | – |
Renal disease | 0.75 | 0.49 | 0.29 |
Any malignancy | 0.77 | 0.78 | 0.23 |
Metastatic solid tumor | 0.87 | 0.87 | 0.20 |
AIDS/HIV | 0.91 | 0.78 | – |
Notes: Kappa categories: ≤0.20 (poor agreement); 0.21–0.40 (fair agreement); 0.41–0.60 (moderate agreement); 0.61–0.80 (substantial agreement); 0.81–1.00 (excellent agreement). A dash indicates a condition not studied.
(a) Quan, H., G. A. Parsons, and W. A. Ghali. "Validity of Information on Comorbidity Derived from ICD-9-CM Administrative Data." Medical Care 40 (2002): 675–85.
(b) Kieszak, S. M., W. D. Flanders, A. S. Kosinski, C. C. Shipp, and H. Karp. "A Comparison of the Charlson Comorbidity Index Derived from Medical Record Data and Administrative Billing Data." Journal of Clinical Epidemiology 52 (1999): 137–42.
Discussion
Our study examined how accurately a hospital electronic database captured the comorbidities documented in paper charts. Using the 17 comorbidities included in the Charlson index, we found that the overall accuracy of the electronic database was reasonably good. Although the prevalence reported by the electronic database did not differ significantly from that in the paper charts for the majority of the comorbidities, the database tended to underreport prevalence when the two data sources disagreed. Of the 17 comorbidities, the electronic database showed substantial or excellent agreement with the paper chart data for 10 conditions, and only one showed poor agreement.
Our study found that the difference in prevalence (whether higher or lower) between the hospital electronic database and the corresponding paper charts exceeded 5 percentage points for only two of the 17 conditions included in the Charlson index. When the accuracy results for the 17 conditions were compared with those of Quan et al.,23 four kappa values reported by Quan et al. fell in higher categories than ours, and five fell in higher categories in our study. These results suggest that the coding accuracy of our electronic database is probably similar to, or perhaps better than, that of the Canadian administrative data, and more accurate than the US administrative data, as demonstrated by higher kappa categories in our study for nine of the 12 conditions studied by Kieszak et al.24 These variations in accuracy between studies possibly reflect different types of administrative data. In our study, the data are created as part of normal hospital operations and are intended for internal use in quality assurance, utilization studies, and research, in contrast to other studies' use of claims data, which may contain information intended to maximize reimbursement.25 Other possible explanations include variation in the clarity and completeness of physician documentation and in the experience and knowledge of the coders, as well as the lack of standardized guidelines for coding comorbidities across institutions and countries. Quan et al.26 suggested that the better accuracy for comorbidities in the Canadian administrative data compared with the US data in that study could result, at least in part, from having a medical chart coding department with a single coordinator who supervises coding practices. Interestingly, the same management structure is used for the administration of medical chart coding in our study hospital. A dedicated coding department probably creates an environment that improves attitudes toward documentation, allows closer supervision and continued training,27 and ensures the hiring of qualified coders.
This study found that specificity was high (>93 percent) for all comorbidities, indicating that the electronic data did not include conditions that were not actually present in the patients as reflected in the paper charts. The low rate of false positives associated with high specificity may explain the generally high positive predictive value for most of the conditions except those with low prevalence rates.28
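The dependence of predictive values on prevalence can be made explicit with Bayes' theorem: PPV = (sensitivity × prevalence) / (sensitivity × prevalence + (1 − specificity) × (1 − prevalence)). The short sketch below uses illustrative numbers (not values from Table 4) to show how PPV falls sharply for rare conditions even when specificity is high.

```python
# Why low prevalence depresses PPV even at high specificity: PPV follows
# from Bayes' theorem. Sensitivity/specificity values are illustrative.
def ppv(sens: float, spec: float, prev: float) -> float:
    """Positive predictive value given sensitivity, specificity, prevalence."""
    return sens * prev / (sens * prev + (1 - spec) * (1 - prev))

print(round(ppv(0.80, 0.99, 0.30), 2))   # 0.97 at 30% prevalence
print(round(ppv(0.80, 0.99, 0.01), 2))   # 0.45 at 1% prevalence
```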
Consistent with other studies, we found low sensitivity for chronic conditions including past myocardial infarction, peripheral vascular disease, and hemiplegia/paraplegia. As others have suggested, physicians may overlook documenting patients' established chronic conditions that do not require diagnostic investigation; coders in turn may not code these conditions, because they tend to code only conditions clearly noted by physicians.29
Previous research has found that an increase in the number of codes on the discharge abstract has an inverse relation to coding completeness.30, 31 Our study supported the same finding: we found that a discrepancy between the electronic database and the paper chart was more likely with an increase in the number of comorbidities (data not shown). Having a higher number of comorbidities may lead coders to consider some to be of less importance to enter into the database.32
This study has several potential limitations. First, it was conducted in a single university-affiliated hospital in Eastern Province, Saudi Arabia, and other studies have reported that the accuracy of administrative data varies between teaching and nonteaching hospitals;33,34 the results might therefore not generalize to other medical institutions or other countries. Second, because the study patients all came from the Department of General Medicine, the results may be less applicable to other specialties; other studies have found that documentation in paper charts depends on physician specialty and the type of medical condition.35 Third, this study examined a hospital electronic database created for internal use rather than for reimbursement claims, and databases created for different purposes may vary in data quality.36 In addition, the chart coding used for this study was performed in a medical chart coding department that applies standardized, professionalized coding rules and methods;37 the extent to which this organizational structure affects coding quality remains unknown. Fourth, we evaluated our database using ICD-9-CM despite its replacement by ICD-10 in several countries; however, several studies have demonstrated that ICD-9-CM and ICD-10 administrative data are coded reasonably well and have similar validity in recording clinical conditions.38 Fifth, data abstraction by our research assistants was treated as the gold standard in all validity analyses. We assumed that the research assistants abstracted all available information with complete accuracy and that physicians were perfectly accurate in documenting patients' histories and making diagnoses; this study therefore does not account for possible errors in physician documentation or in information abstraction.
In summary, our study demonstrated that individual comorbidities in the electronic database of the University of Dammam teaching hospital coded according to ICD-9-CM are, on average, accurate for most but not all of the comorbidities in the Charlson index. Researchers and management can utilize the electronic database for research and general administrative purposes, although they must account for the degree of inaccuracy of some of the comorbidities.
Acknowledgment
Many thanks are due to Atheer Al-Saif and Lolwa Al-Mukhailid for abstracting data from the paper charts. We also thank Mr. Mohamed H. Fahmy for his technical support with the electronic database. This study was supported by a grant from the Deanship of Scientific Research at the University of Dammam.
Contributor Information
Adel Youssef, MD, PhD, is an assistant professor in the Department of Health Information Management and Technology, College of Applied Medical Sciences, at the University of Dammam in Saudi Arabia.
Hana Alharthi, PhD, is an assistant professor in the Department of Health Information Management and Technology, College of Applied Medical Sciences, at the University of Dammam in Saudi Arabia.
Notes
1. Dean B. B., Lam J., Natoli J. L., Butler Q., Aguilar D., Nordyke R. J. Review: Use of Electronic Medical Records for Health Outcomes Research: A Literature Review. Medical Care Research and Review. 2009;66:611–38. doi:10.1177/1077558709332440.
2. Deyo R. A., Taylor V. M., Diehr P., et al. Analysis of Automated Administrative and Survey Databases to Study Patterns and Outcomes of Care. Spine. 1994;19:2083S–2091S. doi:10.1097/00007632-199409151-00011.
3. Nuttall M., van der Meulen J., Emberton M. Charlson Scores Based on ICD-10 Administrative Data Were Valid in Assessing Comorbidity in Patients Undergoing Urological Cancer Surgery. Journal of Clinical Epidemiology. 2006;59:265–73. doi:10.1016/j.jclinepi.2005.07.015.
4. Mitiku T. F., Tu K. Using Data from Electronic Medical Records: Theory versus Practice. Healthcare Quarterly. 2008;11:23–25. doi:10.12927/hcq.2008.20088.
5. McGregor J. C., Kim P. W., Perencevich E. N., et al. Utility of the Chronic Disease Score and Charlson Comorbidity Index as Comorbidity Measures for Use in Epidemiologic Studies of Antibiotic-Resistant Organisms. American Journal of Epidemiology. 2005;161:483–93. doi:10.1093/aje/kwi068.
6. Ghali W. A., Hall R. E., Rosen A. K., Ash A. S., Moskowitz M. A. Searching for an Improved Clinical Comorbidity Index for Use with ICD-9-CM Administrative Data. Journal of Clinical Epidemiology. 1996;49:273–78. doi:10.1016/0895-4356(95)00564-1.
7. Charlson M. E., Pompei P., Ales K. L., MacKenzie C. R. A New Method of Classifying Prognostic Comorbidity in Longitudinal Studies: Development and Validation. Journal of Chronic Diseases. 1987;40:373–83. doi:10.1016/0021-9681(87)90171-8.
8. Jha A. K., DesRoches C. M., Kralovec P. D., Joshi M. S. A Progress Report on Electronic Health Records in U.S. Hospitals. Health Affairs. 2010;29:1951–57. doi:10.1377/hlthaff.2010.0502.
9. Healthcare Information and Management Systems Society (HIMSS). HIMSS Analytics: EMR Adoption Model. 3rd quarter 2012. http://www.himssanalytics.org/hc_providers/emr_adoption.asp (accessed November 2012).
10. Deyo R. A., Cherkin D. C., Ciol M. A. Adapting a Clinical Comorbidity Index for Use with ICD-9-CM Administrative Databases. Journal of Clinical Epidemiology. 1992;45:613–19. doi:10.1016/0895-4356(92)90133-8.
11. Sundararajan V., Henderson T., Perry C., Muggivan A., Quan H., Ghali W. A. New ICD-10 Version of the Charlson Comorbidity Index Predicted In-Hospital Mortality. Journal of Clinical Epidemiology. 2004;57:1288–94. doi:10.1016/j.jclinepi.2004.03.012.
12. Quan H., Sundararajan V., Halfon P., et al. Coding Algorithms for Defining Comorbidities in ICD-9-CM and ICD-10 Administrative Data. Medical Care. 2005;43:1130–39. doi:10.1097/01.mlr.0000182534.19832.83.
13. Quan H., Parsons G. A., Ghali W. A. Validity of Information on Comorbidity Derived from ICD-9-CM Administrative Data. Medical Care. 2002;40:675–85. doi:10.1097/00005650-200208000-00007.
14. Henderson T., Shepheard J., Sundararajan V. Quality of Diagnosis and Procedure Coding in ICD-10 Administrative Data. Medical Care. 2006;44:1011–19. doi:10.1097/01.mlr.0000228018.48783.34.
15. Thygesen S. K., Christiansen C. F., Christensen S., Lash T. L., Sorensen H. T. The Predictive Value of ICD-10 Diagnostic Coding Used to Assess Charlson Comorbidity Index Conditions in the Population-based Danish National Registry of Patients. BMC Medical Research Methodology. 2011;11:83. doi:10.1186/1471-2288-11-83.
16. De Coster C., Quan H., Finlayson A., et al. Identifying Priorities in Methodological Research Using ICD-9-CM and ICD-10 Administrative Data: Report from an International Consortium. BMC Health Services Research. 2006;6:77. doi:10.1186/1472-6963-6-77.
17. Almutairi M. S., Alseghayyir R. M., Al-Alshikh A. A., Arafah H. M., Househ M. S. Implementation of Computerized Physician Order Entry (CPOE) with Clinical Decision Support (CDS) Features in Riyadh Hospitals to Improve Quality of Information. Studies in Health Technology and Informatics. 2012;180:776–80.
18. Department of Health and Human Services. "Administrative Simplification: Adoption of a Standard for a Unique Health Plan Identifier; Addition to the National Provider Identifier Requirements; and a Change to the Compliance Date for the International Classification of Diseases, 10th Edition (ICD-10-CM and ICD-10-PCS) Medical Data Code Sets; Final Rule." 45 CFR Part 162. Federal Register. September 5, 2012;77(172):54664–720. Available at http://www.gpo.gov/fdsys/pkg/FR-2012-09-05/pdf/2012-21238.pdf.
19. QuadraMed. "QuadraMed® Provides Healthcare IT and Services That Transform Quality Care into Financial Health." 2012. Available at http://www.quadramed.com/ (accessed October 1, 2012).
20. Landis J. R., Koch G. G. The Measurement of Observer Agreement for Categorical Data. Biometrics. 1977;33:159–74.
21. Quan H., Parsons G. A., Ghali W. A. "Validity of Information on Comorbidity Derived from ICD-9-CM Administrative Data."
22. Kieszak S. M., Flanders W. D., Kosinski A. S., Shipp C. C., Karp H. A Comparison of the Charlson Comorbidity Index Derived from Medical Record Data and Administrative Billing Data. Journal of Clinical Epidemiology. 1999;52:137–42. doi:10.1016/s0895-4356(98)00154-1.
23. Quan H., Parsons G. A., Ghali W. A. "Validity of Information on Comorbidity Derived from ICD-9-CM Administrative Data."
24. Kieszak S. M., Flanders W. D., Kosinski A. S., Shipp C. C., Karp H. "A Comparison of the Charlson Comorbidity Index Derived from Medical Record Data and Administrative Billing Data."
25. Lash T. L., Mor V., Wieland D., Ferrucci L., Satariano W., Silliman R. A. Methodology, Design, and Analytic Techniques to Address Measurement of Comorbid Disease. Journals of Gerontology Series A: Biological Sciences and Medical Sciences. 2007;62:281–85. doi:10.1093/gerona/62.3.281.
26. Quan H., Parsons G. A., Ghali W. A. "Validity of Information on Comorbidity Derived from ICD-9-CM Administrative Data."
27. Hassey A., Gerrett D., Wilson A. A Survey of Validity and Utility of Electronic Patient Records in a General Practice. BMJ. 2001;322:1401–5. doi:10.1136/bmj.322.7299.1401.
28. Brenner H., Gefeller O. Variation of Sensitivity, Specificity, Likelihood Ratios and Predictive Values with Disease Prevalence. Statistics in Medicine. 1997;16:981–91. doi:10.1002/(sici)1097-0258(19970515)16:9<981::aid-sim510>3.0.co;2-n.
29. Chong W. F., Ding Y. Y., Heng B. H. A Comparison of Comorbidities Obtained from Hospital Administrative Data and Medical Charts in Older Patients with Pneumonia. BMC Health Services Research. 2011;11:105. doi:10.1186/1472-6963-11-105.
30. Powell H., Lim L. L., Heller R. F. Accuracy of Administrative Data to Assess Comorbidity in Patients with Heart Disease: An Australian Perspective. Journal of Clinical Epidemiology. 2001;54:687–93. doi:10.1016/s0895-4356(00)00364-4.
31. Iezzoni L. I., Foley S. M., Daley J., Hughes J., Fisher E. S., Heeren T. Comorbidities, Complications, and Coding Bias: Does the Number of Diagnosis Codes Matter in Predicting In-Hospital Mortality? JAMA. 1992;267:2197–2203. doi:10.1001/jama.267.16.2197.
32. Chong W. F., Ding Y. Y., Heng B. H. "A Comparison of Comorbidities Obtained from Hospital Administrative Data and Medical Charts in Older Patients with Pneumonia."
33. Iezzoni L. I., Shwartz M., Moskowitz M. A., Ash A. S., Sawitz E., Burnside S. Illness Severity and Costs of Admissions at Teaching and Nonteaching Hospitals. JAMA. 1990;264:1426–31.
34. Iezzoni L. I., Burnside S., Sickles L., Moskowitz M. A., Sawitz E., Levine P. A. Coding of Acute Myocardial Infarction: Clinical and Policy Implications. Annals of Internal Medicine. 1988;109:745–51. doi:10.7326/0003-4819-109-9-745.
35. Khan N. F., Harrison S. E., Rose P. W. Validity of Diagnostic Coding within the General Practice Research Database: A Systematic Review. British Journal of General Practice. 2010;60:e128–e136. doi:10.3399/bjgp10X483562.
36. Lash T. L., Mor V., Wieland D., Ferrucci L., Satariano W., Silliman R. A. "Methodology, Design, and Analytic Techniques to Address Measurement of Comorbid Disease."
37. Luthi J. C., Troillet N., Eisenring M. C., et al. Administrative Data Outperformed Single-Day Chart Review for Comorbidity Measure. International Journal for Quality in Health Care. 2007;19:225–31. doi:10.1093/intqhc/mzm017.
38. Quan H., Li B., Saunders L. D., et al. Assessing Validity of ICD-9-CM and ICD-10 Administrative Data in Recording Clinical Conditions in a Unique Dually Coded Database. Health Services Research. 2008;43:1424–41. doi:10.1111/j.1475-6773.2007.00822.x.