Abstract
The patient problem list, like administrative claims data, has become an important source of data for decision support, patient cohort identification, and alerting systems. A two-fold intervention to increase capture of problems on the problem list automatically – with minimal disruption to admitting and provider billing workflows – is described. For new patients with no prior data in the electronic health record, the intervention resulted in a statistically significant increase in the number of problems recorded to the problem list (3.8 vs 2.9 problems post- and pre-intervention respectively, p < 2×10−16). The majority of problems were recorded in the first 24 hours of admission. The proportion of patients with at least one problem coded to the problem list within the first 24 hours increased from 94% pre-intervention to 98% post-intervention (chi square 344, p < 2×10−16). ICD9 “V codes” connoting circumstances beyond disease were captured at a higher rate post-intervention than before. Deyo/Charlson comorbidities derived from problem list data were more similar to those derived from claims data after the intervention than before (Jaccard similarity 0.3 post- vs 0.21 pre-intervention, p < 2×10−16). A workflow-sensitive, non-interruptive means of capturing provider-entered codes early in admission can improve both the quantity and content of problems on the patient problem list.
Introduction
Administrative claims remain the bulwark of medical billing, and they are also frequent elements of decision support systems including clinical alerts, comorbidity capture, and predictive models. At the same time, the patient problem list has evolved from the “Problem-Oriented Medical Record” defined by Dr. Lawrence Weed in 1968 to an area of research and application of clinical informatics.(1, 2) Both administrative claims abstracted by billers and patient problems derived from provider documentation or provider-entered codes may fall into similar classification schema like the International Classification of Diseases (ICD).(3) But while the literature on claims and on the problem list has expanded markedly since the 1990s, studies evaluating the intersection between these codes are less common.
Because of their ubiquity and standardization, primarily through ICD9/10, administrative claims and their secondary uses touch on domains across quality, patient safety, decision support, prediction, personalized medicine, and more. A full review of these applications is beyond the scope of this work. A cogent example exists in the interplay of diagnostic and pharmacy claims data in medication management.(4–6) A Dutch study in 2013 demonstrated that up to 38% of drug therapy alerts failed to appear because of missing information in the electronic patient record; of the 442 records considered, disease information was missing in 83%.(7) A systematic review outlined statistically significant reductions in medication errors among patients with renal insufficiency and among pregnant patients in studies of alerting systems in the electronic medical record.(8)
Biller-assigned administrative claims come with their own limitations and biases. From predicting mortality to identifying complications, particularly in work led by Iezzoni, administrative claims alone may be insufficient data sources for particular tasks.(9–11) Code “creep” – billing for more codes than are supportable by documentation – is well-described.(12–14) Another critical limitation of systems relying on administrative claims, however, is timing: these claims are not coded until after a patient has been discharged and are therefore unavailable to the panoply of systems waiting to use them until days post-discharge.
The patient problem list offers some of the advantages of administrative claims – structured data, easily integrated into decision support or quality reporting. Indeed, a coded problem list is a core objective of Meaningful Use, Stage I.(15, 16) A small body of research has linked problem lists to higher quality care, such as increased rates of appropriate prescription of ACE inhibitors or angiotensin receptor blockers for patients with more accurate problem lists; similarly, adding chronic health concerns like obesity to the problem list increases the rate at which providers address these problems with patients.(17, 18) A number of studies since the 1990s have outlined approaches to maximize the accuracy, completeness, and ease of populating problem lists through methods as varied as direct provider documentation of problems, natural language processing, inference rules, and wikis.(19–29) Some of these approaches are computationally intensive, and others may alter workflows.
Study Aims
The aim of this study is to evaluate a two-fold intervention built into existing provider workflows to increase documentation of problems on patient problem lists. One intervention is the conversion of the “Admitting Diagnosis” Field in the Admit Patient Order Set from a free text field to a structured data entry field using a diagnostic synonym lookup table. The second intervention is the alignment of a daily provider workflow – Evaluation/Management Billing (E/M) by providers on their own documentation – with the population of the patient problem list using a tool locally known as iCharge.
Methods
Admitting Diagnosis Problem List Intervention
The initial order set for every patient admitted to the medical center starts with the Admit Patient Order in the inpatient electronic medical record, Allscripts Sunrise Clinical Manager. The admitting diagnosis must be completed, and until fall 2013, this field was a free text entry field pre-filled with free text data obtained through the Admit/Discharge/Transfer Registration System. In November 2013, the field was converted into a structured order entry field that asked providers to continue inputting an admitting diagnosis but provided a means to enter a diagnosis that correlated with an ICD9 code on the patient problem list (Figure 1). A synonym lookup table provided by Intelligent Medical Objects (IMO), a third-party interface terminology,(32) permitted the entry of common abbreviations and natural language to obtain a structured diagnosis for the reason for admission. The order set was not changed in any other way, and minimal education around this intervention was required.
Figure 1.
Activation of the custom Problem List Tool (bottom) via the Admit Order Admit Diagnosis Field (top)
In activating the problem list tool, providers were also able to enter secondary diagnoses at the same time. These diagnoses were categorized as acute, chronic, and prior, to reflect common clinical categorizations of a patient’s past medical history. The problem list tool was also available at the time of inpatient documentation entry and permitted pasting already coded diagnoses into notes directly under “Past Medical History” (Figure 2).
Figure 2.
Problem List Integration into Inpatient Notes. Clicking the arrow (circled in red) pastes the diagnoses into the free text note shown on the left
iCharge – Provider E/M Billing Tool with Transparent Connection to the Patient Problem List
In January 2011, the iCharge tool was implemented in provider billing workflows. The prior billing process incorporated paper billing slips submitted by providers after they had completed requisite inpatient documentation. iCharge converted this process into an electronic “widget” readily accessible in the electronic health record. The iCharge workflow started with the provider selecting the inpatient note for which a bill was to be generated. A link in the iCharge tool opened the note itself for providers to review details and to verify the correct note would generate the bill. iCharge was applicable to initial and follow-up provider documentation throughout a patient’s admission.
Once a bill and an encounter had been selected, iCharge directed providers to link at least one coded diagnosis to the encounter. From 2011 through November 2013, iCharge permitted but did not require the storage of coded diagnoses to patient problem lists. An option was provided to save coded diagnoses for an encounter back to the problem list, known as “Health Issues”.
On November 7, 2013, iCharge was upgraded with a core functionality change – the automatic addition of coded diagnoses to the patient’s longitudinal problem list – referred to as “auto-save” for the purposes of this work. The option for providers to manually save coded diagnoses back to the problem list was removed, but the remainder of iCharge functionality, while enhanced incrementally, remained intact.
Throughout the intervention time period, traditional administrative medical billing was conducted by the billing department of New York Presbyterian Hospital. The billing codes assigned to each inpatient admission post-discharge were stored in the hospital’s clinical data warehouse.
Data Collection
All health issues/problem list data were extracted from problem list tables in the clinical data warehouse. Administrative claims data assigned to inpatient admissions from January 1, 2011, to March 1, 2014, were also extracted for comparison to the problem list. The admissions occurred at Columbia University Medical Center in New York, NY. Admissions to affiliate hospitals or to other hospitals in New York were not included.
In addition to the Problem List Tool and iCharge, it is possible for providers to enter problem list codes into the problem list via “Health Issues”. This step is an additional and rare action in the provider clinical workflow, although precise frequency data for this means of problem entry were not separable from the main data sources in this study.
Evaluation Methods: Change in Problem List Codes Before and After Intervention
The evaluation of the problem list interventions – 1) the Admitting Problem List Tool; 2) iCharge and problem list integration via auto-save – is based on a comparison of the quality and quantity of problem list codes before and after these interventions. Problem List auto-save was activated on November 7, 2013. The four months prior to auto-save activation were compared to the period following its activation. Attributes of problem list entry were collected, including provider type, the time of data entry, and the coded problems themselves.
An important subgroup in this analysis is patients never before encountered in the medical center. While these patients will not have administrative claims assigned until after they are discharged home, they will have problems assigned to the problem list via the Admitting Diagnosis field, iCharge, and the custom Problem List tool.
One subset of ICD9 coding, “V codes” (V01-V91), corresponds to diagnostic circumstances beyond disease definitions. These codes range from “Normal Pregnancy, V22”, to “Artificial Opening Status, V44” (gastrostomy tubes, etc.), to “Housing, household, and economic circumstances, V60”. Factors of mental illness such as suicidal or homicidal ideation are similarly captured by these codes. While not a complete surrogate for psychosocial determinants of health, V codes do capture aspects of a patient’s clinical history that are not well defined in other areas of the ICD9 schema. The number and type of V codes entered via the problem list interventions were recorded and compared.
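As an illustration of how such counts can be tallied, V codes can be identified by a simple prefix check on the coded problem strings. The helper below and its sample codes are a hypothetical sketch, not the study’s actual extraction logic.

```python
# Illustrative sketch: tally ICD9 "V codes" from a list of coded problems
# by their three-character category (e.g. V22, V45). Sample codes are
# hypothetical, not study data.
from collections import Counter

def v_code_counts(codes):
    """Count V codes per three-character ICD9 category prefix."""
    return Counter(c[:3].upper() for c in codes if c.upper().startswith("V"))

counts = v_code_counts(["V22.1", "401.9", "V45.01", "V45.11"])
print(counts)  # Counter({'V45': 2, 'V22': 1})
```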
Evaluation Methods: Comparison of Problem List to Administrative Claims
Problem list data entered by providers were compared to administrative claims data entered by medical billers using the Charlson/Deyo comorbidity index.(30, 31) The Charlson/Deyo comorbidity index aggregates ICD9 coding into clinical categories.(31) ICD9 codes 250-250.3 and 250.7, for example, are aggregated into “Diabetes” while ICD9 codes 250.4-250.6 are aggregated into “Diabetes with Chronic Complications.” Both the ICD9 codes derived from the problem list over the course of an admission and those derived from administrative claims data after discharge were aggregated according to Deyo/Charlson comorbidity indices. While there is no gold standard for how similar the problem list and claims should be, it is possible to compare the similarity of the vectors of Deyo/Charlson comorbidity indices between the problem list and administrative claims.
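The aggregation step can be sketched as a range lookup from ICD9 code to comorbidity category. The sketch below covers only the two diabetes ranges named in the text; the full published Deyo/Charlson mapping spans 17 comorbidity categories and many more code ranges, so this is an abbreviated illustration rather than the study’s implementation.

```python
# Minimal sketch of ICD9-to-Deyo/Charlson aggregation. Only the two
# diabetes ranges cited in the text are shown; the published mapping
# covers 17 comorbidity categories.

def icd9_to_category(code):
    """Return the Deyo/Charlson category for a numeric ICD9 code string."""
    value = float(code)
    # 250-250.3 and 250.7 -> "Diabetes" (per the ranges cited in the text)
    if 250.0 <= value < 250.4 or 250.7 <= value < 250.8:
        return "Diabetes"
    # 250.4-250.6 -> "Diabetes with Chronic Complications"
    if 250.4 <= value < 250.7:
        return "Diabetes with Chronic Complications"
    return None  # code outside this abbreviated mapping

def comorbidities(codes):
    """Aggregate an admission's ICD9 codes into a set of categories."""
    return {cat for cat in map(icd9_to_category, codes) if cat is not None}

print(comorbidities(["250.02", "250.52", "401.9"]))
```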
To compare these categorical binary vectors (the presence or absence of ICD9 codes corresponding to the Deyo/Charlson comorbidities without weighting), the metric of Jaccard similarity was used.(36) In the Jaccard index of similarity, a typical approach for two vectors, A and B, is:

J(A, B) = M11 / (M11 + M10 + M01)

where M11 is the number of attributes present in both A and B, M10 is the number of attributes present only in A, and M01 is the number of attributes present only in B. A higher value of J corresponds to greater similarity between A and B.
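Concretely, for binary comorbidity vectors this computation can be sketched as follows; the two example vectors are hypothetical, not study data.

```python
def jaccard(a, b):
    """Jaccard similarity of two binary vectors (sequences of 0/1).

    J = M11 / (M11 + M10 + M01), where M11 counts attributes present
    in both vectors and M10/M01 count attributes present in only one.
    Attributes absent from both (M00) do not contribute.
    """
    m11 = sum(1 for x, y in zip(a, b) if x == 1 and y == 1)
    m10 = sum(1 for x, y in zip(a, b) if x == 1 and y == 0)
    m01 = sum(1 for x, y in zip(a, b) if x == 0 and y == 1)
    denom = m11 + m10 + m01
    return m11 / denom if denom else 0.0

# Hypothetical claims vs problem-list comorbidity indicator vectors
claims   = [1, 0, 1, 1, 0]
problems = [1, 1, 0, 1, 0]
print(jaccard(claims, problems))  # 2 shared / 4 present in either = 0.5
```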
Data were processed and analyzed in the R statistical environment using the packages “plyr” and “data.table”.(33–35)
This study was approved by the Institutional Review Board.
Results
From January 1, 2010, through March 1, 2014, there were 271,615 inpatient admissions corresponding to 140,538 unique medical record numbers, an average of 5,400 inpatient admissions per month. In any given month, an average of 2,800 new medical record numbers were assigned, corresponding to patients who had not had prior encounters in the medical center and to a very small number who were inadvertently given a second medical record number at registration on a subsequent admission.
Administrative Claims Data
There were 2,785,658 ICD9 codes assigned to the entire study cohort since 2010, an average of ten ICD9 codes per admission. The number of ICD9 codes assigned per admission ranged from a minimum of 1 to a maximum of 51.
With respect to codes assigned by medical billers irrespective of the problem list, the average number of ICD9 codes per admission decreased in the period after auto-save activation compared to the four months before (10.8 ICD9 codes on average prior to auto-save compared to 9 after, p < 2×10−16).
Administrative claims are linked to inpatient admissions in the clinical data warehouse, but they are not assigned until patients have been discharged from the hospital.
Problem List Data
Over the entire study period, there were 319,420 problems entered onto patient problem lists. All providers either completing billing through iCharge (physicians) or entering admission orders through the electronic health record could affect the problem list. There was also an option to bring up the problem list independently in the electronic health record, and any member of the care team could do so. Table 1 outlines the usage of the problem list by provider type over the study time period.
Table 1.
Problem List Entry Count by Provider Type
| Provider Type | Number of Health Issues Entered | Percentage |
|---|---|---|
| Physicians – attendings, fellows | 270,958 | 84.8% |
| Physicians – residents | 19,797 | 6.2% |
| Physician Assistants | 15,012 | 4.7% |
| Nurse Practitioners | 5,719 | 1.8% |
| Medical Students | 496 | 0.2% |
| Physical or Occupational Therapists | 1,308 | 0.4% |
| Nurses | 159 | 0.04% |
| Other or Not Recorded | 5,971 | 1.8% |
The timing of entry of health issues was considered in aggregate and specifically for the period before and after the implementation of Problem List auto-save. The vast majority of problems were coded to the problem list at the time immediately before or after the admission event.
A histogram illustrating the timing of problem list diagnosis entry in the first 24 hours of admission is shown, comparing the four months preceding and the four months following the activation of Problem List auto-save (Figure 4).
Figure 4.

Histogram comparing problem list entry in the first twenty-four hours of admission, comparison between the four months preceding and following the activation of Problem List auto-save
The highest frequency of problem list entry events occurred in the time before patients were officially registered as admitted; that registration time correlates with the arrival of patients on the medical ward. In the time before this registration, providers are preparing for patients’ arrival on the wards by entering pending admission orders, writing admission notes, and billing on admission documentation. The increase in coded problems after the activation of auto-save is apparent in the plot.
The proportion of patients with at least one problem in the problem list within 24 hours of admission increased from 94% before auto-save to 98% after auto-save activation (chi square 344, p < 2×10−16).
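The comparison of these two proportions reflects a standard Pearson chi-square test on a 2×2 contingency table, which for two groups reduces to a closed-form expression. The counts below are hypothetical, chosen only to mirror a 94% vs 98% split; they are not the study’s raw data.

```python
# Pearson chi-square statistic for a 2x2 contingency table [[a, b], [c, d]],
# without continuity correction: chi2 = n(ad - bc)^2 / ((a+b)(c+d)(a+c)(b+d)).

def chi_square_2x2(a, b, c, d):
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical counts of patients with / without a problem coded in the
# first 24 hours, before vs after auto-save (illustrative only).
before = (9400, 600)
after = (9800, 200)
print(round(chi_square_2x2(*before, *after), 1))  # 208.3
```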
To assess whether the increased frequency of problem list entry events was sustained throughout admission, all problem list entry timestamps were normalized to length of stay for their respective admissions. The following histogram illustrates that the impact of health issue auto-save is maintained throughout the admission via subsequent billing and documentation events and not solely on the day of admission itself (Figure 5).
Figure 5.

Histogram showing time of problem list diagnosis entry normalized to percentage of length of stay (LOS) – time of admission is 0% and time of discharge is 100% or 1.
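The normalization behind this plot amounts to expressing each problem-entry timestamp as a fraction of its admission’s length of stay. A sketch with hypothetical timestamps (the function name and example times are illustrative, not the study’s code):

```python
# Normalize a problem-entry timestamp to fraction of length of stay:
# 0.0 at admission, 1.0 at discharge.
from datetime import datetime

def los_fraction(entry_time, admit_time, discharge_time):
    """Position of an entry timestamp within the admission as a fraction."""
    los = (discharge_time - admit_time).total_seconds()
    if los <= 0:
        raise ValueError("discharge must follow admission")
    return (entry_time - admit_time).total_seconds() / los

admit = datetime(2013, 11, 8, 10, 0)
discharge = datetime(2013, 11, 12, 10, 0)  # hypothetical 4-day stay
entry = datetime(2013, 11, 9, 10, 0)       # entered 1 day into the stay
print(los_fraction(entry, admit, discharge))  # 0.25
```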
Change in problem list entry for patients never before admitted to the medical center
For new patients, the number of problems on the problem list increased from 2.9 in the period before activation of auto-save to 3.8 after; this result was statistically significant, p < 2×10−16. A similar result was seen for all patients in the study period, but particular emphasis is placed on those patients who would not have had prior problems on the problem list or prior administrative claims.
Content of problem list codes
In the period following the activation of Problem List auto-save, there was a marked increase in the number of V codes captured (Figure 6). Some of the largest differences are seen in V45 and V42 (corresponding to organ transplantation), V30 (single liveborn), V27 (outcome of delivery), and V22 (normal pregnancy). The distribution of V code counts before and after auto-save activation differed significantly (chi square 920, p value 0.002).
Figure 6.
Barplot showing frequency of ICD9 “V code” entry before and after auto-save
Comparison of Administrative Claims Data to Problem List Data
Jaccard similarity was calculated for Deyo/Charlson comorbidities derived from claims compared to those derived from the problem list, before and after auto-save. The similarity of Deyo/Charlson comorbidities was higher for problems entered after auto-save activation than before (Jaccard similarity 0.3 compared to 0.21, respectively, p < 2×10−16).
Discussion
The primary findings of this work demonstrate that interventions to automatically populate problem lists through extant admitting and billing workflows can increase the number of problems codified to those lists. This effect was achieved without altering or interrupting provider workflows. The majority of problems coded through this method are obtained at the time of admission, not the time of discharge, making them available for use while the patient is still admitted to the hospital. A number of decision support tools, such as drug-condition alerts, are enabled by this finding. Another important finding relates to the content of automatically populated problems on the problem list. V codes, a class of ICD9 codes connoting social, mental health, and even economic factors in a patient’s clinical history, were captured at a higher rate through Problem List auto-save. Finally, a comorbidity index was applied to both administrative claims and to problem list ICD9 codes. While similarity was low overall, it increased between claims and problem list codes with the introduction of auto-save.
This study extends the work of others by demonstrating the impact of problem list interventions that are integrated mid-stream with existing provider workflows and without development of or validation of methods in natural language processing or machine learning. Collecting problem list entries through admit orders and through billing workflows also permitted other members of the care team to contribute to the problem list itself. While physicians contributed the vast majority of problems, the potential remains for a multi-disciplinary problem list incorporating biopsychosocial determinants of health. The effect on V codes of this intervention is one small step in this direction.
Limitations of this study include the length of time that auto-save was active compared to the overall study period. Generalizability is limited by differences in clinical and billing workflows across sites. Deyo/Charlson comorbidity indices were not validated on problem list data, so the results of similarity measurements should be considered in this light. The effect of concomitant educational interventions within departments or specific units was not recorded and may confound these results; the relatively short periods compared before and after auto-save have the secondary benefit of minimizing such effects. The frequency of problem entry via Health Issues, outside of the iCharge and admitting order workflows, could not be measured precisely in this study, although the overall contribution was felt to be small. It is also worth noting that the ability to enter problems via Health Issues was not modified during the study time period, so any baseline rate of use for some providers would be reflected in the data for both the pre- and post-intervention periods.
Future research should compare the integration of problem list data alongside and in place of administrative claims data. Predictive models built on problem list data, for example, would need to be validated separately and not simply substituted for claims data, as the underlying workflows behind administrative claims and problem lists are not the same. The effect of Problem List auto-save should be evaluated to determine whether it is sustained over time. The evolution of the problem list over the course of an admission and over the course of a patient’s longitudinal care can also be examined in subsequent work. Finally, as problem list capture improves, issues seen in administrative claims could arise there as well; “problem creep” or “problem clutter” may be unintended consequences of these efforts.
Conclusion
A workflow-sensitive, two-fold intervention to increase capture of the patient problem list resulted in a statistically significant increase in recorded problems. These problems were recorded even before patients had left the emergency room to arrive on the hospital ward. Decision support integrating coded diagnostic data is enabled early in admission for all patients, not only those who have been admitted before. Problems recorded in this way improved in both quantity and content.
References
- 1. Weed LL. Medical records that guide and teach. The New England Journal of Medicine. 1968;278:593–600. doi: 10.1056/NEJM196803142781105.
- 2. Weed LL. The problem oriented record as a basic tool in medical education, patient care and clinical research. Annals of Clinical Research. 1971;3:131–4.
- 3. World Health Organization. International Classification of Diseases (ICD). 2014 [cited 2014 Mar 1]. Available from: http://www.who.int/classifications/icd/en/
- 4. Gonzalez CJ, Rivera CA, Martin RJ, Mergian GA, Cruz H, Agins BD. Using computer-based monitoring and intervention to prevent harmful combinations of antiretroviral drugs in the New York State AIDS Drug Assistance Program. Joint Commission Journal on Quality and Patient Safety. 2012;38(6):269–76. doi: 10.1016/s1553-7250(12)38034-3.
- 5. Stockl KM, Le L, Harada AS, Zhang S. Use of controller medications in patients initiated on a long-acting beta2-adrenergic agonist before and after safety alerts. American Journal of Health-System Pharmacy. 2008;65(16):1533–8. doi: 10.2146/ajhp070685.
- 6. Noirot LA, Reichley R, Dunagan WC, Bailey TC. Using outpatient prescription claims to evaluate medication adherence in an acute myocardial infarction population. AMIA Annual Symposium Proceedings. 2005:1062.
- 7. Floor-Schreudering A, Heringa M, Buurma H, Bouvy ML, De Smet PA. Missed drug therapy alerts as a consequence of incomplete electronic patient records in Dutch community pharmacies. The Annals of Pharmacotherapy. 2013;47(10):1272–9. doi: 10.1177/1060028013501992.
- 8. Ojeleye O, Avery A, Gupta V, Boyd M. The evidence for the effectiveness of safety alerts in electronic patient medication record systems at the point of pharmacy order entry: a systematic review. BMC Medical Informatics and Decision Making. 2013;13:69. doi: 10.1186/1472-6947-13-69.
- 9. McCarthy EP, Iezzoni LI, Davis RB, Palmer RH, Cahalane M, Hamel MB, et al. Does clinical evidence support ICD-9-CM diagnosis coding of complications? Medical Care. 2000;38(8):868–76. doi: 10.1097/00005650-200008000-00010.
- 10. Weingart SN, Iezzoni LI, Davis RB, Palmer RH, Cahalane M, Hamel MB, et al. Use of administrative data to find substandard care: validation of the complications screening program. Medical Care. 2000;38(8):796–806. doi: 10.1097/00005650-200008000-00004.
- 11. Iezzoni LI, Daley J, Heeren T, Foley SM, Hughes JS, Fisher ES, et al. Using administrative data to screen hospitals for high complication rates. Inquiry. 1994;31(1):40–55.
- 12. Carter GM, Newhouse JP, Relles DA. How much change in the case mix index is DRG creep? Journal of Health Economics. 1990;9(4):411–28. doi: 10.1016/0167-6296(90)90003-l.
- 13. Seiber EE. Physician code creep: evidence in Medicaid and State Employee Health Insurance billing. Health Care Financing Review. 2007;28(4):83–93.
- 14. Hsia DC, Krushat WM, Fagan AB, Tebbutt JA, Kusserow RP. Accuracy of diagnostic coding for Medicare patients under the prospective-payment system. The New England Journal of Medicine. 1988;318(6):352–5. doi: 10.1056/NEJM198802113180604.
- 15. Centers for Medicare and Medicaid Services. Meaningful Use. 2014. Available from: http://www.cms.gov/Regulations-and-Guidance/Legislation/EHRIncentivePrograms/Meaningful_Use.html
- 16. Holmes C. The problem list beyond meaningful use. Part I: The problems with problem lists. Journal of AHIMA. 2011;82(2):30–3.
- 17. Hartung DM, Hunt J, Siemienczuk J, Miller H, Touchette DR. Clinical implications of an accurate problem list on heart failure treatment. Journal of General Internal Medicine. 2005;20(2):143–7. doi: 10.1111/j.1525-1497.2005.40206.x.
- 18. Banerjee ES, Gambler A, Fogleman C. Adding obesity to the problem list increases the rate of providers addressing obesity. Family Medicine. 2013;45:629–33.
- 19. Warren JJ, Collins J, Sorrentino C, Campbell JR. Just-in-time coding of the problem list in a clinical environment. Proceedings / AMIA Annual Symposium. 1998:280–4.
- 20. Meystre S, Haug PJ. Automation of a problem list using natural language processing. BMC Medical Informatics and Decision Making. 2005;5:30. doi: 10.1186/1472-6947-5-30.
- 21. Meystre SM, Haug PJ. Comparing natural language processing tools to extract medical problems from narrative text. AMIA Annual Symposium Proceedings. 2005:525–9.
- 22. Meystre S, Haug PJ. Natural language processing to extract medical problems from electronic clinical documents: performance evaluation. Journal of Biomedical Informatics. 2006;39:589–99. doi: 10.1016/j.jbi.2005.11.004.
- 23. Chen ES, Wright A, Maloney FL, Van Putten C, Paterno MD, Goldberg HS. Enhancing clinical problem lists through data mining and natural language processing. AMIA Annual Symposium Proceedings. 2008:901.
- 24. Solti I, Aaronson B, Fletcher G, Solti M, Gennari JH, Cooper M, et al. Building an automated problem list based on natural language processing: lessons learned in the early phase of development. AMIA Annual Symposium Proceedings. 2008:687–91.
- 25. Wright A, Chen ES, Maloney FL. An automated technique for identifying associations between medications, laboratory results and problems. Journal of Biomedical Informatics. 2010;43:891–901. doi: 10.1016/j.jbi.2010.09.009.
- 26. Pacheco JA, Thompson W, Kho A. Automatically detecting problem list omissions of type 2 diabetes cases using electronic medical records. AMIA Annual Symposium Proceedings. 2011;2011:1062–9.
- 27. Wright A, Pang J, Feblowitz JC, Maloney FL, Wilcox AR, McLoughlin KS, et al. Improving completeness of electronic problem lists through clinical decision support: a randomized, controlled trial. Journal of the American Medical Informatics Association. 2012;19:555–61. doi: 10.1136/amiajnl-2011-000521.
- 28. Mehta N, Vakharia N, Wright A. EHRs in a Web 2.0 World: Time to Embrace a Problem-List Wiki. Journal of General Internal Medicine. 2013;29:434–6. doi: 10.1007/s11606-013-2652-5.
- 29. Mowery DL, Jordan P, Wiebe J, Harkema H, Dowling J, Chapman WW. Semantic annotation of clinical events for generating a problem list. AMIA Annual Symposium Proceedings. 2013;2013:1032–41.
- 30. Charlson ME, Pompei P, Ales KL, MacKenzie CR. A new method of classifying prognostic comorbidity in longitudinal studies: development and validation. Journal of Chronic Diseases. 1987;40(5):373–83. doi: 10.1016/0021-9681(87)90171-8.
- 31. Deyo RA, Cherkin DC, Ciol MA. Adapting a clinical comorbidity index for use with ICD-9-CM administrative databases. Journal of Clinical Epidemiology. 1992;45(6):613–9. doi: 10.1016/0895-4356(92)90133-8.
- 32. Kottke TE, Baechler CJ. An algorithm that identifies coronary and heart failure events in the electronic health record. Preventing Chronic Disease. 2013;10:E29. doi: 10.5888/pcd10.120097.
- 33. RStudio. RStudio, version 0.98.501. 2012.
- 34. Wickham H. The Split-Apply-Combine Strategy for Data Analysis. Journal of Statistical Software. 2011;40(1):1–29.
- 35. Dowle M, S T, Lianoglou S, Srinivasan A. data.table: Extension of data.frame. Version 1.9.2. 2014.
- 36. Jaccard P. Nouvelles recherches sur la distribution florale. Bull Soc Vaud Sci Nat. 1908;(44):223–70.



