Abstract
Objective
Problem-based charting (PBC) is a method for clinician documentation in commercially available electronic medical record systems that integrates note writing and problem list management. We report the effect of PBC on problem list utilization and accuracy at an academic intensive care unit (ICU).
Materials and Methods
An interrupted time series design was used to assess the effect of PBC on problem list utilization, defined as the number of new problems added to the problem list by clinicians per patient encounter, and on problem list accuracy, determined by calculating the recall and precision of the problem list in capturing 5 common ICU diagnoses.
Results
In total, 3650 and 4344 patient encounters were identified before and after PBC implementation at Stanford Hospital, respectively. An increase of 2.18 problems (>50%) in the mean number of new problems added to the problem list per patient encounter can be attributed to the initiation of PBC. There was a significant increase in recall attributable to the initiation of PBC for sepsis (β = 0.45, P < .001) and acute renal failure (β = 0.2, P = .007), but not for acute respiratory failure, pneumonia, or venous thromboembolism.
Discussion
The problem list is an underutilized component of the electronic medical record that can be a source of clinician-structured data representing the patient’s clinical condition in real time. PBC is a readily available tool that can integrate problem list management into physician workflow.
Conclusion
PBC improved problem list utilization and accuracy at an academic ICU.
Keywords: problem list, electronic medical record, problem-oriented charting, physician documentation, structured data
BACKGROUND AND SIGNIFICANCE
Much of the clinical data in current electronic medical records (EMRs) are stored in the form of unstructured free-text notes that are difficult to organize and process on a large scale, which prevents the EMR from being used as a reliable source of data for research, quality, and informatics applications for patient care, such as clinical decision support.1 Although standardized ontological systems are used to structure clinical data, clinicians often instead describe data in unstructured narratives that are better suited to the way many of them are trained to cognitively approach the highly contextual and nuanced nature of clinical medicine.2 How to create a system that effectively captures and structures high-quality clinical data in a way that is compatible with how clinicians are trained to think about and describe patients remains an important question in designing the next generation of EMRs.
The problem list is a component of the EMR that could potentially contribute to such a design. First described by Lawrence Weed3 in 1968, the problem list is a mental model for the way clinicians are trained to organize clinical information. Clinicians use their expertise to summarize clinical data into structured patient assessments in the form of problem lists, which are an important medium of data used to quickly communicate with other clinicians and formulate treatment plans. If linked to the EMR, electronic problem lists can be a source of structured clinical data used to more effectively communicate information for direct patient care, as well as for a broad array of informatics applications.4 For example, the electronic problem list in a patient’s chart can serve as an abbreviated, up-to-date summary of the patient’s health condition that can be quickly shared among specialists and care teams to maintain continuity of care. By contrast, most clinicians currently manually search through the individual progress notes from each patient encounter to formulate an updated problem list, which is then saved in yet another free-text progress note. Further, accurate electronic problem lists could improve physicians’ adherence to practice guidelines for chronic conditions such as heart failure5 and could be used as data inputs for informatics tools such as clinical decision support systems and data-mining algorithms.6 The Institute of Medicine has recommended that every EMR contain an updated problem list, a feature that is also required under Meaningful Use Stage 3.7
With most current EMR designs, however, electronic problem lists are often incomplete, inaccurate, and outdated, making them unreliable sources of clinical data and limiting their utility in unlocking the EMR’s data-processing potential.8 Although clinicians often do create problem lists when assessing patients, they are usually saved in unstructured free-text form in individual progress notes rather than in the electronic problem list.9 Several possible reasons exist for this pattern of use. While most modern EMR designs include an electronic problem list in each patient chart that can be modified by adding, deleting, and editing problems, doing so requires extra steps that must be performed in the EMR in addition to completing a progress note for the chart. Multiple physicians from different specialties are often involved in an individual patient’s care and access the same chart in the EMR, but it is often not clear which physician is responsible for performing the additional step of managing the problem list as part of the charting workflow. Results from one study suggest that this burden often falls disproportionately to primary care physicians,10 who already experience significant time pressures and have difficulty taking on this additional task consistently. Electronic problem lists that are inaccurate and incomplete further discourage clinicians from using and updating them, creating a vicious cycle that increasingly complicates problem list management.11
Several approaches to improving the electronic problem list have been reported. Some health care institutions have promoted electronic problem list management by linking problem list items to billing codes and offering financial incentives, but this strategy has had only limited success with problems describing major comorbidities that are associated with pay-for-performance metrics.4 Automated inference-based clinical decision support approaches to problem list management have also been described and hold promise, but require the creation of prespecified inference rules for each diagnosis and have not been used in any significant capacity in the clinical setting.12 Neither approach, however, directly addresses the question of how to better capture and structure the problem lists that many clinicians are already generating and storing in free-text progress notes.
Problem-based charting (PBC) is a note-writing tool in the Epic EMR system that integrates electronic problem list management into the process of completing patient encounter notes. In a similar fashion to the problem-oriented medical record first introduced by Weed,3 PBC is designed so that clinicians document and update assessments and plans under the associated clinical problems in the electronic problem list (Figure 1), rather than in a single note. Under this system, problem list management becomes a built-in step for clinicians during every patient encounter when assessments and plans are documented. Although PBC is an available feature in Epic and similar tools exist in other commercially available EMRs, it is not widely used as a charting method and there have been no studies evaluating its effectiveness in improving the electronic problem list.
Figure 1.

Overview of problem-based charting in Epic
(A) The clinician is able to chart the subjective and objective portions of a progress note using free text in the PBC template. (B) The assessment and plan portion of the progress note is written in free text under the corresponding problem list item in the PBC template. New problems can be added by searching for the problem in a free-text search box; the selected problem is then added to the problem list. A progress note can then be automatically generated from the content entered in the subjective and objective portions and the problem list. (C) The electronic problem list is automatically updated with the hospital problems added during PBC.
OBJECTIVE
In November 2013, PBC was introduced as the required method of charting for all clinicians staffing the intensive care unit (ICU) at Stanford Hospital using the Epic EMR system. We herein report an interrupted time series study at an ICU of an academic institution to evaluate the effectiveness of PBC in improving the utilization and accuracy of the electronic problem list.
METHODS
We conducted an interrupted time series study of patient encounters in the ICU at Stanford Hospital over 48 consecutive months from November 1, 2011 to November 1, 2015, which included 24 months before (pre-PBC) and after (post-PBC) the initiation of PBC as the standard method of charting by ICU clinicians. While it was technically possible to generate non-PBC notes in Epic, PBC use was strictly enforced by the ICU leadership, and attending physicians were instructed to not cosign resident notes unless they were written with PBC. The study was approved by the Stanford Institutional Review Board. No other institution-wide policies related specifically to problem list management were implemented during this time. Patient data were queried and extracted from the EMR via the Stanford Translational Research Integrated Database Environment (STRIDE) clinical data warehouse13 and hospital administrative records. To evaluate problem list utilization, the total number of new hospital problems added to the electronic problem list by clinicians was ascertained per patient encounter. To assess problem list accuracy, we calculated the recall (sensitivity) and precision (positive predictive value) of 5 common ICU conditions – sepsis, acute respiratory failure, acute renal failure, pneumonia, and venous thromboembolism – for the portion of the problem list that contained newly added problems for hospitalizations, with the billing code list as the reference standard. The billing code list consisted of International Classification of Diseases, Ninth Revision (ICD9) codes that were independently extracted for each patient encounter by trained medical coders for billing and administrative purposes. Coders are trained to evaluate the entire patient chart, including notes and orders (but not the electronic problem list), to extract diagnostic codes after the patient encounter is completed.
A manual chart review of 100 randomly selected pre- and post-PBC encounters was performed by 2 internal medicine physicians, and interrater agreement with the billing code list was determined by calculating percentage agreement and Cohen’s kappa for each diagnosis. The recall for each clinical condition was calculated by dividing the total number of patient encounters that contained the corresponding ICD9 codes in both the billing code list and the problem list (true positives) by the number of patient encounters that contained the corresponding ICD9 codes on the billing code list (prevalence). Precision, or positive predictive value, was calculated by dividing the true positives by the total number of encounters that contained corresponding ICD9 codes on the problem list. Statistical comparisons of outcomes between pre- and post-PBC periods were performed using t-tests, chi-squared tests, and interrupted time series analyses, using an ordinary least squares segmented regression model with reported beta coefficients representing the differences in the slopes of the pre- and post-PBC regression lines.14 The time series was constructed with the arithmetic means of observations over 3 consecutive-month periods (quarters).
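As a concrete check of these definitions, the sketch below recomputes recall and precision from the pre- and post-PBC sepsis counts reported in Table 2. This is a minimal illustration of the arithmetic, not the study's actual analysis code.

```python
# Recall (sensitivity) and precision (positive predictive value) of the
# problem list against the billing code list, as defined in Methods.
# Counts are the sepsis figures from Table 2; illustrative recomputation only.

def recall(true_positives: int, prevalence: int) -> float:
    """Encounters with the ICD9 code on both lists / encounters on the billing code list."""
    return true_positives / prevalence

def precision(true_positives: int, on_problem_list: int) -> float:
    """Encounters with the ICD9 code on both lists / encounters on the problem list."""
    return true_positives / on_problem_list

# Pre-PBC sepsis: 66 true positives, 753 on billing code list, 67 on problem list
pre_recall, pre_precision = recall(66, 753), precision(66, 67)
# Post-PBC sepsis: 401 true positives, 991 on billing code list, 421 on problem list
post_recall, post_precision = recall(401, 991), precision(401, 421)

print(f"recall: {pre_recall:.0%} -> {post_recall:.0%}")  # recall: 9% -> 40%
print(f"precision: {pre_precision:.3f} -> {post_precision:.3f}")
```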
RESULTS
In total, 3650 and 4344 patient encounters were identified in the pre-PBC and post-PBC periods, respectively. There were no significant differences in the distributions of age, gender, race, smoking status, and length of stay between the 2 groups (Table 1).
Table 1.
ICU patient characteristics before and after the initiation of problem-based charting
| | Pre-PBC (11/2011–11/2013) | Post-PBC (11/2013–11/2015) |
|---|---|---|
| Total no. of patient encounters | 3650 | 4344 |
| Age (years) | 62 | 62 |
| Gender (% female) | 43 | 44 |
| Current tobacco use (%) | 10 | 10 |
| Ethnicity (%) | ||
| Caucasian | 57 | 54 |
| Black or African-American | 5 | 5 |
| Asian | 13 | 13 |
| Hispanic | 19 | 21 |
| Length of stay (days) | 11 | 11 |
There were no significant differences in patient population before and after PBC was initiated in the Stanford ICU.
Clinicians in the post-PBC period added a higher mean number of new problems per encounter than in the pre-PBC period (6.2, standard deviation [SD] 4.7 vs 3.6, SD 3.2, P < .001). The interrupted time series analysis demonstrated that an increase of 2.18 problems (>50%) in the mean number of new problems added to the electronic problem list per patient encounter can be attributed to the initiation of PBC (Figure 2).
Figure 2.

Interrupted time series analysis of problem list utilization before and after the initiation of problem-based charting, as measured by the number of new problems added to the electronic problem list. There was a significant increase of 2.18 problems (>50%) in the mean number of new problems added to the problem list that can be attributed to the initiation of PBC.
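To illustrate the segmented regression idea behind this analysis, the sketch below fits separate least-squares slopes to noiseless synthetic quarterly data before and after a hypothetical intervention quarter; the difference in slopes plays the role of the reported β. All numbers are invented for illustration, and the study's actual model was a single segmented OLS regression rather than two separate fits.

```python
# Simplified interrupted-time-series illustration: estimate pre- and
# post-intervention slopes by ordinary least squares and report their
# difference (analogous to the beta coefficient described in Methods).
# Synthetic, noiseless quarterly data -- NOT the study's data.

def ols_slope(ts, ys):
    """Least-squares slope of y on t: cov(t, y) / var(t)."""
    n = len(ts)
    t_bar = sum(ts) / n
    y_bar = sum(ys) / n
    cov = sum((t - t_bar) * (y - y_bar) for t, y in zip(ts, ys))
    var = sum((t - t_bar) ** 2 for t in ts)
    return cov / var

T0 = 8  # hypothetical quarter in which the intervention starts
quarters = list(range(16))
# Outcome: baseline trend 0.1/quarter, plus a level jump of 1.5 and an
# extra 0.2/quarter of slope after the intervention (all invented values).
y = [2 + 0.1 * t + (1.5 + 0.2 * (t - T0) if t >= T0 else 0) for t in quarters]

pre_slope = ols_slope(quarters[:T0], y[:T0])   # 0.1
post_slope = ols_slope(quarters[T0:], y[T0:])  # 0.3
beta = post_slope - pre_slope                  # slope change attributable to intervention
print(round(beta, 3))  # 0.2
```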
We evaluated whether the recall and precision of the electronic problem list in capturing 5 common ICU conditions changed after the initiation of PBC. Table 2 lists, for the pre- and post-PBC periods, the frequency of patient encounters containing each queried ICU condition on the electronic problem list, the number of true positive encounters (the queried condition was present on both the electronic problem list and the billing code list), the overall prevalence of each condition (the queried condition was present on the billing code list), and the resulting recall and precision.
Table 2.
Arithmetic components and calculated values of recall and precision for each queried ICU condition before and after the initiation of PBC
| Clinical condition | No. of encounters on problem list | No. of encounters on billing code list (prevalence) | No. of encounters on problem list and billing code list (true positives) | Recall | Precision |
|---|---|---|---|---|---|
| Sepsis | 67 (2%) vs 421 (13%) | 753 (24%) vs 991 (30%) | 66 (2%) vs 401 (12%) | 9% vs 40% | 98% vs 95% |
| Acute respiratory failure | 435 (14%) vs 747 (22%) | 1190 (37%) vs 1208 (36%) | 419 (13%) vs 710 (21%) | 35% vs 59% | 96% vs 95% |
| Acute renal failure | 320 (10%) vs 611 (18%) | 894 (28%) vs 1064 (32%) | 310 (10%) vs 599 (18%) | 35% vs 56% | 97% vs 98% |
| Pneumonia | 189 (6%) vs 244 (7%) | 487 (15%) vs 505 (15%) | 143 (4%) vs 187 (6%) | 29% vs 37% | 76% vs 77% |
| Venous thromboembolism | 68 (2%) vs 132 (4%) | 197 (6%) vs 259 (8%) | 57 (2%) vs 110 (3%) | 29% vs 42% | 84% vs 83% |
The numbers of patient encounters and the percentages of the total study population are reported. Recall for all 5 ICU problems significantly increased in the post-PBC period, while precision remained unchanged.
The recall of the electronic problem list was higher in the post-PBC period for all 5 queried ICU conditions: sepsis (9% vs 40%, P < .001), acute respiratory failure (35% vs 59%, P < .001), acute renal failure (35% vs 56%, P < .001), pneumonia (29% vs 37%, P < .001), and venous thromboembolism (29% vs 42%, P < .001). The precision for all 5 conditions remained unchanged in the post-PBC period: sepsis (98% vs 95%, P = .3), acute respiratory failure (96% vs 95%, P = .2), acute renal failure (97% vs 98%, P = .59), pneumonia (76% vs 77%, P = .3), and venous thromboembolism (84% vs 83%, P = .5). However, the interrupted time series analysis demonstrated a significant increase in recall attributable to the initiation of PBC for sepsis (β = 0.45, P < .001) and acute renal failure (β = 0.2, P = .007), but not for acute respiratory failure, pneumonia, or venous thromboembolism (Figure 3). Interrater agreement between physician review and the billing code list is reported in Table 3.
Figure 3.
Interrupted time series analysis of problem list recall before and after initiation of PBC for 5 common ICU conditions (A–E). Although problem list recall increased over time in all cases, a significant effect was attributed to the initiation of PBC for sepsis and acute renal failure.
Table 3.
Interrater agreement between physician review and the billing code list for the 5 queried ICU diagnoses based on 100 randomly selected pre- and post-PBC encounters
| Clinical condition | No. observed from physician chart review | No. observed from billing code list | % agreement (observed) | % agreement (expected by chance) | Kappa statistic |
|---|---|---|---|---|---|
| Sepsis | 40 | 30 | 84% | 54% | 0.65 |
| Acute respiratory failure | 40 | 38 | 84% | 52.4% | 0.66 |
| Acute renal failure | 37 | 33 | 84% | 54.4% | 0.65 |
| Pneumonia | 12 | 6 | 94% | 83.4% | 0.63 |
| Venous thromboembolism | 23 | 13 | 86% | 70% | 0.53 |
More observations were captured for every diagnosis from physician review than from the billing code list. Kappa statistics were in the moderate to substantial range.
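The kappa statistics in Table 3 follow from the observed and chance-expected agreement via Cohen's formula, κ = (p_o − p_e) / (1 − p_e). The sketch below recomputes them from the table's rounded percentages, so results match the reported kappas to within rounding; it is an illustrative sanity check, not the study's analysis code.

```python
# Cohen's kappa from observed and chance-expected percent agreement,
# recomputed from the rounded Table 3 percentages as a sanity check.

def cohens_kappa(observed: float, expected: float) -> float:
    """kappa = (p_o - p_e) / (1 - p_e), with agreements given as proportions."""
    return (observed - expected) / (1 - expected)

table3 = {  # condition: (% agreement observed, % agreement expected by chance)
    "sepsis": (0.84, 0.54),
    "acute respiratory failure": (0.84, 0.524),
    "acute renal failure": (0.84, 0.544),
    "pneumonia": (0.94, 0.834),
    "venous thromboembolism": (0.86, 0.70),
}
for condition, (po, pe) in table3.items():
    print(f"{condition}: kappa = {cohens_kappa(po, pe):.2f}")
```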
DISCUSSION
The electronic problem list is an underutilized component of many EMRs that can be a source of high-quality structured clinical data generated by clinicians at the time of patient encounters. An actively managed electronic problem list can serve as a dynamic table of contents for the medical record that reflects the evolving clinical condition of a patient in real time. The design and implementation of EMR interfaces that reorganize and structure information in such a way that can better support clinicians’ cognitive needs and workflow, as well as informatics applications such as point-of-care clinical decision support tools and EMR phenotyping, is an important step toward achieving our national priority of developing better technologies to capture and organize electronic clinical data for a learning health care system.15,16 Our study demonstrates that PBC, a readily available, albeit not widely used, feature of commercially available EMR platforms such as Epic, can improve both problem list utilization and accuracy, suggesting that it may be a viable tool to help clinicians more effectively manage electronic problem lists.
Our data show a significant increase in problem list utilization and accuracy for multiple ICU diagnoses after PBC was introduced. Using interrupted time series analysis, we find that for 2 of the 5 ICU conditions studied, sepsis and acute renal failure, the increase in recall can be specifically attributed to the initiation of PBC, indicating that PBC encouraged clinicians to more reliably add these problems to the electronic problem list. For acute respiratory failure, pneumonia, and venous thromboembolism, however, the higher post-PBC recall appears to reflect secular trends of gradually increasing recall that predated PBC, suggesting that factors other than PBC contributed to the increased addition of these problems to the problem list. The post-PBC recall for all diagnoses appeared to reach a ceiling in the 0.5–0.6 range, so it is possible that the PBC-specific effect on recall for acute respiratory failure, pneumonia, and venous thromboembolism was blunted because pre-PBC recall for these conditions already approached this observed ceiling. Importantly, the precision (positive predictive value) of the analyzed diagnoses remained high in all cases, reflecting no trade-off in precision for the observed improvement in recall.
An accurate and succinct electronic problem list may have broad applications for helping clinicians organize and interpret clinical data for patient care, and for helping informaticians utilize the EMR for research and innovation. Reviewing and documenting clinical data requires a high degree of cognitive synthesis by clinicians, which would be greatly aided by EMR interfaces that organize the data in a clinically meaningful way.17 The problem list is a natural form of such organization for otherwise highly unstructured, narrative clinical data, because it is already part of the way many clinicians are trained to formulate diagnostic and therapeutic plans.18 One example of how the PBC interface improves the organization of narrative clinical data is that it allows clinicians to sort past assessment and plan notes charted under the electronic problem list by the clinical problem of interest, which may be particularly useful for extracting relevant information from long and complex patient charts. For example, a clinician interested in reviewing prior clinical assessments for only one particular problem in a patient with multiple comorbidities could use PBC to select and view prior notes linked to the diagnosis of interest in the electronic problem list, rather than manually sorting through the large volume of otherwise irrelevant information accumulated in all of the patient's prior progress notes (Figure 4).
Figure 4.
PBC allows for prior narrative components of clinician-generated assessments in a patient’s chart to be linked to the relevant problem on the electronic problem list and organized in a longitudinal fashion. The example illustrates how a physician using PBC can select a problem of interest, such as hypertension, and view all prior written assessments for that problem in one window, rather than having to individually toggle through prior progress notes that may contain information irrelevant to the current clinical encounter.
Informatics-oriented strategies for improving problem list accuracy have been described in the literature, including employing inference rules12 and natural language processing.19 These methods rely on using preprogrammed heuristics to process data in the EMR after they have been entered in some form by clinicians during patient encounters. For example, Wright et al.10 described a set of inference rules for a predefined list of clinical diagnoses generated by a team of knowledge engineers and clinical experts that were used to generate problem list items from raw, unstructured EMR data. While these methods have indeed shown promise in improving problem lists, their reliance on predefined heuristics to process data in a post hoc fashion may limit their scope and applicability, and exposes them to inherent biases that affect the accuracy of the outputs. Clinicians undergo years of training and practice to learn how to synthesize highly messy and contextual data from clinical encounters, ranging from well-defined laboratory values to subtle aspects of a patient’s history, into higher-order structured data, such as lists of diagnoses and problems to address. The process of creating and validating individual rules for each clinical diagnosis in an attempt to simulate how a human clinician thinks is resource-intensive and often imperfect, as it is difficult to incorporate into an algorithm the level of nuance and cognitive flexibility that clinicians employ when interpreting data. Further, these post hoc methods use only data that ended up being stored in the EMR and cannot, for example, process information about patients that may have been witnessed by clinicians during encounters but not documented, thereby resulting in possible systematic biases due to the retrospective nature of such methods. 
PBC, on the other hand, harnesses the direct cognitive power of clinicians to structure EMR data through a documentation interface that prospectively captures, in electronic form, the structured data that clinicians are already generating while taking care of patients. For example, PBC allows clinicians to electronically add, delete, and resolve problems on the problem list at the point of care as they write patient assessments, which can give EMR data a highly granular temporal dimension. Further, by having clinicians document directly under the electronic problem list, PBC enables them to electronically tag the narrative portions of clinical assessments with the corresponding diagnoses on the list. This expert-performed labeling of narrative clinical data in real time at the point of care may have broad applications for EMR phenotyping.20,21
It is important to note that our study was conducted in the ICU of an academic teaching hospital, a clinical setting with characteristics that may limit the successful adoption of PBC in nonteaching and non-ICU settings. In the Stanford ICU, both the house staff (the residents or fellows in training, who are primarily responsible for writing the initial notes for patient encounters) and supervising attending physicians are able to modify electronic problem lists. While we were unable to distinguish whether electronic problem lists were modified by house staff or attending physicians, we suspect that much of the list management prior to the initiation of PBC was performed by attending physicians reviewing house staff notes to assist in billing, while the increase in list utilization after PBC was likely driven primarily by the house staff, who under PBC had to modify the list in order to complete the initial assessment and plan portion of the note. PBC may have a stronger effect on problem list utilization in nonteaching clinical settings, where attending physicians typically write the entire note and, unlike attending physicians at academic institutions whose house staff perform most of the charting, may face time pressures that limit their ability to also update the problem list using traditional charting methods. Further, critical care clinicians are often trained to describe assessments by organ system rather than by problem, owing to the complexity of disease states in critically ill patients, an approach that is more difficult to chart with this version of PBC. For example, it would be more clinically meaningful to chart one composite assessment and plan for a patient with pneumonia who developed acute respiratory failure than to chart each problem separately.
A manual review of random patient charts in our study after the initiation of PBC indeed revealed that clinicians would often add one principal problem to the problem list, such as sepsis or acute respiratory failure, and chart the related diagnoses, such as pneumonia, using free text in the assessment and plan without separately adding them to the problem list. This workaround may blunt the effect of PBC on problem list utilization and accuracy, and could partially explain the progressive decrease in these metrics after the initial increase observed in our interrupted time series analysis, which may reflect decreasing clinician adherence to PBC after its initial adoption in the ICU. These limitations, however, may not necessarily apply to non-ICU clinical settings, such as outpatient clinics, where clinicians are more likely to describe their assessments by clinical problem.
Limitations
There are several limitations to our study. While the interrupted time series analysis is a robust quasi-experimental design, it does not control for confounders that correlate temporally with our intervention. For example, a hospital-wide initiative to identify sepsis was initiated at Stanford around the time PBC was introduced in the ICU, which may have increased awareness of sepsis among clinicians and contributed to the dramatic rise in the inclusion of sepsis on electronic problem lists after PBC. Further, the billing code list, while extracted independently at the time of patient encounters by trained coders, is an imperfect independent reference standard. We found in our manual chart review that agreement between physician review and billing code list in detecting the 5 diagnoses was in the moderate to substantial range. Coders are trained to follow strict rules for extracting diagnostic codes, and therefore may miss diagnoses that were present during patient encounters because of how they were documented by clinicians, resulting in false negatives that may understate problem list precision. Indeed, more observations were captured for each diagnosis from physician review than from the billing code list among the 100 encounters we reviewed. There may also be false positives in the billing code list. Of note, the increase in problem list recall may be understated, because the billing code list includes diagnoses captured from the entire hospital encounter, including after transfer out of the ICU, while we expect PBC’s effect on problem list utilization to be mostly in ICU diagnoses, since PBC was not enforced elsewhere in the hospital. Further, while coders do not directly use electronic problem lists to extract codes for billing code lists, they do use progress notes, which, under PBC, would contain elements of electronic problem lists that clinicians used for the chart and may bias problem list precision, but not likely recall. 
User satisfaction with PBC also needs to be explored in different clinical settings. Structured documentation templates in the EMR are typically less accepted by clinicians, since most value expressivity and flexibility in how they describe patients.2 PBC, however, enables clinicians to chart the subjective and assessment portions of notes using free text, thereby preserving the components of notes that typically require expressivity, while allowing for the inherently structured components, problem lists, to be electronically structured in the EMR. We expect that PBC usability will be higher in clinical settings, such as outpatient clinics and general medicine wards, where clinicians are typically trained to think about patients by clinical problem, than in the ICU, where our study was based.
CONCLUSIONS
PBC is a readily available charting tool in commercially available EMRs that clinical institutions can adopt to improve the usage and accuracy of electronic problem lists. In doing so, institutions may be better positioned to unlock the informatics capabilities of their EMRs by enabling clinicians to connect narrative clinical assessments to structured electronic clinical data in real time at the point of care.
DISCLOSURE
Funding Statement: This research was supported by a grant from the Stanford Society for Physician Scholars.
JHC was supported in part by National Institutes of Health Big Data 2 Knowledge award number K01ES026837 through the National Institute of Environmental Health Sciences.
Patient data were extracted and deidentified by Gomathi Krishnan of the STRIDE project, a research and development project at Stanford University to create a standards-based informatics platform supporting clinical and translational research. The STRIDE project described was supported by the National Center for Research Resources and the National Center for Advancing Translational Sciences, National Institutes of Health, through grant UL1 RR025744.
The sponsors had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; or preparation, review, and approval of the manuscript. Content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health or Stanford Healthcare.
Competing Interests Statement: The authors have no competing interests to declare. None of the authors has any financial affiliations with Epic Systems Corporation.
Contributorship Statement: RCL conceived the study, performed the analysis, and drafted the manuscript. TG, TC, LS, DF, and JHC contributed to analysis design, chart review, and manuscript revisions. JHC supervised the study.
References
- 1. Xu J, Rasmussen LV, Shaw PL, et al. Review and evaluation of electronic health records–driven phenotype algorithm authoring tools for clinical and translational research. J Am Med Inform Assoc. 2015;22(6):1251–60.
- 2. Rosenbloom ST, Denny JC, Xu H, Lorenzi N, Stead WW, Johnson KB. Data from clinical notes: a perspective on the tension between structure and flexible documentation. J Am Med Inform Assoc. 2011;18(2):181–86.
- 3. Weed LL. Medical records that guide and teach. N Engl J Med. 1968;278:593–600.
- 4. Wright A, McCoy AB, Hickman TT, et al. Problem list completeness in electronic health records: a multi-site study and assessment of success factors. Int J Med Inform. 2015;84(10):784–90.
- 5. Hartung DM, Hunt J, Siemienczuk J, Miller H, Touchette DR. Clinical implications of an accurate problem list on heart failure treatment. J Gen Intern Med. 2005;20(2):143–47.
- 6. Wright A, Goldberg H, Hongsermeier T, Middleton B. A description and functional taxonomy of rule-based decision support content at a large integrated delivery network. J Am Med Inform Assoc. 2007;14(4):489–96.
- 7. HITPC Meaningful Use Stage 3 Final Recommendations. 2015. www.healthit.gov/facas/health-it-policy-committee/health-it-policy-committee-recommendations-national-coordinator-health-it.
- 8. Singer A, Yakubovich S, Kroeker AL, et al. Data quality of electronic medical records in Manitoba: do problem lists accurately reflect chronic disease billing diagnoses? J Am Med Inform Assoc. 2016;17(4):412–22.
- 9. Van Vleck TT, Wilcox A, et al. Content and structure of clinical problem lists: a corpus analysis. AMIA Annu Symp Proc. 2008;2008:753–57.
- 10. Wright A, Feblowitz J, Maloney FL, Henkin S, Bates DW. Use of an electronic problem list by primary care providers and specialists. J Gen Intern Med. 2012;27(8):968–73.
- 11. Wright A, Maloney FL, Feblowitz JC. Clinician attitudes toward and use of electronic problem lists: a thematic analysis. BMC Med Inform Decis Mak. 2011;11:36.
- 12. Wright A, Pang J, Feblowitz JC, et al. Improving completeness of electronic problem lists through clinical decision support: a randomized, controlled trial. J Am Med Inform Assoc. 2012;19(4):555–61.
- 13. Lowe HJ, Ferris TA, Hernandez PM, Weber SC. STRIDE: an integrated standards-based translational research informatics platform. AMIA Annu Symp Proc. 2009;2009:391–95.
- 14. Penfold RB, Zhang F. Use of interrupted time series analysis in evaluating health care quality improvements. Acad Pediatr. 2013;13(6 Suppl):S38–44.
- 15. Cusack CM, Hripcsak G, Bloomrosen M, et al. The future state of clinical data capture and documentation: a report from AMIA's 2011 Policy Meeting. J Am Med Inform Assoc. 2013;20(1):134–40.
- 16. Institute of Medicine. Best Care at Lower Cost. 2012: 19. http://ucf-rec.org/wp-content/uploads/2012/09/IOM-Report9-6-12.pdf. Accessed July 21, 2017.
- 17. Mamykina L, Vawdrey DK, Stetson PD, Zheng K, Hripcsak G. Clinical documentation: composition or synthesis? J Am Med Inform Assoc. 2012;19(6):1025–31.
- 18. Kaplan DM. Clear writing, clear thinking and the disappearing art of the problem list. J Hosp Med. 2007;2(4):199–202.
- 19. Meystre S, Haug P. Improving the sensitivity of the problem list in an intensive care unit by using natural language processing. AMIA Annu Symp Proc. 2006;2006:554–58.
- 20. Hripcsak G, Albers DJ. Next-generation phenotyping of electronic health records. J Am Med Inform Assoc. 2013;20(1):117–21.
- 21. Halpern Y, Horng S, Choi Y, Sontag D. Electronic medical record phenotyping using the anchor and learn framework. J Am Med Inform Assoc. 2016;23(4):731–40.


