Author manuscript; available in PMC: 2016 Oct 1.
Published in final edited form as: Int J Med Inform. 2015 Jul 17;84(10):784–790. doi: 10.1016/j.ijmedinf.2015.06.011

Problem list completeness in electronic health records: a multi-site study and assessment of success factors

Adam Wright 1,2,3, Allison B McCoy 4, Thu-Trang T Hickman 1, Daniel St Hilaire 3, Damian Borbolla 5, Len Bowes 6, William G Dixon 7, David A Dorr 8, Michael Krall 9, Sameer Malholtra 10, David W Bates 1,2,3, Dean F Sittig 11
PMCID: PMC4549158  NIHMSID: NIHMS711546  PMID: 26228650

Abstract

Objective

To assess problem list completeness using an objective measure across a range of sites, and to identify success factors for problem list completeness.

Methods

We conducted a retrospective analysis of electronic health record data and interviews at ten healthcare organizations in the United States, United Kingdom, and Argentina that use a variety of electronic health record systems: four self-developed and six commercial. At each site, we assessed the proportion of patients with a hemoglobin A1c elevation >= 7.0%, a level diagnostic of diabetes, who had diabetes recorded on their problem list. We then conducted interviews with informatics leaders at the four highest performing sites to determine factors associated with success. Finally, we surveyed all the sites about common practices implemented at the top performing sites to determine whether there was an association between problem list management practices and problem list completeness.

Results

Problem list completeness across the ten sites ranged from 60.2% to 99.4%, with a mean of 78.2%. Financial incentives, problem-oriented charting, gap reporting, shared responsibility, links to billing codes, and organizational culture were identified as success factors at the four hospitals with problem list completeness at or near 90.0%.

Discussion

Incomplete problem lists represent a global data integrity problem that could compromise quality of care and put patients at risk. There was a wide range of problem list completeness across the healthcare facilities. Nevertheless, some facilities have achieved high levels of problem list completeness, and it is important to better understand the factors that contribute to success to improve patient safety.

Conclusion

Problem list completeness varies substantially across healthcare facilities. In our review of EHR systems at ten healthcare facilities, we identified six success factors which may be useful for healthcare organizations seeking to improve the quality of their problem list documentation: financial incentives, problem-oriented charting, gap reporting, shared responsibility, links to billing codes, and organizational culture.

Keywords: electronic health records, problem lists, diabetes, quality

1. Introduction

Since the introduction of the problem-oriented medical record by Lawrence Weed in his landmark article “Medical Records that Guide and Teach” (1), problem lists have become standard in nearly all medical record systems. Although problem lists are designed principally for use in patient care, and they are especially important in primary care, an accurate and complete problem list has many other uses. Problem lists can be used to create patient registries, identify patient populations for quality improvement activities, or conduct research (2–4). Many clinical decision support (CDS) rules also depend on accurate, complete, and coded problem lists (5–11). They can also be shared directly with patients to improve their engagement in their care (12). Furthermore, evidence suggests that more complete and accurate problem lists may improve quality of care (13).

Consider the case of a hypothetical patient with diabetes. If diabetes is properly documented on his or her problem list, it may trigger CDS tools which help remind the patient’s care providers to assess for nephropathy or retinopathy, to measure his or her cholesterol and assess for the risk of heart disease, and to more tightly monitor blood pressure. The presence of diabetes on the problem list may also trigger inclusion in care management programs, registries, and research studies. And, of course, an accurate listing of diabetes on the problem list will inform other care providers, including specialists, covering providers, and emergency physicians who do not otherwise know the patient, that he or she has diabetes. If the same patient’s diabetes is omitted from the problem list, he or she would receive none of these benefits. Appreciating this, the ability to create and maintain a problem list is required for electronic health record (EHR) certification under the Office of the National Coordinator for Health Information Technology’s Authorized Certification Body process (14). Moreover, maintaining a “complete” problem list is a requirement for stages 1 and 2 of the “meaningful use” (15, 16) financial incentive program for EHR adoption in the United States (US), with stage 3 of meaningful use expanding the criteria to include regular review and reconciliation of problem concepts (17).

Despite these benefits and incentives, problem lists are often inaccurate, incomplete, and out-of-date (18–20), and providers find keeping them current a major struggle. A prior study conducted at Brigham and Women’s Hospital (BWH) in Boston, MA found that problem list completeness for outpatients ranged from a low of 4.7% for renal insufficiency or failure, to 50.7% for hypertension, to 78.5% for breast cancer (18). In a qualitative study, we found that when problem lists are incomplete, providers stop relying on them and, in turn, stop updating them, perpetuating a vicious cycle of problem list inaccuracy (21). To date, no systematic investigation of problem list completeness across sites has been conducted, nor have success factors for improving problem list completeness been identified. In this article, we report on these dual investigations conducted to further explore problem list completeness.

2. Methods

We used a three-pronged approach to study problem list completeness for diabetes and hemoglobin A1c (HbA1c) at ten healthcare facilities in the US, United Kingdom (UK), and Argentina. We began with a retrospective analysis of EHR data at the sites to establish a measure of problem list completeness for diabetes. For those sites which had high completeness relative to our sample, we then conducted interviews with informatics leaders to determine facilitators of their success. We focused our interviews on the highly successful sites because we felt that they would be most informative in defining success factors; similar “positive deviance” approaches have been used successfully to study a variety of healthcare problems (22, 23). Finally, we surveyed informatics leaders at all ten sites in our study about their use of the identified facilitators of success.

2.1 Retrospective analysis

We investigated diabetes, and HbA1c in particular, for two reasons. First, diabetes is an important chronic condition and is often the subject of research, quality measurement, and CDS, so ensuring coded documentation of diabetes is important. Second, the American Diabetes Association (ADA) has promulgated a guideline which states that an HbA1c of 6.5% or greater is diagnostic for diabetes (24) – since laboratory results are coded in most EHRs (25), this makes it relatively straightforward to identify patients who are diabetic, regardless of whether diabetes is on their problem list. The study sites were chosen by purposive sampling in an effort to include sites that were diverse in geography, EHR system in place, and type of health system, and that were able to report the data required for the study. By necessity, all sites selected used an EHR system and had the ability to perform a query based on laboratory results and problem list entries, but they were otherwise diverse (see Table 1).

Table 1.

Characteristics of participating sites

Type of Facility | Site | City | Country | EHR
Academic Medical Center | University of Nottingham | Nottingham | United Kingdom | EMIS
Academic Medical Center | Oregon Health and Science University | Portland, OR | United States | Epic
Academic Medical Center | University of Texas Faculty Practice Plan | Houston, TX | United States | Allscripts
Academic Medical Center | Weill Cornell Medical College | New York, NY | United States | Epic
Community Hospital | Hospital Italiano | Buenos Aires | Argentina | Self-developed
Community Hospital | Salford Royal Foundation Trust | Salford, Greater Manchester | United Kingdom | Allscripts
Community Hospital | Wishard Hospital (Eskenazi Hospital since 2013) | Indianapolis, IN | United States | Self-developed
Regional Health System | Intermountain Healthcare | Salt Lake City, UT | United States | Self-developed
Regional Health System | Kaiser Permanente Northwest | Portland, OR (providing care in OR and WA) | United States | Epic
Regional Health System | Partners HealthCare | Boston, MA | United States | Self-developed

For the retrospective analysis of EHR data, we asked each participating site to report on two quantities:

  1. The number of patients at the site who have had at least one outpatient encounter between 1/1/2009 and the date at which the site’s data warehouse was last refreshed (this varied by site) and who have also had at least one HbA1c >= 7.0% since 1/1/2009

  2. Of the patients in (1), the number of these patients who also have diabetes coded on their problem list

We then used these two quantities to calculate the proportion of patients with an HbA1c >= 7.0% who have diabetes on their problem list, which served as our measure of problem list completeness. We raised the HbA1c threshold from 6.5% to 7.0% to account for situations where a provider is treating a patient as pre-diabetic, even though the patient meets the ADA’s diabetes diagnostic criteria. Our goal was to establish a single, repeatable metric with high specificity which could be compared across sites, so we selected HbA1c because it is widely used, has a standardized interpretation, and is consistently available in structured, coded form. Some diabetics may never have had an HbA1c greater than 7.0%, so our screen is not perfectly sensitive (i.e., not all patients who should have diabetes on their problem list were captured), nor does it allow us to identify situations in which patients have diabetes on their problem list but do not actually have diabetes (false positives). Nevertheless, HbA1c represented the best single criterion for consistently assessing diabetes across many diverse sites, and it is highly specific, since HbA1c values above 7.0% exceed the ADA’s diagnostic threshold for diabetes.
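For concreteness, the completeness calculation can be sketched as a short program. This is a minimal illustration with hypothetical record structures and field names, not the query any study site actually ran:

```python
from dataclasses import dataclass, field

@dataclass
class Patient:
    """Hypothetical, simplified patient record for illustration."""
    max_hba1c: float                                 # highest HbA1c on file since 1/1/2009
    problem_list: set = field(default_factory=set)   # coded problem list entries

def problem_list_completeness(patients, threshold=7.0, code="diabetes"):
    """Proportion of patients with an HbA1c >= threshold whose problem
    list contains the expected code -- the paper's completeness measure."""
    eligible = [p for p in patients if p.max_hba1c >= threshold]
    if not eligible:
        return None
    documented = sum(1 for p in eligible if code in p.problem_list)
    return documented / len(eligible)

# Toy cohort: 4 patients meet the HbA1c screen; 3 have diabetes coded.
cohort = [
    Patient(8.1, {"diabetes"}),
    Patient(7.4, {"diabetes", "hypertension"}),
    Patient(9.0, set()),            # gap: elevated HbA1c, no diagnosis coded
    Patient(7.2, {"diabetes"}),
    Patient(5.6, set()),            # below threshold: excluded from denominator
]
print(problem_list_completeness(cohort))  # 0.75
```

Note that, as discussed above, patients below the threshold simply drop out of the denominator; the measure says nothing about their problem lists.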

2.2 Interviews at top performing sites

Second, we conducted two-part interviews with informatics leaders or EHR users at the top performing sites to (1) learn more about their problem lists and how they are maintained and (2) ask what they felt contributed to their relative success in maintaining them, at least for diabetes. To learn more about the sites’ problem lists, we asked the following:

  1. Who is responsible for keeping the problem list up to date at your organization?

  2. Is your problem list manually maintained or derived from billing data?

  3. Do you have any policies / guidelines about problem list ownership / use?

  4. Do you use a standard terminology to code your problem list? If so, which one?

  5. Do you allow free text problems?

  6. Who is your EMR vendor (if any)?

  7. Have you received your meaningful use stage 1 incentive payment? If so, did you qualify under Medicare or Medicaid rules?

The interviews were conducted by phone or email depending on the availability of the interviewee. We used an open-ended approach to learn about how the sites assessed their use of the problem list. We asked each site about three key areas: cultural attitudes about the problem list, incentives for maintaining a complete problem list, if any, and tools and practices employed to enhance problem list completeness.

2.3 Survey of all study sites regarding best practices

Finally, we categorized the facilitators of success implemented across the high performing sites into six categories: financial incentives, problem-oriented charting, gap reporting, shared responsibility, links to billing codes, and organizational culture. To determine whether these practices were specific to the high performing sites, we surveyed informatics leaders and EHR users at all ten study sites via email about whether their healthcare facility employs practices that fall into any of the six categories of success facilitators.

Each site that participated in the study agreed to have the results of the retrospective analysis and their identity published, but without linkage between the two; therefore, results are presented in de-identified form. The study was reviewed and approved by the Partners HealthCare Human Subjects Committee. Most sites relied on this approval; however, one site also submitted and received approval from its local Institutional Review Board.

3. Results

A total of ten sites in the US, UK, and Argentina participated in this study. The characteristics of the participating sites are presented in Table 1. Seven of the sites were in the US (three in the West, two in the Northeast, one in the Southwest and one in the Midwest), two in the United Kingdom, and one in Argentina. Four sites used locally-developed EHRs and the remainder used a variety of vendor systems. The sites included small practices, community hospitals, regional health systems, and academic medical centers.

The results of the retrospective analysis are presented in Table 2. To preserve anonymity, the sites are sorted in decreasing order of problem list completeness. The range of problem list completeness extended from a low of 60.2% to a high of 99.4%. The top-performing site (which also saw the fewest patients) stood out with only two patients whose HbA1c exceeded the threshold of 7.0% who did not have diabetes on their problem list.

Table 2.

Diabetes problem list completeness

Site | Patients with at least 1 HbA1c >= 7.0% | Patients with at least 1 HbA1c >= 7.0% AND diabetes on problem list, N (%)
1 | 330 | 328 (99.4%)
2 | 33,688 | 32,264 (95.8%)
3 | 11,290 | 10,346 (91.6%)
4 | 9,585 | 8,319 (86.8%)
5 | 3,503 | 2,831 (80.8%)
6 | 7,337 | 5,880 (80.1%)
7 | 50,022 | 37,593 (75.2%)
8 | 32,135 | 20,340 (63.3%)
9 | 2,001 | 1,220 (61.0%)
10 | 10,450 | 6,290 (60.2%)
Total | 160,341 | 125,411 (78.2%*)
* 78.2% is a weighted average of the completeness across the sites, weighting sites with more patients with high HbA1c’s more heavily. The simple average across sites is 79.4%.
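As a worked example, both averages can be recomputed directly from the counts reported in Table 2:

```python
# Per-site (eligible patients, patients with diabetes coded), from Table 2.
sites = [(330, 328), (33688, 32264), (11290, 10346), (9585, 8319),
         (3503, 2831), (7337, 5880), (50022, 37593), (32135, 20340),
         (2001, 1220), (10450, 6290)]

# Weighted average: pool all patients, so larger sites count proportionally more.
weighted = sum(d for _, d in sites) / sum(n for n, _ in sites)

# Simple average: mean of the ten site-level rates, each site counted once.
simple = sum(d / n for n, d in sites) / len(sites)

print(f"{weighted:.1%}")  # 78.2%
print(f"{simple:.1%}")    # 79.4%
```

The gap between the two reflects the fact that several of the larger sites fell below the sample mean.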

We drew a line between the top four sites (completeness of 99.4%, 95.8%, 91.6% and 86.8%) and the other six (completeness ranging from 60.2% to 80.8%), reasoning that completeness at or near 90% represented high performance relative to our sample. Four different EHR systems were used by the top sites: two self-developed systems and two different commercial systems, suggesting that EHR software selection, alone, was not responsible for the differences. We interviewed informatics leaders and EHR users at these top performing sites to learn more about their EHRs, and to better understand the facilitators of success with regard to their problem list practices. The success factors that recurred at most of the top performing sites were aggregated into six categories:

  1. Financial incentives: Two of the four top-performing sites had financial incentives related to problem list completeness. In one case, the site had a program for chronic diseases, including diabetes. The responder explained, “Our [pay for performance program] effectively incentivizes us to keep accurate problem lists, especially for major morbidities. This is because quality payments are partly driven by the number of patients on any particular morbidity register, e.g. diabetes, hypertension etc. We are not really incentivized to keep problem lists for more minor morbidities, but because [pay for performance] covers quite a lot of morbidities it is easier to record morbidities for everything.” The other system had financial contracts that featured risk adjustments based on the chronic diseases a patient had, meaning that greater reimbursement and, in turn, potential physician bonuses, depended on complete documentation of problems, including diabetes.

  2. Problem-oriented charting: The top-performing site used a mandatory version of problem-oriented charting, with the respondent explaining, “The way in which the electronic records are structured means that we are encouraged to record each of the problems the patient presents with before recording history, examination, medications, investigations, formulation etc.” This system provides a strong forcing function to record problems, including diabetes, because otherwise there is no place to enter documentation.

  3. Gap reporting: Two of the four sites generated regular reports of patients who appeared to have various chronic conditions, including diabetes, but did not have the condition on their problem list, and shared these reports with providers. These reports, which one site called “gap lists,” could then be used to update patient problem lists.

  4. Shared responsibility: Most sites depended entirely on physicians to maintain the problem list. However, two of the top four sites also had care managers update the problem lists. For example, if a patient is followed by a diabetes care management program, the care manager would ensure that diabetes appeared on his or her problem list. One of the sites also generates reports of patients potentially eligible for care management programs, including patients with high HbA1c scores, combining both the gap reporting and shared responsibility practices.

  5. Links to billing codes: Most sites separate the problem list from encounter-based diagnosis coding for billing; however, one of the top sites automatically feeds billing diagnoses to the problem list. This results in a high rate of problem list completeness, as clinicians usually remember to bill patients for diabetes, even if they might not otherwise add it to the problem list. One drawback of this approach is that, if a patient is billed for multiple related ICD-9 codes (e.g. “Diabetes mellitus”, “Diabetes mellitus without mention of complication” and “Diabetes mellitus without mention of complication, type II or unspecified type, uncontrolled”) over several visits, the problem list can become cluttered with near duplicate terms.

  6. Organizational culture: A final and harder-to-characterize practice reported at several of the top sites was simply an organizational culture or practice of assiduous use of the problem list within and across groups. In these organizations, use of the problem list was simply expected, and widely practiced. Moreover, at these sites, both primary care providers and specialists considered themselves to have shared responsibility for problem list maintenance. We observed a similar phenomenon in a prior ethnographic study of problem list usage at BWH, where certain practices and specialties had a culture of problem list usage, often due to leadership or peer expectations (21), and others did not.
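Of these practices, gap reporting lends itself most directly to automation. A minimal sketch of a diabetes gap list follows, using hypothetical data structures rather than any site's actual warehouse schema:

```python
def gap_report(patients, threshold=7.0, code="diabetes"):
    """Return IDs of patients whose lab history suggests diabetes but whose
    problem list lacks the diagnosis -- a simple 'gap list'."""
    return [pid for pid, (max_a1c, problems) in patients.items()
            if max_a1c >= threshold and code not in problems]

# Toy registry: patient id -> (highest HbA1c on file, set of coded problems).
registry = {
    "p1": (8.2, {"diabetes"}),
    "p2": (7.5, set()),             # gap: elevated HbA1c, nothing coded
    "p3": (6.1, set()),             # below threshold, not flagged
    "p4": (9.3, {"hypertension"}),  # gap: elevated HbA1c, diabetes not coded
}
print(gap_report(registry))  # ['p2', 'p4']
```

In practice, a report like this would be distributed to the responsible providers (or, as one respondent suggested, surfaced through CDS at the point of care) for verification before any problem list update.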

After identifying these categories of success facilitators, we then surveyed each study site, regardless of performance, about their use of each practice. The results of this survey are shown in Table 3. In general, the four top-performing sites made more extensive use of the best practices than the lower-performing sites. However, Site 3 stood out – although they had a high degree of problem list completeness, they only used two of the best practices. Our interviews identified that Site 3, in particular, had a very strong link between billing data and the problem list. In fact, they automatically push all clinician-entered billing diagnoses onto the problem list. In our interview with them they highlighted that, while this creates a problem list with high sensitivity, it can also lead to clutter (e.g. if several diabetes-related problems are on the problem list) or inaccuracy (if an errant billing diagnosis makes it to the problem list). The other sites that used billing linkages required either that clinicians manually “promote” billing diagnoses to the clinical problem list, or at least that they verify proposed promotions. Several of the sites provided interesting open-ended responses to the survey; highlights are presented in Table 4.

Table 3.

Best practices survey responses by site

Practice | Site 1 | Site 2 | Site 3 | Site 4 | Site 5 | Site 6 | Site 7 | Site 8 | Site 9 | Site 10
Financial Incentives | Yes | No | No | Partial | No | No | Yes | Partial | No | No
Problem-Oriented Charting | Yes | Partial | Yes | Yes | Yes | No | No | No | Optional | Yes
Gap Reporting | Partial | Yes | No | Yes | No | No | No | No | No | No
Shared Responsibility | Yes | Yes | No | Yes | No | Yes | Yes | No | No | No
Links to Billing | Yes | No | Yes | No | No | Yes | No | No | Yes | No
Culture | Can’t say | Yes | No | Yes | No | Yes | No | No | Improving | No
Partial: Occurs only to a minor extent or only in certain practices at the site
Optional: This feature is available, but not mandatory or used across the entire site
Improving: The culture is continuously evolving and improving
Can’t say: The responder did not feel that s/he had enough information to generalize

Table 4.

Representative quotes regarding problem list best practices

Best Practice Category Respondent Response

Financial Incentives
  • Our [pay for performance program] does effectively incentivize us to keep accurate problem lists, especially for major morbidities. This is because quality payments are partly driven by the number of patients on any particular morbidity register, e.g. diabetes, hypertension etc. We are not really incentivized to keep problem lists for more minor morbidities, but because [pay for performance] covers quite a lot of morbidities it is easier to record morbidities for everything.

  • Only one [of our medical services] has an incentive program to improve medical records, and patient problem list is within the [program] to evaluate

Problem-Oriented Charting
  • The way in which the electronic records are structured means that we are encouraged to record each of the problems [that] the patient presents with before then recording history, examination, medications, investigations, …. So, I would agree that entering problems [in the] computer definitely drives/supports documentation.

  • Available but not used

Gap Reporting
  • Yes, especially for chronic conditions like diabetes and hypertension

  • A good idea but I’m not aware we’ve done it to date – We’d probably do it with CDS rather than reports… just to hit the MD at the time they’re working with the patient.

Shared responsibility
  • In [primary care] other people do contribute to the problem list and this would most commonly be practice nurses. Also, most practices have a system for ensuring that new hospital-generated diagnoses are recorded in the [physician] record, e.g. by having records clerks record this information.

  • I’m not aware that we’ve done this locally.

Links to Billing
  • Billing diagnoses entered by the MD are treated as problems. I’m not aware that we add [diagnoses identified by chart abstracts for billing purposes] to what we consider a clinician-maintained problem list.

  • NO. We don’t automatically add billing diagnoses to the patient’s problem list. The problem list is typically used to create the billing diagnosis on the encounter form.

Culture
  • Yes, since the beginning of the implementation clinicians were trained to use the problem list. There are also reports that are generated from the problem list, so when a patient is not included in the report they know the cause

  • This has been evolving; a relative push was made in 2010 by providing a richer more organized problem list (by system, problem details) and encouraging [problem-based charting] through announcements etc. Also, we have our internal 10 commandments of effective use of EHR and other benchmarks which emphasize maintaining a clean and accurate problem list.

  • I wouldn’t classify the average clinician as demonstrating a culture of assiduousness. Problems are gradually added, but have traditionally been much less likely to come off the list. So, there’s a tendency for “old patients” to have problems on their problem list that were long since resolved. There is an important small minority of primary care givers, however, who do feel strongly about an up to date problem list.

4. Discussion

One important struggle providers face in using EHRs is maintaining a complete and up-to-date problem list; incomplete lists compromise data integrity and create the potential for degraded quality and safety. Based on our study, we conclude that problem list incompleteness is widespread – only three of ten sites had greater than 90% problem list completeness, even for the straightforward diagnosis we studied. There were significant differences among the sites, with performance ranging from 60.2% to 99.4%, suggesting that many sites have substantial room for improvement in the completeness and accuracy of their problem lists, perhaps using the success factors we have identified.

Clinical problem list gaps are a key example of data integrity issues; however, many other areas of the health record, including the allergy list, medication list, family and social history information and patient demographics can also be out of date or incorrect. Further research should explore these other areas of potential compromise.

A further question for future research is whether the success factors we identified represent best practices. Although we believe that most of them are best practices, further study is needed around three issues. First, some of the success factors, such as gap reports, are readily translated to other organizations, but others, especially organizational culture, are less portable. In addition, organizational culture related to EHR usage and safety is difficult to measure (26); several responders were unable to comment on the problem list use culture at their sites, or explained that it is “evolving” or that “there are a handful of real enthusiasts about coding in the hospital - but many who are not.” Further study is needed to determine how a culture of problem list usage can be replicated. The other four success factors (problem-oriented charting, gap reporting, shared responsibility and links to billing codes) were not widely used outside of the top four but, likewise, no single success factor was employed by all sites in the top four. Second, our current study does not give us a robust method or measure of the necessity or sufficiency of each success factor, and some may be more effective than others. Indeed, several sites not in the top four also implemented some of the same practices as the top performing sites, such as risk adjustment and programs tied to the problem list, yet did not achieve such high levels of performance. Third, some of the success factors, most notably tying the problem list to billing codes, may have unintended consequences such as problem list clutter. Despite these caveats, however, we believe that organizations looking to improve problem list completeness should consider implementing some of these practices.

4.1 Limitations

Our study has several limitations. First, although our sample was relatively large and diverse, it was not necessarily representative. The ten healthcare facilities in this study all had EHR systems, the ability to query them (this is not universal), and a desire to collaborate on this study – as such, they may be further along in system maturity than sites without these abilities and interests. Moreover, self-developed EHR systems were overrepresented in our sample. Though this is not, in itself, a limitation, it does bear on the generalizability of our findings, and there may still be success factors (or barriers) that we did not uncover. Second, we were not able to quantitatively measure the impact of each success factor; our sample and methodology were designed to be preliminary and hypothesis-generating rather than definitive. A much larger-scale retrospective analysis, in the spirit of the analysis presented in Table 3, would be needed to quantify the impacts of the success factors and translate them into robust best practices. Third, our study focused on a single disease and a single diagnostic criterion – future studies might consider additional diseases. However, identifying diseases with even a single, measurable, unambiguous diagnostic criterion, such as HbA1c for diabetes, is difficult, so studies that tried to measure multiple diseases might have to use chart review to establish gold standard diagnoses (as we did in a prior study (18)), the expense of which might necessarily limit the number of sites that could be included.

4.2 Conclusion

Incomplete problem lists are a widespread issue and threaten patient safety. Given the importance of problem lists, particularly in an era where CDS, quality measurement, health services research, and risk adjustment (as in accountable care organizations) are becoming commonplace, additional steps should be taken to improve problem list completeness. This study identified several success factors associated with problem list completeness through a review of problem list information and interviews with informatics leaders and EHR users. Practices most common among the top performing sites included financial incentives, problem-oriented charting, gap reporting, shared responsibility, links to billing codes, and organizational culture. Organizations seeking to improve their problem lists should consider adopting such practices, and additional study is needed to further establish their relative effectiveness.

SUMMARY TABLE.

What was already known on the topic:
  • Clinical problem lists are important for patient care, but are sometimes incomplete

  • Social and organization issues affect problem list completeness

What this study added to our knowledge:
  • Problem list completeness varied widely, ranging from 60.2% to 99.4% at the ten healthcare facilities we studied, with an average of 78.2%

  • Success factors associated with better problem list completeness include financial incentives, problem-oriented charting, gap reporting, shared responsibility, links to billing codes, and organizational culture

Highlights.

  • We studied problem list completeness for diabetes at ten sites using mixed methods.

  • Problem list completeness across the sites varied substantially, from 60.2% to 99.4%.

  • Six success factors for problem list completeness were identified from four top performing sites.

  • All ten sites were surveyed about their use of these success factors.

Acknowledgments

We are grateful for the assistance of Anthony J Avery, MD of the University of Nottingham and Linas Simonaitis, MD, MS of the Richard L. Roudebush Veterans Affairs Medical Center in Indianapolis, IN (and formerly of the Regenstrief Institute) who provided data for use in our study, as well as helpful input on our project.

Research reported in this publication was supported by the National Heart, Lung, And Blood Institute of the National Institutes of Health under Award Number R01HL122225.

Footnotes

Prior Presentation: Neither this manuscript, nor the data it contains, has been previously presented at any meetings, or in any other forum.

CONFLICTS OF INTEREST

The authors have no conflicts of interest or competing interests to declare.

The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

AUTHORS’ CONTRIBUTIONS

The contributions of the authors are:

Wright: conception and design; acquisition, analysis, and interpretation of data; drafting of the manuscript; statistical analysis; supervision

McCoy, Hickman, Borbolla, Bowes, Dixon, Dorr, Krall, Malholtra: acquisition, analysis, and interpretation of data; critical revision of the manuscript for important intellectual content

St. Hilaire: acquisition, analysis, and interpretation of data; critical revision of the manuscript for important intellectual content; administrative support

Bates and Sittig: acquisition, analysis, and interpretation of data; critical revision of the manuscript for important intellectual content; supervision

Publisher's Disclaimer: This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final citable form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.

References

  • 1.Weed LL. Medical records that guide and teach. N Engl J Med. 1968 Mar 21;278(12):652–7 concl. doi: 10.1056/NEJM196803212781204.
  • 2.Wright A, McGlinchey EA, Poon EG, Jenter CA, Bates DW, Simon SR. Ability to generate patient registries among practices with and without electronic health records. J Med Internet Res. 2009;11(3):e31. doi: 10.2196/jmir.1166.
  • 3.Schmittdiel J, Bodenheimer T, Solomon NA, Gillies RR, Shortell SM. Brief report: The prevalence and use of chronic disease registries in physician organizations. A national survey. J Gen Intern Med. 2005 Sep;20(9):855–8. doi: 10.1111/j.1525-1497.2005.0171.x.
  • 4.Grant RW, Cagliero E, Sullivan CM, et al. A controlled trial of population management: diabetes mellitus: putting evidence into practice (DM-PEP). Diabetes Care. 2004 Oct;27(10):2299–305. doi: 10.2337/diacare.27.10.2299.
  • 5.Wright A, Goldberg H, Hongsermeier T, Middleton B. A description and functional taxonomy of rule-based decision support content at a large integrated delivery network. J Am Med Inform Assoc. 2007 Jul-Aug;14(4):489–96. doi: 10.1197/jamia.M2364.
  • 6.Wright A, Sittig DF, Ash JS, Sharma S, Pang JE, Middleton B. Clinical decision support capabilities of commercially-available clinical information systems. J Am Med Inform Assoc. 2009 Sep-Oct;16(5):637–44. doi: 10.1197/jamia.M3111.
  • 7.Eccles M, McColl E, Steen N, et al. Effect of computerised evidence based guidelines on management of asthma and angina in adults in primary care: cluster randomised controlled trial. BMJ. 2002 Oct 26;325(7370):941. doi: 10.1136/bmj.325.7370.941.
  • 8.Filippi A, Sabatini A, Badioli L, et al. Effects of an automated electronic reminder in changing the antiplatelet drug-prescribing behavior among Italian general practitioners in diabetic patients: an intervention trial. Diabetes Care. 2003 May;26(5):1497–500. doi: 10.2337/diacare.26.5.1497.
  • 9.Hicks LS, Sequist TD, Ayanian JZ, et al. Impact of computerized decision support on blood pressure management and control: a randomized controlled trial. J Gen Intern Med. 2008 Apr;23(4):429–41. doi: 10.1007/s11606-007-0403-1.
  • 10.Sequist TD, Gandhi TK, Karson AS, et al. A randomized trial of electronic clinical reminders to improve quality of care for diabetes and coronary artery disease. J Am Med Inform Assoc. 2005 Jul-Aug;12(4):431–7. doi: 10.1197/jamia.M1788.
  • 11.van Wyk JT, van Wijk MA, Sturkenboom MC, Mosseveld M, Moorman PW, van der Lei J. Electronic alerts versus on-demand decision support to improve dyslipidemia treatment: a cluster randomized controlled trial. Circulation. 2008 Jan 22;117(3):371–8. doi: 10.1161/CIRCULATIONAHA.107.697201.
  • 12.Wright A, Feblowitz J, Maloney F, et al. Increasing patient engagement: patients’ responses to viewing problem lists online. Appl Clin Inform. 2014. doi: 10.4338/ACI-2014-07-RA-0057. Under review.
  • 13.Hartung DM, Hunt J, Siemienczuk J, Miller H, Touchette DR. Clinical implications of an accurate problem list on heart failure treatment. J Gen Intern Med. 2005 Feb;20(2):143–7. doi: 10.1111/j.1525-1497.2005.40206.x.
  • 14.National Institute of Standards and Technology. Test procedure for §170.302(c): Maintain up-to-date problem list. 2010 [cited 2014 July 12]; Available from: http://healthcare.nist.gov/docs/170.302.c_problemlist_v1.1.pdf.
  • 15.Blumenthal D, Tavenner M. The “meaningful use” regulation for electronic health records. N Engl J Med. 2010 Aug 5;363(6):501–4. doi: 10.1056/NEJMp1006114.
  • 16.Centers for Medicare & Medicaid Services, HHS. Medicare and Medicaid programs; electronic health record incentive program--stage 2. Final rule. Federal Register. 2012 Sep 4;77(171):53967–4162.
  • 17.Centers for Medicare and Medicaid Services. Medicare and Medicaid programs. Electronic health record incentive program--stage 3. 2015 [cited 2015 Apr 1]; Available from: https://www.federalregister.gov/articles/2015/03/30/2015-06685/medicare-and-medicaid-programs-electronic-health-record-incentive-program-stage-3.
  • 18.Wright A, Pang J, Feblowitz JC, et al. A method and knowledge base for automated inference of patient problems from structured data in an electronic medical record. J Am Med Inform Assoc. 2011 Nov-Dec;18(6):859–67. doi: 10.1136/amiajnl-2011-000121.
  • 19.Szeto HC, Coleman RK, Gholami P, Hoffman BB, Goldstein MK. Accuracy of computerized outpatient diagnoses in a Veterans Affairs general medicine clinic. Am J Manag Care. 2002 Jan;8(1):37–43.
  • 20.Kaplan DM. Clear writing, clear thinking and the disappearing art of the problem list. J Hosp Med. 2007 Jul;2(4):199–202. doi: 10.1002/jhm.242.
  • 21.Wright A, Maloney FL, Feblowitz JC. Clinician attitudes toward and use of electronic problem lists: a thematic analysis. BMC Med Inform Decis Mak. 2011;11:36. doi: 10.1186/1472-6947-11-36.
  • 22.Bradley EH, Curry LA, Ramanadhan S, Rowe L, Nembhard IM, Krumholz HM. Research in action: using positive deviance to improve quality of health care. Implement Sci. 2009;4:25. doi: 10.1186/1748-5908-4-25.
  • 23.Marra AR, Guastelli LR, de Araujo CM, et al. Positive deviance: a new strategy for improving hand hygiene compliance. Infect Control Hosp Epidemiol. 2010 Jan;31(1):12–20. doi: 10.1086/649224.
  • 24.American Diabetes Association. Diagnosis and classification of diabetes mellitus. Diabetes Care. 2010 Jan;33(Suppl 1):S62–9. doi: 10.2337/dc10-S062.
  • 25.Hsiao C-J, Hing E. Use and characteristics of electronic health record systems among office-based physician practices: United States, 2001–2013. Hyattsville, MD: U.S. Department of Health and Human Services, Centers for Disease Control and Prevention, National Center for Health Statistics; 2014.
  • 26.Sittig DF, Menon S, Thomas EJ, Singh H, Etchegaray J. Measuring electronic health record-related patient safety culture. Proceedings of Context Sensitive Health Informatics; Curitiba, Brazil. 2015.
