THE IMPORTANCE OF DIAGNOSTIC ERROR
One of the primary tasks of the GP is the diagnosis of patients presenting with new symptoms. This is the bedrock on which patient care is founded, particularly in health systems such as the UK NHS, where the GP acts as a ‘gatekeeper’ to specialist services. Diagnostic error has been defined as ‘a missed opportunity to make a timely or correct diagnosis based on the available evidence’.1 Over half of litigation claims against GPs are for failure to diagnose. Significant delays have been reported in the diagnosis of common cancers and in conditions such as coeliac disease.2 Increasing use of standard pathways of care to improve speed of diagnosis, particularly in cancer, means that making a correct initial assessment of the patient is even more important.3 When we factor in the increasing demands on GPs’ time and workload due to, for example, increasing multimorbidity in older patients, and the multitude of common ‘alternative’ explanations for symptoms,4 it is clear that we need as much support as possible from technology to provide good-quality and safe patient care.5
DATA ANALYTICS AND THE LEARNING HEALTH SYSTEM
The world outside health care has changed dramatically over the past decade with the development of complex interconnected data systems, large volumes of data, and methods to produce knowledge from those data, known as the ‘Big Data revolution’. Data-mining methods based on Bayesian networks and alternative inference processes such as deep data mining are a common part of marketing, social media, and politics. However, these new approaches have been slow to find application in health care. A combination of large volumes of high-quality routine data, new analytical methods, and knowledge translation built into routine practice can create a ‘learning cycle’. The Learning Health System was first proposed in 2007 and was taken up by the US National Academy of Medicine the following year.6 An open ‘ecosystem’ that supports the creation, maintenance, and use of diagnostic knowledge at the point of care is now potentially achievable, but barriers remain.
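To give a flavour of these methods, the sketch below (Python, with entirely hypothetical toy figures; none of the diagnoses, symptoms, or probabilities come from real data) shows how a naive Bayes model, one of the simplest members of the Bayesian network family, could rank candidate diagnoses from symptoms coded in routine records.

```python
# A minimal naive Bayes sketch: ranking candidate diagnoses by
# P(diagnosis | symptoms), using priors and likelihoods that, in a learning
# health system, would be mined from routinely coded consultations.
# All figures here are invented for illustration only.

from collections import Counter

diagnosis_counts = Counter({"viral URTI": 900, "pneumonia": 60, "lung cancer": 5})
symptom_given_dx = {
    "viral URTI":  {"cough": 0.70, "haemoptysis": 0.01, "weight loss": 0.02},
    "pneumonia":   {"cough": 0.85, "haemoptysis": 0.05, "weight loss": 0.10},
    "lung cancer": {"cough": 0.65, "haemoptysis": 0.20, "weight loss": 0.40},
}

def rank_differential(presenting_symptoms):
    """Return (diagnosis, posterior) pairs, most probable first."""
    total = sum(diagnosis_counts.values())
    scores = {}
    for dx, n in diagnosis_counts.items():
        p = n / total                                # prior from prevalence
        for s in presenting_symptoms:
            p *= symptom_given_dx[dx].get(s, 0.001)  # likelihood of each symptom
        scores[dx] = p
    norm = sum(scores.values())
    return sorted(((dx, p / norm) for dx, p in scores.items()),
                  key=lambda item: item[1], reverse=True)

print(rank_differential(["cough", "haemoptysis"]))
```

A real system would model dependencies between features and be derived and validated on large routine datasets; the point is that the knowledge driving such a tool is mined from data rather than hand-coded.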
Decision support systems (DSS), that is, algorithm- and data-driven systems providing individualised alerts and reminders, have succeeded in improving care in areas such as prescribing and preventive medical interventions. However, they have so far failed to have much impact on diagnosis, for three main reasons:7
a lack of consideration of how to integrate the DSS in the cognitive workflow;
a lack of integration of the DSS with the electronic health record (EHR); and
a lack of diagnostic evidence on which to base the DSS.
SUPPORTING THE COGNITIVE TASK OF DIAGNOSIS
Given that significant advances have taken place in all three of these areas, now is perhaps the time to reappraise the role of DSS in diagnosis. At present, diagnosis is an essentially unaided task, relying on the clinician’s knowledge, memory, and a presumed ability to conduct an unbiased assessment of the patient’s problem. As knowledge advances, we cannot expect clinicians to have all the necessary information ‘at their fingertips’, especially for anything other than the most common complaints. Memory is notoriously fallible, and unbiased assessment of the presenting symptoms and signs is not a characteristic of human judgement. Perhaps the primary problem is that clinicians do not recognise a need for diagnostic support, relying on their own acumen to deliver the correct diagnosis. This over-confidence may be due to practising in a ‘wicked learning environment’, where feedback is incomplete and delayed, and where confounders prevent outcomes being attributed to actions.8 DSS should therefore not be something that is ‘called on when needed’, but should operate seamlessly in the background. On this premise, one can envisage two sorts of support: ‘automatic differential diagnosis generator’ suggestions at the very start of a consultation and ‘warning of a potential error’ alerts at the end. There is evidence that only the ‘early’ support is effective.9 Furthermore, there is a strong link between GPs’ initial diagnostic impressions and their subsequent diagnosis and management.10 A recent study has shown that an ‘early support’ differential diagnosis generator, embedded in an EHR system, can improve diagnosis by GPs consulting with actors in a high-fidelity simulation.11
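The distinction between the two sorts of support can be sketched as follows (Python; the hook names and the toy scoring function are hypothetical illustrations of the workflow, not any existing EHR interface): ‘early’ support offers a differential as soon as the reason for encounter is coded, whereas ‘late’ support only warns at the close of the consultation.

```python
# Hypothetical sketch of 'early' versus 'late' diagnostic support.
# Hook names and the toy ranking function are illustrative only.

def rank_differential(rfe_codes):
    """Stand-in for an evidence-based ranking of diagnoses given the coded
    reason for encounter (RfE); see the naive Bayes sketch above."""
    toy = {"rectal bleeding": [("haemorrhoids", 0.60), ("colorectal cancer", 0.08)]}
    return toy.get(rfe_codes[0], [("no suggestion", 0.0)])

def on_reason_for_encounter(rfe_codes):
    """'Early' support: suggest a differential as soon as the RfE is coded,
    before the clinician commits to a working diagnosis."""
    for dx, p in rank_differential(rfe_codes)[:5]:
        print(f"Consider: {dx} (p = {p:.2f})")

def on_consultation_close(rfe_codes, working_diagnosis):
    """'Late' support: warn only when the recorded diagnosis is a poor fit
    for the presenting features; the evidence suggests this is less effective."""
    ranked = dict(rank_differential(rfe_codes))
    if ranked.get(working_diagnosis, 0.0) < 0.05:
        print(f"Warning: '{working_diagnosis}' is an unusual fit for the coded RfE")

on_reason_for_encounter(["rectal bleeding"])
on_consultation_close(["rectal bleeding"], "irritable bowel syndrome")
```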
THE ROLE OF THE ELECTRONIC HEALTH RECORD IN GENERATING DIAGNOSTIC EVIDENCE
Commercial differential diagnosis generators, such as Isabel and DXplain, and risk tools, also known as clinical prediction rules (CPRs), already exist. If we are to integrate such tools seamlessly into the consultation, there are two important technical prerequisites. The first is that the EHR system can trigger a ‘diagnosis interface’. The second is that the clinician enters data in the EHR according to Lawrence Weed’s 1969 Episode of Care Model, which separates the presenting problem from the diagnosis.12 Without this, it is difficult to carry out the data mining needed to develop the CPRs that generate lists of differential diagnoses. Only one of the currently available primary care EHR systems separates the ‘reason for encounter’ (RfE) from the ‘working diagnosis’, and, even then, this functionality is rarely used. The clear majority of consultations are coded (if at all) under a ‘problem’ heading, and a ‘problem’ may be a reason for encounter or a presumed diagnosis, depending on the practice or whim of the GP. Structuring the record as an RfE followed by diagnostic labels that evolve as the patient is investigated or the symptoms change over time avoids the problem of not knowing which was the index consultation, and of having to discard up to 2 years of data to be certain that a recorded symptom predated the diagnosis.13 This problem bedevils the creation of CPRs from routine data, and we may well be missing the most diagnostic symptoms that patients experience before diagnosis. GPs also code very little in the way of symptoms, compared with what they write as free text, and are more likely to code features already known to be associated with their leading diagnosis,14 resulting in unreliable and incomplete records.15 It is possible that natural language processing of free text may overcome some of this, but the issues of the RfE and the index consultation are likely to remain.
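A minimal sketch of what such structuring might look like is given below (Python; the field names and the example episode are hypothetical, and a real system would use ICPC or SNOMED CT codes rather than free-text labels). The point is simply that keeping the RfE separate from the evolving diagnostic label makes the index consultation identifiable, so the symptoms recorded there can safely be used to derive CPRs.

```python
# Hypothetical episode-of-care structure separating the reason for encounter
# (RfE) from the working diagnosis recorded at each contact. Field names and
# the example data are illustrative only.

from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class Encounter:
    when: date
    reason_for_encounter: List[str]           # what the patient presented with
    working_diagnosis: Optional[str] = None   # the label assigned at this visit

@dataclass
class Episode:
    encounters: List[Encounter] = field(default_factory=list)

    def index_consultation(self) -> Encounter:
        """First contact in the episode: symptoms coded here are guaranteed to
        predate any later diagnosis, so they can be mined for CPRs without
        discarding years of preceding data."""
        return self.encounters[0]

episode = Episode([
    Encounter(date(2017, 1, 10), ["rectal bleeding", "tiredness"], "haemorrhoids"),
    Encounter(date(2017, 3, 2), ["rectal bleeding", "weight loss"], "?colorectal cancer"),
    Encounter(date(2017, 4, 20), ["post-investigation review"], "colorectal cancer"),
])
print(episode.index_consultation().reason_for_encounter)
```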
In a striking example of the way technology is moving, the well-validated visual diagnostic tool for dermatology, VisualDx (https://www.visualdx.com/), will soon incorporate Apple’s iOS 11 Core ML. This is a machine learning capability built into the iPhone, designed to support tasks such as face recognition and the recognition of foods for diet apps. It is now being harnessed to provide differential diagnoses and explanations based on a picture of a skin lesion taken live in the surgery. We cannot yet do this for non-visual diagnosis by recording a consultation, but we can place greater emphasis on high-quality data in the EHR. Using the aforementioned ‘early support’ prototype increased coding of symptoms and signs ten-fold.11 CPRs can provide individualised, real-time decision support to drive a DSS, aiming to improve patient outcomes and producing data for a learning system that in turn improves the CPRs. However, we need to be far more receptive to adopting new technologies in practice, even those that challenge our usual way of doing things, and to studying their effect on professional performance and patient experience. Diagnosis will always be the key challenge for GPs but, much like the adoption of digital imaging by radiologists and robotics by surgeons, we are about to see significant changes in the way the EHR is used in the GP consultation.
Provenance
Commissioned; not externally peer reviewed.
REFERENCES
1. Singh H, Meyer AN, Thomas EJ. The frequency of diagnostic errors in outpatient care: estimations from three large observational studies involving US adult populations. BMJ Qual Saf. 2014;23(9):727–731. doi: 10.1136/bmjqs-2013-002627.
2. Kostopoulou O, Delaney BC, Munro CW. Diagnostic difficulty and error in primary care — a systematic review. Fam Pract. 2008;25(6):400–413. doi: 10.1093/fampra/cmn071.
3. Green T, Atkin K, Macleod U. Cancer detection in primary care: insights from general practitioners. Br J Cancer. 2015;112(Suppl 1):S41–S49. doi: 10.1038/bjc.2015.41.
4. Sirota M, Kostopoulou O, Round T, Samaranayaka S. Prevalence and alternative explanations influence cancer diagnosis: an experimental study with physicians. Health Psychol. 2017;36(5):477–485. doi: 10.1037/hea0000461.
5. Mounce LTA, Price S, Valderas JM, Hamilton W. Comorbid conditions delay diagnosis of colorectal cancer: a cohort study using electronic primary care records. Br J Cancer. 2017;116(12):1536–1543. doi: 10.1038/bjc.2017.127.
6. Friedman CP, Wong AK, Blumenthal D. Achieving a nationwide learning health system. Sci Transl Med. 2010;2(57):57cm29. doi: 10.1126/scitranslmed.3001456.
7. Nurek M, Kostopoulou O, Delaney BC, Esmail A. Reducing diagnostic errors in primary care. A systematic meta-review of computerized diagnostic decision support systems by the LINNEAUS collaboration on patient safety in primary care. Eur J Gen Pract. 2015;21(Suppl):8–13. doi: 10.3109/13814788.2015.1043123.
8. Hogarth RM. Educating intuition. Chicago, IL: University of Chicago Press; 2001.
9. Kostopoulou O, Rosen A, Round T, et al. Early diagnostic suggestions improve accuracy of GPs: a randomised controlled trial using computer-simulated patients. Br J Gen Pract. 2015. doi: 10.3399/bjgp15X683161.
10. Kostopoulou O, Sirota M, Round T, et al. The role of physicians’ first impressions in the diagnosis of possible cancers without alarm symptoms. Med Decis Making. 2016;37(1):9–16. doi: 10.1177/0272989X16644563.
11. Kostopoulou O, Porat T, Corrigan D, et al. Diagnostic accuracy of GPs when using an early-intervention decision support system: a high-fidelity simulation. Br J Gen Pract. 2017. doi: 10.3399/bjgp16X688417.
12. Soler JK, Corrigan D, Kazienko P, et al. Evidence-based rules from family practice to inform family practice; the learning healthcare system case study on urinary tract infections. BMC Fam Pract. 2015;16(1):63. doi: 10.1186/s12875-015-0271-4.
13. Hamilton W. The CAPER studies: five case-control studies aimed at identifying and quantifying the risk of cancer in symptomatic primary care patients. Br J Cancer. 2009;101(Suppl):S80–S86. doi: 10.1038/sj.bjc.6605396.
14. Price SJ, Stapley SA, Shephard E, et al. Is omission of free text records a possible source of data loss and bias in Clinical Practice Research Datalink studies? A case-control study. BMJ Open. 2016;6(5):e011664. doi: 10.1136/bmjopen-2016-011664.
15. van der Bij S, Khan N, Ten Veen P, et al. Improving the quality of EHR recording in primary care: a data quality feedback tool. J Am Med Inform Assoc. 2017;24(1):81–87. doi: 10.1093/jamia/ocw054.