Journal of Feline Medicine and Surgery. 2016 May 3; 18(5): 418–426. doi: 10.1177/1098612X16643251

Case-based clinical reasoning in feline medicine

3: Use of heuristics and illness scripts

Martin L Whitehead 1, Paul J Canfield 2, Robert Johnson 3, Carolyn R O’Brien 4, Richard Malik 5
PMCID: PMC11132198  PMID: 27143043

Abstract

Aim:

This is Article 3 of a three-part series on clinical reasoning that encourages practitioners to explore and understand how they think and make case-based decisions. It is hoped that, in the process, they will learn to trust their intuition but, at the same time, put in place safeguards to diminish the impact of bias and misguided logic on their diagnostic decision-making.

Series outline:

Article 1, published in the January 2016 issue of JFMS, discussed the relative merits and shortcomings of System 1 thinking (immediate and unconscious) and System 2 thinking (effortful and analytical). In Article 2, published in the March 2016 issue, ways of managing cognitive error, particularly the negative impact of bias, in making a diagnosis were examined. This final article explores the use of heuristics (mental short cuts) and illness scripts in diagnostic reasoning.

What are heuristics and how do they play a role in diagnostic reasoning?

You may well be thinking, ‘Why do I need to know anything about heuristics?’ Well, take it from us, you already use heuristics! Heuristics are used by everyone, every day, sometimes knowingly but often unknowingly.


We often use heuristics unconsciously in circumstances where a decision needs to be made relatively quickly, whether it be ‘how do I know if this new car (or associate) is right for the practice?’ or ‘how do I manage this fractious cat?’. Sometimes heuristics are more deliberate and consciously used when we have plenty of time to make a clinical decision. Heuristics are ‘rules of thumb’ that prove useful for diagnosis more times than they lead to error. Sometimes they are referred to as ‘fast and frugal’ mental short cuts. 1

Since heuristics appear to be more commonly used in System 1 thinking and less commonly in trained System 2 thinking, they have been maligned as being cognitively undemanding. Moreover, some heuristics are cognitive biases and have the potential to mislead under certain circumstances.

So, if there is some negativity about the use of heuristics, why is it that they survive in decision-making? The answer may well stem from the fact that they are part of an ‘ancient’ form of thinking and are necessary for survival in an adaptive world. 2 That is possibly still true today for many of us. What is irrefutable, however, is that they save time, and the brain has a natural tendency, especially when stressed or tired, to incorporate them into System 1 thinking in order to rapidly reach an approximate or ‘best guess’ answer. Moreover, they can be powerful when used sensibly and knowingly as general-purpose strategies in trained System 2 thinking, with the proviso that any negative bias involved is acknowledged.

The medical profession has recognised, with varying degrees of acceptance, the use of heuristics along with illness scripts in clinical reasoning.3,4



There are many types of heuristics, and they fall into two distinct categories (Table 1). The availability, representativeness and familiarity/recognition heuristics are cognitive biases that are an inherent part of our reasoning processes (especially System 1 thinking); depending on circumstances, they result in correct decisions more often than incorrect decisions. Veterinarians are likely to use this category of heuristics without recognising them when engaging in System 1 thinking. These heuristics will draw on long-term memories concerning patterns of features or even complete generic illness scripts to help decide on a diagnosis. They come into play when simple pattern recognition lets you down because the case data derived from history, signalment, clinical signs and physical findings could match any of several possible diseases. In other words, heuristics in System 1 thinking help when having to make a choice among diseases that have overlapping features.

Table 1. Common heuristics that may be used in clinical decision-making

CATEGORY 1. Heuristics that are cognitive biases and act as inherent parts of clinical reasoning, primarily in System 1 thinking. They particularly come into play when choices have to be made between several diseases that have overlapping features. They can give rise to both correct and incorrect diagnostic decisions, but the former should outweigh the latter.

Availability heuristic
Definition: Estimating the frequencies of events (ie, the probability of occurrence of an illness) on the basis of how easily they can be recalled from memory (past experiences). Care has to be taken to avoid overestimating the commonness of a disease just because it is easy to recall due to some other factor(s).
Good use: Progressive muscle weakness resulting in ascending paralysis in cats seen in your practice on the east coast of Australia is most commonly due to Ixodes holocyclus (Australian paralysis tick), especially when the cat looks anxious, vomits, has difficulty breathing and/or shows a change in its meow. Because you see this so commonly, it is the first thing that springs to mind when the next case of ascending paralysis in a cat is presented. But confirm by searching for an engorged tick or a tick-bite ‘crater’.
Bad use (misleading): There has been much recent publicity about leptospirosis as a zoonotic disease and the risk companion animals may pose to their owners. The next time you see an icteric cat, you immediately think of leptospirosis as a cause of liver disease, even though domestic cats domiciled in the city (or anywhere, for that matter) are most unlikely to have symptomatic leptospirosis.

Representativeness heuristic
Definition: Based on the acceptance of a high probability of commonality between objects of similar appearance. The assumption is that the individual feline patient will always present with (at least some of) the characteristic clinical signs and other features derived from history, signalment and physical findings reported for the stereotype of the disease (ie, will be representative of the disease).
Good use: A young adult cat presents with signs of cystitis that have several possible causes. Idiopathic cystitis is considered on the basis of a combination of case features: the cat is an overweight indoor cat in a multi-cat household, is very anxious, and hides under the bed whenever visitors come. We would possibly not have thought of idiopathic cystitis if the cat had been slim and confident, with free access to outdoors, and the only cat in the household, because the former cat is very representative of the ‘class’ of idiopathic cystitis cats. The latter cat might well have idiopathic cystitis (indeed, idiopathic cystitis is probably the most likely cause of the cystitis signs), but is not representative – in a vet’s mind – of that class.
Bad use (misleading): A 3-year-old cat presents in late spring with appendicular weakness that seems to improve with rest. Because it is tick season and the clinical presentation is fairly typical, you think tick paralysis is the most likely diagnostic possibility (although you cannot find an engorged I holocyclus tick, or even a tick-bite crater). You administer tick antiserum and monitor the cat. Your colleague takes over the case the next day, considers myasthenia gravis as a possibility because a tick was not found (and they checked themselves) and administers edrophonium. The cat shows a positive response (improved muscle strength after intravenous injection) and the diagnosis of myasthenia gravis is confirmed by demonstrating antibodies against the acetylcholine receptor at a specialist neuromuscular laboratory overseas.

Familiarity/recognition heuristic
Definition: Greater emphasis is placed on features or evidence of disease with which you are more familiar (ie, that are more easily recognised). Certain clinical signs or test results therefore have a greater impact on your thinking, which can be good or bad depending on what you ignore. While the familiarity and recognition heuristics are closely related, the latter can be regarded as a more ‘extreme’ form and is more prone to error.
Good use: You are presented with a young cat displaying a cough, fever, loss of appetite, weight loss, reduced breath sounds on auscultation and a bluish tinge to the gums. The fever in combination with respiratory signs (as a pattern, or even a generic illness script) immediately makes you think of the possibility of pyothorax.
Bad use (misleading): This cat also has heart sounds that are displaced caudally and a single large superficial (right) cervical lymph node, but for some reason these features are given less emphasis and are initially ignored. Further investigations demonstrate that the cat has mediastinal lymphoma and a large lymphocyte-rich pleural effusion. With the benefit of hindsight, the significance of the cardiac displacement is now obvious.

CATEGORY 2. Heuristics that are not cognitive biases; they act as general-purpose strategies for approaching complex or difficult diagnoses. They tend to operate in System 2 thinking, except for the anchoring heuristic, which also often acts in System 1 settings.

Anchoring and adjustment heuristic
Definition: The clinician starts with an implicitly suggested reference point (the ‘anchor’) and makes adjustments to it to reach a decision about diagnosis. The anchor may mislead if it is affected by bias or if a piece of information (a premise) was plainly wrong from the start (‘urban myth’). This heuristic is used in both System 1 and System 2 thinking.
Good use: A cat is presented with dyspnoea. You begin your investigation by employing the anchor that dyspnoea in cats is most commonly due to cardiorespiratory disease, and is often caused by a pleural effusion. Diagnostic testing will therefore likely involve ultrasound of the chest, echocardiography and thoracic radiography (plus thoracocentesis if fluid is present). In this instance, ultrasonography reveals a cranial mediastinal mass and a pleural effusion. With the finding of the mass you ‘adjust’ from your anchor and assume that this mass lesion is more important diagnostically. A chest tap produces red-tinged fluid, which cytologically shows a uniform population of large lymphoblasts, consistent with mediastinal lymphoma.
Bad use (misleading): You are presented with a young cat with dyspnoea and subtle stertor. You begin your investigation by employing the anchor that overt dyspnoea in cats is most commonly due to cardiorespiratory disease, and is often caused by a pleural effusion. Diagnostic testing will therefore likely involve ultrasound of the chest, echocardiography and thoracic radiography. All these investigations are unremarkable. You then read that stertor is often a sign of nasopharyngeal disease. Posterior rhinoscopy demonstrates a mass, which can be dislodged by vigorous antegrade nasal flushing. Histology of the tissue specimen demonstrates large cell lymphoma. In this instance, ‘dyspnoea is commonly due to disease of the chest’ proved to be a false anchor. Dyspnoea plus stertor is suggestive of nasopharyngeal disease; this would have been a better anchor and would not have misled the investigation (although no harm was done, some of the client’s money might have been better spent).

Means-end and hill-climbing heuristics
Definition: In the means-end heuristic a large reasoning problem is divided into smaller ‘subproblems’, in the knowledge (or hope!) that solving all of the smaller problems will result in solving the larger problem. The hill-climbing heuristic differs in that there is greater uncertainty about whether the choice of a smaller subproblem will start you on your way to solving the larger problem. These heuristics are commonly used in System 2 thinking.
Good use (means-end heuristic): A cat is presented with a complex, likely multiorgan, disease just before you are about to finish after a long day at work. Neither the history nor the physical examination provides any useful clues as to the cause of the cat’s illness. You assess that part of the problem is severe dehydration with electrolyte and acid–base derangements. In-house blood tests demonstrate metabolic alkalosis and hypokalaemia. You decide initially to improve hydration and correct these secondary electrolyte changes. The combination of metabolic alkalosis and hypokalaemia is a ‘pattern’ usually associated with loss of gastric acid and potassium through vomiting (often with a fixed obstruction at the level of the pylorus or duodenum), with further loss of potassium into the urine. You plan on performing abdominal ultrasound and contrast radiology; however, the investigation can wait until tomorrow, when the cat’s status has improved sufficiently to cope with imaging studies.
Bad use (misleading) (hill-climbing heuristic): A cat is presented with a complex, likely multiorgan, disease just before you are about to finish after a long day at work. You haven’t a clue what is going on, but notice that the fur is matted under the chin and you decide to clip it. You find what looks like a healing cat-fight abscess which has burst and is draining. You collect blood for FIV and FeLV testing, give the cat an injection of amoxicillin clavulanate, offer it food and water, and head home. Although you have found two problems, and treated one of them effectively, the abscess is not likely to be the cause of the cat’s overall status (unless there is another abscess you have missed). You have climbed two small hills: the cat might have been in fights before, may have become infected with FIV, and this may have something to do with the current problem, but you have a long way to go. Hill climbing usually pays off if you do enough tests and link the answers correctly – but be prepared, sometimes it doesn’t!

Progress-monitoring heuristic
Definition: Reflection on how you are managing a case. This heuristic probably kicks in when hill climbing is just not getting you far enough towards a final diagnosis, so you decide to use another strategy or try going in a different direction. It is the most time-intensive of the category 2 heuristics used as reasoning strategies in System 2 thinking.
Good use: Reflection on the results from an anaemic cat suggests that your initial thoughts are not going to explain the clinical picture. The anaemia is non-regenerative and has not changed over 3 days (so it wasn’t just pre-regenerative); the cat is FIV and FeLV negative, the PCR for Mycoplasma haemofelis is negative and the ferritin level is normal. So you change tack, embrace the possibility that the cause of the anaemia is primary bone marrow disease, and elect to do a bone marrow aspirate for cytological assessment. (This example also fits well with the anchoring and adjustment heuristic – ie, the reasoning strategy heuristics are not mutually exclusive.)
Bad use (misleading): You are not getting far investigating a cat with anaemia, and a colleague suggests that you focus more on the nasal philtrum ulceration that is also evident. You take a biopsy, but before the results come back the animal dies of intra-abdominal haemorrhage. The biopsy of the nasal lesion is consistent with photosensitisation. Histological examination of a liver biopsy collected at necropsy demonstrates hepatic amyloidosis. (This example probably starts off as hill climbing and turns into progress monitoring.)

In contrast, the anchoring and adjustment, means-end, hill-climbing and progress-monitoring heuristics are not cognitive biases, but general-purpose strategies for approaching complex problems, including but not restricted to reasoning processes. These are generally used in System 2 thinking for difficult diagnoses, and veterinarians will be aware of their use, if not their names!

Availability heuristic

The availability heuristic relies on estimating the frequencies of events on the basis of how easy it is to recall them from one’s memory of past experiences. 5 It is probably the origin of the old adage ‘common things occur commonly’.

The availability heuristic can work well as long as certain variables are controlled; for example, if the types of disease in the local population remain constant, or the frequency of occurrence does not differ between geographic regions. But what if, say, a UK veterinarian undertaking a locum placement in Australia is presented with a cat with dilated pupils, muscle weakness and laboured breathing? Snake bite as a possible cause may not be part of that veterinarian’s availability heuristic. Hopefully a colleague or an astute veterinary nurse will consider this diagnostic possibility, because for them it would be a common occurrence and easy to recall.

As this example suggests, the availability heuristic can influence how we decide on a diagnosis, or even draw up a list of diagnostic possibilities, based on a perceived pattern of signs or more complete case data. Note, however, that as discussed in Article 2, 6 there is a risk of availability bias leading to an incorrect answer if a memory is recalled not because it represents a common occurrence but because of its unusualness or some other circumstance that left a strong impression.

Representativeness heuristic

The representativeness heuristic relies on the assumption that there is a high probability of commonality between objects of similar appearance – in other words, a ‘what does this remind me of?’ approach. 5 In diagnostic reasoning, this is a useful heuristic (ie, gives the correct answer more often than not) because, while the presentations of many diseases can vary substantially between patients or over time in one patient, most presentations of any one disease share at least some features derived from a combination of history, signalment, clinical signs or physical findings.

The clinical signs of erythema, crusting skin with early bleeding and ulceration at the tips of the ears in a white cat might be a cogent example of the use of pattern recognition in arriving at a presumptive diagnosis of actinic keratosis progressing to squamous cell carcinoma (especially in sunny Australia), but it is not an example of the use of the representativeness heuristic. However, if some of the case data did not classically fit with actinic keratosis (eg, the cat rarely went outside) and other case data raised the possibility of other diseases (eg, the cat had been bothered by flies recently, or had a habit of rubbing its ears against an armchair that had just been sent off to be ‘cleaned’), then the representativeness heuristic would come into play in deciding between the alternatives. Thus, the representativeness heuristic draws on patterns of features, and sometimes whole illness scripts, held in the clinician’s long-term memory to choose between diseases with overlapping clinical features.

Familiarity/recognition heuristic

The third heuristic of note in System 1 thinking is the familiarity heuristic (familiar things are regarded as being more ‘important’ or having more ‘value’ than unfamiliar things, so we place more emphasis on diseases or clinical signs that we are familiar with). The recognition heuristic can be regarded as a more restricted, more ‘extreme’ version of the familiarity heuristic (recognisable things are more ‘important’ or have more ‘value’ than unrecognisable things, so we concentrate on the recognised more than the unrecognised; may also be referred to as ‘take the best, ignore the rest’).

The familiarity/recognition heuristic allows focus on key features of a condition – a presumptive diagnosis can be made more quickly, so that the clinician can rapidly move forward with confirmatory testing. Apparently spurious information is ignored, which is both the strength and weakness of this heuristic. Certainly this short cut can work well, as long as something really important hasn’t been ignored that might have suggested a viable alternative diagnosis or a pertinent comorbidity (concurrent disease).

Anchoring and adjustment heuristic

There are versions of the anchoring and adjustment heuristic in System 1 thinking (where it influences the way people intuitively assess probabilities 7 ) and in System 2 thinking, where it serves as a reasoning strategy (ie, one of our second category of heuristics). As a reasoning strategy, veterinarians start with an implicitly suggested reference point (the ‘anchor’) and make adjustments to it to reach the diagnosis. The anchor itself may arise from System 1 or System 2 thinking, and can lead to correct or incorrect decisions (‘false anchors’; see Figure 1).

Figure 1. Is this a case of missing an obvious anchor – or is the owner super-unobservant? Whatever the reason, starting with the wrong anchor can lead to disaster in diagnosis

By consciously utilising the anchor in System 2 thinking, it can be assessed for its soundness and ‘adjustments’ introduced sequentially and logically. Thus, the anchoring and adjustment heuristic is often an example of using Systems 1 and 2 thinking in tandem (see Article 2), which can save much time in clinical reasoning.


Figure 2. Lateral radiograph of the left thoracic limb of an old Abyssinian cat (a). Note the multiple punched-out osteolytic lesions. The presumptive diagnosis of the author who was sent the radiograph was multiple myeloma, even though no further lesions could be detected on whole-body radiographs. On examination of the actual patient (not its radiographs!), there was clear evidence of suppurative inflammation, with pus draining from an open wound (b). The cat improved markedly after starting amoxicillin clavulanate treatment, with healing of the lytic bone lesions (c). Images courtesy of Emma Hughes

Means-end and hill-climbing heuristics

In the means-end heuristic a complex reasoning problem is divided into smaller ‘subproblems’ in the knowledge, or hope, that solving all of the smaller problems will result in solving the larger problem. This reasoning strategy is commonly used in the analytical, problem-oriented approach by veterinarians. In fact, a great deal of our System 2 thinking for diagnosis utilises this heuristic or its close cousin, the hill-climbing heuristic.

The hill-climbing heuristic differs in that there is greater uncertainty about whether the choice of a smaller subproblem might start you on your way to solving the larger problem; in other words you do not know whether or not solving that problem will actually help towards solving the larger problem, but you are willing to give it a go. We’re sure some of you will recognise deliberately using this strategy to help reach a diagnosis. It could be argued that the hill-climbing heuristic, and other reasoning strategy heuristics used in System 2 thinking, form the central basis for problem-oriented medicine: the resolution of numerous disparate problems into a smaller number of key problems being the cornerstone of achieving a definitive final diagnosis (or diagnoses when multiple disease processes coexist in the same patient).

These heuristics are used as reasoning strategies because they allow the problem of diagnosis to be redefined as something that is simpler to achieve. 8 With hill climbing you realise it is a long way to the top for that elusive diagnosis, but at least you are going up the hill by breaking the climb into segments; thus progress is being made – or you think it is! For example, a cat presented with signs of central nervous system (CNS) disease may require a complex series of investigations (neurological examination, blood tests, cross-sectional imaging, serological tests, cerebrospinal fluid [CSF] collection and analysis, etc) to determine the cause of the problem. Your practice policy may be that such cases are referred to a neurologist; thus, even by making a diagnosis of ‘CNS disease’, you are achieving something and ‘moving up the hill’. Perhaps you could instead elect that, in all such cases, you will send blood off for non-invasive testing to rule in/out some treatable neurological diseases (lead poisoning, thiamine deficiency, cryptococcosis) and then offer referral for advanced imaging and CSF collection if there is no response to a 10-day high-dosage course of clindamycin (to cover the two further treatable possibilities of ascending middle ear infection and toxoplasmosis). You may not reach a definitive diagnosis, but you are moving forward (definitely hill climbing!). On the other hand, hill climbing could be misleading if your subgoal does not get you closer to the real diagnosis.
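To make the shape of this strategy explicit, here is a minimal sketch (in Python, purely for illustration) of the CNS work-up above treated as hill climbing: each subgoal is attempted in turn, any conclusive result ends the climb, and referral is the explicit fallback. The test names and results are invented for the example; this is not a clinical protocol.

```python
# Minimal sketch of the hill-climbing heuristic applied to the CNS example
# above. Each 'hill' is a cheap subgoal that may or may not advance the
# diagnosis. All test names and results here are invented for illustration.

def run_test(name, results):
    """Look up the (hypothetical) result of a diagnostic step."""
    return results.get(name, "no result")

def hill_climb(subgoals, results):
    """Attempt subgoals in order; stop when one yields a diagnosis.

    Each subgoal is (test name, interpret), where interpret maps a raw
    result to a diagnosis string, or None (meaning: keep climbing).
    """
    for name, interpret in subgoals:
        outcome = interpret(run_test(name, results))
        print(f"{name}: {outcome or 'not conclusive - climb on'}")
        if outcome:
            return outcome
    return "no diagnosis - refer for advanced imaging and CSF analysis"

# Hypothetical work-up mirroring the article's CNS example.
subgoals = [
    ("blood lead", lambda r: "lead poisoning" if r == "high" else None),
    ("thiamine status", lambda r: "thiamine deficiency" if r == "low" else None),
    ("cryptococcal antigen", lambda r: "cryptococcosis" if r == "positive" else None),
    ("10-day clindamycin trial", lambda r: "toxoplasmosis or middle ear infection (presumptive)"
                                 if r == "improved" else None),
]

results = {"blood lead": "normal", "thiamine status": "normal",
           "cryptococcal antigen": "negative", "10-day clindamycin trial": "improved"}

print(hill_climb(subgoals, results))
```

The point is not the code but the shape of the strategy: each step is cheap, each failure still narrows the search, and the fallback (referral) is explicit rather than an afterthought.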

Sometimes hill climbing may just be a form of displacement activity (Figure 3), risking delaying a definitive test or intervention that might provide a tissue or microbiological diagnosis, or a surgical option for therapy. For example, many people resort to testing for feline immunodeficiency virus (FIV) in a sick cat when they are ‘diagnostically destitute’, even though it is generally not possible to determine whether the cat’s FIV status has any impact on the current disease status. (This, interestingly, is in contrast to the situation with HIV/AIDS, where CD4 counts and viral load measurement can be very informative.)

Figure 3. When all is chaos and confusion, it can be tempting to find something ‘worthwhile’ to do. The hill-climbing heuristic employed in a bad way!

Progress-monitoring heuristic

Is there a reasoning strategy heuristic that comes into play when you are not making satisfactory progress in solving a case? There is, and it is the so-called progress-monitoring heuristic. 9 This heuristic probably kicks in when hill climbing is just not giving you enough advancement, so you decide to use another strategy or to change direction; for example, you consider involvement of another body system or a new infectious agent, you try a different imaging modality, or look at the chest instead of the abdomen (even though signs initially pointed to abdominal involvement).

This can be a very useful strategy when you are using an analytical problem-oriented approach as it provides a ‘reality check’ for progress. However, beware the impact of past experiences and your current emotional state – they can misdirect you as you assess the state of play (especially if owners are applying pressure for a rapid diagnosis!). The danger is that you may select a new strategy too early and not base it on an objective evaluation of the current strategy in use.

Progress monitoring can also come into play when a lack of response to treatment makes you question the presumptive diagnosis.


How do heuristics relate to patterns, illness scripts, intuition and diagnostic algorithms?

So we come to see that it is System 1’s unconscious use of some heuristics – drawing on patterns of clinical features and more complete illness scripts stored in long-term memory – that underlies much of what is described as ‘intuitive’ diagnosis. Intuition is conventionally defined as ‘the ability to acquire knowledge or understanding without the conscious use of reason’. This definition may seem perplexing – how do you know something, without knowing how you know it? It is for this reason that the use of intuition in the field of medical reasoning has long been looked on in a rather derogatory fashion by some physicians.

However, it is now becoming accepted that experienced and expert clinicians improve their capacity for diagnosis through intuition by drawing on information held in long-term memory in a meaningful and often unconscious way. 10 Information in long-term memory can come from lectures, published texts, personal experience, conversations with mentors/colleagues, conferences and seminars. We can perhaps now better understand why experts often find it difficult to explain how they came to a diagnosis, particularly if this involves seemingly astonishing cognitive leaps. In essence, it is down to them having numerous past experiences with that type of disease and cataloguing these effectively in memory, so that they can be recalled quickly through certain triggers, such as specific information or key clinical or historical findings. There is no doubt that certain people have a flair for this type of diagnostic reasoning, and it helps greatly to have a retentive memory – and to think like a detective! The great Sherlock Holmes accepted the importance of intuition and the fact that it appeared to happen without conscious thought.


Because intuitive diagnoses commonly involve heuristics that act as cognitive biases, it is important consciously to put some safeguards in place through trained System 2 thinking. These safeguards were discussed in Article 2.


Moreover, after some practice and reflection on these safeguards, you might be better able to explain to a junior colleague or recent graduate (or even yourself!) how you came to a certain diagnosis. Hopefully, your junior colleague will quickly realise that you are someone who has seen lots of cases, learned from your mistakes as well as your triumphs, and also learned from colleagues/mentors and lifelong learning strategies involving journals, seminars, meetings and case discussions. Following on from this, it may even be possible to deliberately utilise the generic illness scripts stored in long-term memory and employed in intuitive diagnosis to teach ‘the art’ of clinical diagnosis.

So-called ‘script theory’, which has been around since the 1970s, was used in the lexicon of cognitive psychology to refer to the way in which people understand real-world events, usually in an effortless way. 11 The medical fraternity saw an opportunity to adapt this concept to case-based clinical reasoning, and to lend credibility to the teaching of deliberately constructed generic illness scripts to undergraduates. Schmidt et al divided script-based intuitive clinical reasoning into generic and instance illness scripts. 12
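To give a concrete feel for what a generic illness script ‘contains’, the sketch below renders one as a simple data structure. Schmidt and colleagues describe scripts as linking enabling conditions (signalment and risk factors), the fault (the underlying pathophysiological process) and its consequences (the expected clinical signs); the field names, the crude match score and the example script here are our own illustrative assumptions, not a formal schema from their paper.

```python
# A generic illness script, after Schmidt et al's description of scripts as
# enabling conditions -> fault -> consequences. The field names and the
# example below are illustrative only.
from dataclasses import dataclass

@dataclass
class IllnessScript:
    disease: str
    enabling_conditions: list  # signalment, environment, risk factors
    fault: str                 # the underlying pathophysiological process
    consequences: list         # expected clinical signs and findings

    def matches(self, case_features: set) -> int:
        """Crude match score: how many script features appear in the case."""
        script_features = set(self.enabling_conditions) | set(self.consequences)
        return len(script_features & case_features)

# Illustrative generic script for feline idiopathic cystitis, echoing the
# representativeness example in Table 1.
fic = IllnessScript(
    disease="feline idiopathic cystitis",
    enabling_conditions=["young adult", "overweight", "indoor",
                         "multi-cat household", "anxious"],
    fault="stress-associated sterile bladder inflammation",
    consequences=["pollakiuria", "stranguria", "haematuria"],
)

case = {"young adult", "indoor", "anxious", "stranguria", "haematuria"}
print(fic.matches(case))  # 5 of the script's features are present in this case
```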



Figure 4. Cat with mosquito-bite hypersensitivity. Historically, lesions such as these were often attributed to autoimmune disease or the eosinophilic granuloma spectrum of conditions. But following two seminal publications,14,15 the generic illness script made this a commonplace diagnosis in Australia and other places with warm, humid conditions and an abundance of midges and mosquitoes. Whether a black hair coat is part of the generic illness script or an urban myth is unresolved! Courtesy of Ildiko Plaganyi

Because of the complexity of many veterinary cases, common usage of part or complete illness scripts in System 1 thinking speeds up the diagnostic process by ‘separating the wheat from the chaff’. The question is, how can you be sure that it was the (spurious) chaff that you discarded, rather than the (critical) wheat? To safeguard this process, it is vital to link illness scripts with working memory via effortful System 2 thinking. This will make illness scripts and the recognition of patterns more effective cognitive devices by ensuring they are less prone to the omission of data germane to the case.


Another way to try to avoid bias and omission of pertinent data is to employ diagnostic algorithms. A medical algorithm is any computation, formula, statistical survey, nomogram (graphical representation of relationships) or look-up table that might be useful in diagnosing or managing a disease. Diagnostic or clinical algorithms, especially in the veterinary arena, commonly take the form of decision trees. They rely on preliminary clinical signs or physical findings, or groups of these (perhaps some derived from deliberately constructed generic illness scripts), to guide further investigation in order to detect, support or refute a diagnosis. 16 In this way, diagnostic algorithms follow the problem-oriented, forward-thinking approach to diagnosis. Heuristics superficially appear to have little role in the construction of diagnostic algorithms, although the hill-climbing and means-end heuristics are often embedded in them as general-purpose reasoning strategies.
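To make the decision-tree form concrete, the sketch below writes down a fragment of the dyspnoea work-up discussed earlier as an explicit tree: internal nodes ask a yes/no question of the case, leaves recommend the next diagnostic step. It is a toy structure under simplified assumptions, not a validated clinical algorithm.

```python
# Toy diagnostic decision tree for the feline dyspnoea work-up discussed
# above. Internal nodes ask a yes/no question of the case; leaves recommend
# the next diagnostic step. Simplified for illustration - not a clinical tool.

class Node:
    def __init__(self, question=None, yes=None, no=None, action=None):
        self.question, self.yes, self.no, self.action = question, yes, no, action

    def decide(self, case: dict) -> str:
        if self.action:                      # leaf: recommend the next step
            return self.action
        branch = self.yes if case.get(self.question, False) else self.no
        return branch.decide(case)

# Build the tree from the leaves up.
tap = Node(action="thoracocentesis: drain, then cytology of the fluid")
echo = Node(action="echocardiography +/- thoracic radiographs")
rhinoscopy = Node(action="posterior rhinoscopy for nasopharyngeal disease")
chest = Node(question="pleural effusion on ultrasound", yes=tap, no=echo)
root = Node(question="stertor present", yes=rhinoscopy, no=chest)

# The stertorous cat from the 'bad anchor' example is routed straight to
# rhinoscopy instead of an unrewarding chest work-up.
print(root.decide({"stertor present": True}))
print(root.decide({"stertor present": False, "pleural effusion on ultrasound": True}))
```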

Diagnostic algorithms, by their very nature, are designed more for complex cases requiring comprehensive investigations. When money is no object, use of such algorithms usually leads to a definitive answer. Many in the medical sphere include probabilities as a measure of uncertainty about a specific diagnosis. However, it could be argued that algorithms are only as good as the premises or hypotheses upon which they are based. If one of the premises is flawed, then it is possible that incorrect diagnoses will be reached. Since knowledge is never certain, and basic premises or hypotheses are continually being challenged, algorithms require constant updating. Therefore, it is prudent always to check when the algorithm was developed!

Veterinary diagnostic algorithms tend to deal in black-and-white answers, since evidence-based probabilities for diagnostic tests have rarely been developed. So, by their very nature, they tend to ignore the ‘shades of grey’ that often exist for biological systems responding to perturbation (ie, disease). In other words, while algorithms strive for objectivity and accuracy, they do not allow for open-mindedness and scepticism. This is the shortcoming of algorithms, as these four cornerstones of scientific enquiry work best in concert. Nonetheless, while not suggesting that clinical algorithms should be used for all complex cases or difficult diagnoses, we do urge you to use them when appropriate, especially when they have been developed by expert clinicians applying evidence-based veterinary medicine tempered by their own clinical experience, because this can reduce the risk of error and lead to success rates comparable with those achieved by intuitive veterinary experts. 17
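As a worked illustration of those ‘shades of grey’, the snippet below applies Bayes’ theorem to a single diagnostic test, converting a pre-test probability into a post-test probability. The prevalence, sensitivity and specificity figures are invented solely to show the arithmetic; real values would have to come from the kind of evidence-based studies that, as noted above, are rarely available in veterinary medicine.

```python
# Post-test probability from pre-test probability plus test sensitivity and
# specificity (Bayes' theorem). All numbers below are invented to show the
# arithmetic, not published performance data for any real test.

def post_test_probability(pre_test: float, sensitivity: float,
                          specificity: float, positive: bool) -> float:
    if positive:
        true_pos = pre_test * sensitivity
        false_pos = (1 - pre_test) * (1 - specificity)
        return true_pos / (true_pos + false_pos)
    true_neg = (1 - pre_test) * specificity
    false_neg = pre_test * (1 - sensitivity)
    return false_neg / (false_neg + true_neg)

# Hypothetical: a disease with a 10% pre-test probability, assessed with a
# 90%-sensitive, 95%-specific assay.
print(round(post_test_probability(0.10, 0.90, 0.95, positive=True), 2))   # ~0.67
print(round(post_test_probability(0.10, 0.90, 0.95, positive=False), 3))  # ~0.012

# A 'positive' result moves the probability to about 67%, not to certainty -
# exactly the grey zone that a black-and-white algorithm ignores.
```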


Final comments and conclusions

In this article we have suggested that heuristics are powerful tools in medical reasoning, especially for the experienced clinician. Basically, they have roles in reasoning in both System 1 and System 2 thinking: in the former by drawing on long-term memories, including patterns and theoretical illness scripts, to assist in intuitive diagnosis; and in the latter by assisting in reasoning strategies as well as drawing on long-term memories. It is important to accept that heuristics used in System 1 thinking can sometimes mislead, and that some effortful System 2 thinking is required to put checks and balances in place. Even expert clinicians check to ensure that they have sufficient evidence for their definitive diagnosis and have thought of (and often excluded) possible alternatives. Moreover, they commonly regard the ultimate test (check) of a diagnosis as being the response to medical or surgical therapy.

If we have any summary advice to impart, it would be to learn to trust your intuition but, at the same time, put in place safeguards to diminish the impact of bias and misguided logic, whatever their origin. Accept that your brain can mislead you! We have an important admission to make: we still get cases wrong and we still get confused about how we think. Does that alarm us? Only when we are not cognisant of making those mistakes. Being the best clinician you can be starts with that awareness.


The thought processes in diagnosis are still poorly understood. For that reason, it seems hard work at times. But accepting the uncertainty of the process can mean that you can approach diagnosing difficult cases with a sense of purpose – and at least with some optimism. Remember Sherlock Holmes referring to the ‘thrill of the chase’ when you are next mulling over a difficult case – and ensure that you use all your faculties to sniff out the answer!

Supplemental Material

Supplementary video: Richard Malik interviews Paul Canfield, lead author of the JFMS clinical reasoning series. To access this video, please visit: https://player.vimeo.com/video/150981038

Acknowledgments

Richard Malik is supported by the Valentine Charlton Bequest administered by the Centre for Veterinary Education of the University of Sydney.

Footnotes

Funding: The authors received no financial support for the research, authorship and/or publication of this article.

The authors declared no potential conflicts of interest with respect to the research, authorship and/or publication of this article.

Contributor Information

Martin L Whitehead, Chipping Norton Veterinary Hospital, Banbury Road, Chipping Norton, Oxon, OX7 5SY, UK.

Paul J Canfield, Faculty of Veterinary Science, B14, University of Sydney, NSW 2006, Australia.

Robert Johnson, South Penrith Veterinary Clinic, 126 Stafford Street, Penrith, NSW 2750, Australia.

Carolyn R O’Brien, Faculty of Veterinary Science, The University of Melbourne, Parkville, VIC 3152, Australia.

Richard Malik, Centre for Veterinary Education, B22, University of Sydney, NSW 2006, Australia.

References

1. Gigerenzer G, Goldstein DG. Reasoning the fast and frugal way: models of bounded rationality. Psychol Rev 1996; 103: 650–669.
2. Mithen SJ. Thoughtful foragers – a study of prehistoric decision making. Cambridge: Cambridge University Press, 1990, pp 1–289.
3. Eva KW, Norman GR. Heuristics and biases – a biased perspective on clinical reasoning. Med Educ 2005; 39: 870–872.
4. McDonald CJ. Medical heuristics: the silent adjudicators of clinical practice. Ann Intern Med 1996; 124: 56–62.
5. Harvey N. Use of heuristics: insights from forecasting research. Think Reasoning 2007; 13: 5–24.
6. Canfield PJ, Whitehead ML, Johnson R, et al. Case-based clinical reasoning in feline medicine. 2: Managing cognitive error. J Feline Med Surg 2016; 18: 240–247.
7. Epley N, Gilovich T. Putting adjustment back into the anchoring and adjustment heuristic: differential processing of self-generated and experimenter-provided anchors. Psychol Sci 2001; 12: 391–396.
8. Eysenck MW, Keane MT. Thinking and reasoning. In: Cognitive psychology – a student’s handbook. East Sussex: Psychology Press, 2010, pp 470–473.
9. MacGregor JN, Ormerod TC, Chronicle EP. Information processing and insight: a process model of performance on the nine-dot and related problems. J Exp Psychol Learn 2001; 27: 176–201.
10. Greenhalgh T. Intuition and evidence – uneasy bedfellows. Br J Gen Pract 2002; 52: 395–400.
11. Gardner H. The mind’s new science: a history of the cognitive revolution. New York: Basic Books, 1987, pp 165–170.
12. Schmidt HG, Norman GR, Boshuizen HPA. A cognitive perspective on medical expertise: theory and implication. Acad Med 1990; 65: 611–621.
13. Charlin B, Boshuizen HPA, Custers EJ, et al. Scripts and clinical reasoning. Med Educ 2007; 41: 1178–1184.
14. Wilkinson GT, Bates MJ. A possible further clinical manifestation of the feline eosinophilic granuloma complex. J Am Anim Hosp Assoc 1984; 20: 325–331.
15. Mason KV, Evans AG. Mosquito bite-caused eosinophilic dermatitis in cats. J Am Vet Med Assoc 1991; 198: 2086–2088.
16. Davies C, Shell L. Common small animal medical diagnoses: an algorithmic approach. Philadelphia: WB Saunders, 2002, pp 1–261.
17. McKenzie BA. Commentary – veterinary clinical decision-making: cognitive biases, external constraints, and strategies for improvement. J Am Vet Med Assoc 2014; 244: 271–276.
