Journal of Feline Medicine and Surgery
2016 Jan 5;18(1):35–45. doi: 10.1177/1098612X15623116

Case-based clinical reasoning in feline medicine

1: Intuitive and analytical systems

Paul J Canfield 1, Martin L Whitehead 2, Robert Johnson 3, Carolyn R O’Brien 4, Richard Malik 5
PMCID: PMC11148876  PMID: 26733547

Abstract

Aim:

This is Article 1 of a three-part series on clinical reasoning that encourages practitioners to explore and understand how they think and make case-based decisions. It is hoped that, in the process, they will learn to trust their intuition but, at the same time, put in place safeguards to diminish the impact of bias and misguided logic on their diagnostic decision-making.

Series outline:

This first article discusses the relative merits and shortcomings of System 1 thinking (immediate and unconscious) and System 2 thinking (effortful and analytical). Articles 2 and 3, to appear in the March and May 2016 issues of JFMS, respectively, will examine managing cognitive error, and use of heuristics (mental short cuts) and illness scripts in diagnostic reasoning.

Introduction

In this era of evidence-based veterinary medicine, where accuracy and objectivity have perhaps become valued more than scepticism and open-mindedness, is it heresy to use intuitive pattern recognition to diagnose disease in practice? Many who advocate the exclusive use of an analytical, problem-oriented approach would answer with an emphatic ‘yes’, believing the thought processes of pattern recognition to be fraught with inaccuracy and cognitive error. While the (sometimes spectacular) failings of intuitive reasoning cannot be denied, it is accepted that experienced clinicians, especially elite clinicians, find it difficult not to engage in intuitive reasoning from the moment they first see the patient and its owner.1,2

Furthermore, if intuitive pattern recognition or other forms of intuitive reasoning are used more effectively through conscious refining, 3 this will save time and money for clients by sharpening diagnostic accuracy and reducing unnecessary tests and procedures. Moreover, intuitive reasoning, especially pattern recognition, comes naturally and instinctively to many clinicians, even if they cannot appreciate how and why it occurs.

Is there room for using both approaches? This question is truly rhetorical, as there is a wealth of evidence from the psychology, economics and medical literature to establish the power of using both cognitive approaches, and Nobel Laureates have testified to the fact! 4

This article argues that there is merit in using both intuitive reasoning (pattern recognition being the best known form of System 1 thinking) and the analytical, problem-oriented, forward reasoning approach (as a trained form of System 2 thinking), especially sequentially. This is on the proviso that (i) intuitive reasoning is instinctively improved over time through the accumulation of disease patterns and ‘illness scripts’ (ie, recalled and often complete cases of disease held in long-term memory), (ii) mental short cuts (called heuristics) are used appropriately to help choose among diseases with overlapping features, and (iii) System 2 thinking is finely tuned through training to suit diagnostic reasoning.


Moreover, we present evidence to support the use of both systems of clinical reasoning to ensure optimal and hopefully cost-effective outcomes for the patient and client.5,6 Our hope is that readers will come to feel comfortable in trusting their intuition as well as their analytical prowess in reaching a diagnosis. We are mindful that whenever intuitive System 1 thinking is employed, it should, where possible, be tested and backed up with evidence provided by analytical System 2 thinking.


What is meant by diagnostic reasoning?

There is no absolute definition of clinical reasoning. But, as with all reasoning, it involves problem-solving through the two processes of (i) identifying and defining a problem, and (ii) choosing between established alternatives for its resolution. 8 For the veterinary practitioner, identifying and defining a problem involves the initial gathering of relevant information to enable a choice between alternatives. This initial collection and collation of data occurs through all the senses (the process of perception), but relies strongly on what one sees, hears and smells.

Decision-making begins early, at the stage of perception, because collation of sensory input requires prioritisation. It is even more apparent in the connected but later process of either deduction or abduction (see page 6 of accompanying Editorial 9 ). This trained form of System 2 thinking is often called ‘hypothetico-deductive reasoning’, as it relies heavily on using accepted hypotheses (premises) about disease to determine the cause of a patient’s problem. 7 It is based on one of the main characteristics of a form of common-sense reasoning first described by Popper, and then Hempel, in the late 1950s and early 1960s.10–12 Stated simply, rarely does one have complete information from which a decision can be made; but a presumptive decision (hypothesis) can be at least partially corroborated through additional testing.

It could be argued that this type of reasoning (which evolved into analytical, problem-oriented, forward reasoning) and intuitive reasoning both use accepted premises as the starting point for diagnostic decision-making. However, in intuitive reasoning the process is so rapid that the starting premises are commonly buried in the subconscious (ie, the cues for the likely disease are not obvious to the user). With reflection, and external or self-analysis, it is sometimes possible to unravel the workings of the subconscious mind to tease apart how and why a plausible disease emerged. This is well explored in Gladwell’s book ‘Blink – The power of thinking without thinking.’ 13

What are the similarities and differences between intuitive System 1 and analytical System 2 approaches to diagnosis?

To better identify differences between System 1 and System 2 thinking, it is important to understand what similarities they share. Both are forms of clinical reasoning. Both begin with an accepted illness, which leads to either conscious or unconscious thought about what the illness is and what has caused it. The two forms of thinking then diverge in the way they try to get to the answer (ie, the way they form hypotheses about the disease). Table 1 describes many of the differences between the two approaches. The fact is, however, that they both share a common starting point (‘this cat is ill’) and that both are trying to make a correct diagnosis. Moreover, both can be affected by faulty thinking (cognitive error), especially unsupported pattern recognition because it relies heavily on cognitive bias to quickly get to the answer. (As will be discussed in Article 2, cognitive bias has the potential to lead, but also to mislead unless managed effectively; if it misleads then it results in a cognitive error.)

Table 1.

Comparison of System 1 and System 2 thinking in relation to clinical reasoning

System 1: Relies heavily on patterns and illness scripts. This is most useful for making judgements on problems which seem familiar and for which rapid action is required
System 2: Analytical, problem-oriented approach. This is most useful for making judgements concerning a problem when you find yourself in unfamiliar situations and have more time to make decisions

System 1: A natural system of thinking, which improves with continuing experience and awareness
System 2: Can be improved by experience and awareness, as well as formal training prior to and after graduation (ie, ‘trained’ System 2 thinking)

System 1: Judgements are commonly based on intuition, drawing on past experiences and any attached emotion.* This appears unconscious and decisions may be difficult to explain
System 2: Judgements are based primarily on critical evaluation of evidence and facts, but some emotion* may be connected to drawn-on past experiences. This trained form of System 2 thinking is always conscious

System 1: Fast
System 2: Slow, but degree dependent on experience

System 1: ‘The judgement feels right’ and uncertainty is often ignored (ie, commonly the focus is on existing evidence and absent evidence is ignored). The most plausible explanation is generally the one accepted
System 2: Logical, analytical reasoning, which may be hypothesis-driven and/or data-driven. Uncertainty is tested and options explored

System 1: Relies primarily on long-term memory (especially simple patterns and illness scripts based on past experiences, both positive and negative)
System 2: Relies primarily on analysis of information through short-term (working) memory. Long-term memories are also important because they contribute to the analysis

System 1: Unconscious cognition of perceptual input; appears to operate effortlessly and automatically
System 2: Conscious cognition of perceptual input; operates with forced or deliberate effort and control

System 1: Heuristics (mental ‘short cuts’ for reasoning) are commonly employed
System 2: Intentional thinking, sometimes involving the use of reasoning strategy heuristics, constructed generic illness scripts and diagnostic algorithms

System 1: Used by both novice and experienced clinicians, but most successfully by the latter
System 2: Used by both novice and experienced clinicians, but more often by the former

System 1: Can be modulated, but cannot be completely overridden, by System 2 thinking
System 2: Comes into play particularly when System 1 fails to identify the problem or its solution

System 1: Important in prehistoric hunter-gatherer groups and could be regarded as the ‘default system of thinking’ for instant decision-making (‘fight or flight’ situations) 14
System 2: Developed and refined over time, but likely originating from the need for individual and collective decision-making in prehistory (ie, an extension of System 1 thinking) 14

Note that Systems 1 and 2 are not mutually exclusive, as they share a number of features. Both begin by accepting that an illness exists and aim to establish what it is likely to be and what has caused it. To answer these questions, both systems require perception and an ability to draw on memory. Both use reasoning to identify problems and to choose between alternatives. While this table lists purported differences between the thinking styles, the boundaries between them are often nebulous. This is to be expected, as the human brain has the capacity to move rapidly, and apparently effortlessly, from one system of thinking to the other and then back again!

*

The term emotion is used here to designate any affective state, be it subtle or intense, that may influence clinical judgement

Analytical System 2 thinking has its own set of cognitive biases, but tries to diminish the risk of them leading to cognitive error by evaluating perceptual information, especially by employing clinical algorithms (Figure 1) and other objective forms of reasoning. These devices help ensure that hypotheses are constructed by explaining key problems based on available clinical, laboratory and imaging findings. This analytical, problem-oriented approach relies heavily on abduction and related ‘Bayesian analysis’, which itself focuses on probabilities (and could be unkindly referred to as ‘playing the odds’!). Thomas Bayes was an 18th century British clergyman and mathematician whose theorem quantifies how new evidence changes the probability that a patient has a particular disease. It has been said that physicians are ‘natural Bayesians’ and that all clinical decision-making is fundamentally Bayesian. 15 Could this equally apply to most veterinary practitioners? Or is it more complicated, because we are more limited by financial constraints in relation to the number of tests we can routinely perform?
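The Bayesian updating described above can be made concrete with a toy calculation. The sketch below applies Bayes’ theorem to a single test result; the disease, the pre-test probability and the test characteristics are invented purely for illustration and are not clinical reference values.

```python
# Toy illustration of Bayesian updating in diagnosis.
# All probabilities below are invented for illustration only.

def update(prior, sensitivity, specificity, test_positive=True):
    """Return the post-test probability of disease after one test result."""
    if test_positive:
        p_result_given_disease = sensitivity       # true positive rate
        p_result_given_healthy = 1 - specificity   # false positive rate
    else:
        p_result_given_disease = 1 - sensitivity   # false negative rate
        p_result_given_healthy = specificity       # true negative rate
    numerator = prior * p_result_given_disease
    denominator = numerator + (1 - prior) * p_result_given_healthy
    return numerator / denominator

# Suppose a 10% pre-test probability of hyperthyroidism in an older cat,
# and a (hypothetical) assay with 90% sensitivity and 95% specificity.
posterior = update(prior=0.10, sensitivity=0.90, specificity=0.95)
print(f"Post-test probability after a positive result: {posterior:.2f}")
```

The point of the sketch is that a single positive result lifts a modest clinical suspicion to a strong one, but not to certainty; in practice the prior itself comes from clinical impression, which is exactly where System 1 and System 2 thinking meet.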

Figure 1.

Figure 1

The diagnostic algorithm is very useful in the analytical, problem-oriented approach to clinical diagnosis. But it is not foolproof!

Why do some of us favour one system over the other in the way we think about clinical problems?

The short answer is we don’t really know, but perhaps inherited and acquired personality traits play a key role in dispositional tendencies. It was once suggested that individuals inclined to use more of the right side of the brain feel more comfortable with intuitive thinking, while those who use the left side of their brain feel more comfortable with the analytical, problem-oriented diagnostic approach that System 2 thinking has to offer. 16 However, like many good stories, this one has been refuted through further research, particularly neuroimaging studies. 17 What is accepted, however, is that those of us who do favour System 2 thinking like using lists and organisational charts, most of which can now be computerised (Figure 2). While System 2 thinking is criticised for being costly, time-consuming and stifling of creativity, it probably offers greater protection against mistakes in the reasoning process, so long as the detection (perception) component is not flawed. The novice clinician will find System 2 thinking tiring and laborious, but fruitful in terms of developing understanding and expertise about disease presentations. Importantly, once a newly encountered disease condition has been diagnosed using System 2 thinking, the stored case information, either as a simple pattern or a more complex illness script, becomes accessible through an unconscious System 1 thinking mechanism the next time such a problem is encountered.

Figure 2.

Figure 2

Lists are very important in System 2 thinking, but can slow down progress towards a diagnosis if common sense is not applied. ‘Ockham’s razor’ seems pertinent here: ‘all other things being equal, the simplest explanation is usually the best’! Brother Ockham was always looking for the best, if not absolute, explanation for a problem. Ockham’s razor is a ‘heuristic maxim’, or rule of thumb, guiding investigation of a case down the simplest course

Central to System 2 thinking is the ability to consciously resolve a large number of signs and findings into a smaller finite number of ‘mini-patterns’ or problems. Generally there should be fewer than five problems (working, or short-term, memory usually struggles with more than five), 18 so that the amount of information to be processed is not insurmountable. Advocates of System 2 data-driven (rather than hypothesis-driven), forward thinking prefer this approach as it localises disease anatomically, or to an organ system, before deciding on the type of lesion or its cause.6,19 For example, a cat with ptosis, miosis and prolapse of the nictitating membrane, a weak ipsilateral blink reflex and a head tilt to the same side (Figure 3) can be reduced to Horner’s syndrome (thereby condensing three apparent problems into one), facial nerve dysfunction and ipsilateral peripheral vestibular disease, the triad being most suggestive of unilateral middle ear disease. Branching algorithms are also often intercalated into System 2 approaches to facilitate the diagnostic process. Because System 2 thinking is in part driven by available evidence in the literature (‘evidence-based medicine’), if that information is not available or is limited, which is often the case in veterinary medicine, 20 then progress in forward thinking can be hindered or even misdirected.
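The condensing of many signs into a few problems described above can be sketched as a simple lookup. The sign-to-problem mapping below merely mirrors the Horner’s syndrome example in the text; it is illustrative, not a clinical reference.

```python
# Hypothetical sketch: condensing a long list of clinical signs into a few
# "mini-patterns" (problems). The mapping mirrors the Horner's syndrome
# example in the text and is for illustration only.

SIGN_TO_PROBLEM = {
    "ptosis": "Horner's syndrome",
    "miosis": "Horner's syndrome",
    "nictitating membrane prolapse": "Horner's syndrome",
    "weak ipsilateral blink reflex": "facial nerve dysfunction",
    "ipsilateral head tilt": "peripheral vestibular disease",
}

def condense(signs):
    """Reduce raw signs to a short, ordered list of distinct problems."""
    problems = []
    for sign in signs:
        problem = SIGN_TO_PROBLEM.get(sign, f"unclassified: {sign}")
        if problem not in problems:
            problems.append(problem)
    return problems

signs = [
    "ptosis", "miosis", "nictitating membrane prolapse",
    "weak ipsilateral blink reflex", "ipsilateral head tilt",
]
print(condense(signs))  # five raw signs condense to three problems
```

In practice the grouping is done mentally, of course; the point is only that five raw signs become three problems, keeping the list within the working-memory limit the text describes.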

Figure 3.

Figure 3

A cat with ptosis, miosis and prolapse of the nictitating membrane, a weak ipsilateral blink reflex and a head tilt to the same side – most suggestive of unilateral middle ear disease. Courtesy of Vic Menrath

By contrast, intuitive System 1 thinking using pattern recognition is about the emergence or recognition of complete patterns (ie, a final diagnosis such as tetanus) or sometimes partial patterns (‘I know it’s got liver disease but I don’t know what’s causing it’) through unforced or unconscious thought. This form of abductive reasoning asks the question, sometimes unwittingly, ‘what is the most plausible explanation for the pattern?’ 21 Although it still comes up with a hypothesis (a likely specific diagnosis), as does System 2 thinking, it does it in a way that relies on long-term memories of past cases and often ignores what is regarded as extraneous data (referred to by engineers and neuroscientists as ‘noise’). In essence it is a recall of one or more past experiences (especially your own clinical experiences, but also from texts read or seminars attended). It relies on key aspects of the history, which are often interconnected, observation of the patient, physical findings, and characteristic diagnostic imaging or laboratory findings. These triggered past experiences may be simple patterns or may take the form of extended case histories, referred to as ‘illness scripts’. Such scripts are strongly suggestive of either a final diagnosis or a fruitful avenue for further investigation through an established hypothesis.
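Pattern recognition against stored illness scripts can be caricatured as cue matching. In the hypothetical sketch below, each script is a set of cues and the ‘most plausible’ diagnosis is the script whose cues best overlap the case; the scripts, cues and scoring rule are all invented for illustration.

```python
# Hypothetical sketch of System 1 pattern recognition as cue matching
# against stored illness scripts. All scripts and cues are invented
# for illustration only.

ILLNESS_SCRIPTS = {
    "hyperthyroidism": {"weight loss", "polyphagia", "tachycardia", "older cat"},
    "chronic kidney disease": {"weight loss", "polydipsia", "small kidneys", "older cat"},
    "cat fight abscess": {"painful swelling", "fever", "puncture wounds"},
}

def rank_scripts(case_cues, scripts=ILLNESS_SCRIPTS):
    """Rank stored scripts by the fraction of their cues present in the case."""
    scores = {
        name: len(cues & case_cues) / len(cues)
        for name, cues in scripts.items()
    }
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

case = {"weight loss", "polyphagia", "older cat", "tachycardia"}
best, score = rank_scripts(case)[0]
print(best, score)  # the script matching all of its cues ranks first
```

A real clinician weights cues unequally and, as the text notes, often ignores absent evidence; this uniform scoring is the simplest possible caricature of script retrieval.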

As will be discussed in Article 3, disease patterns and case information in the form of recalled illness scripts in intuitive reasoning may be ‘generic’ (a collation of similar cases) or ‘instance’ (a specific similar case) in form. 22 The apparently effortless retrieval of instance and generic illness scripts from long-term memory in System 1 thinking is poorly understood, but certainly comes more easily to experienced elite clinicians. 22 Generic illness scripts consciously used as constructs can also be an adjunct to the use of diagnostic algorithms, which are commonly utilised in System 2 thinking.

The terms ‘gestalt’ (perceptual organisation out of chaos – ‘an organised whole that is perceived as more than the sum of its parts’) and intuition (possessing a capacity to immediately sense the solution/problem and commonly outside of conscious reasoning processes) are often applied to unconscious System 1 thinking. As mentioned, pattern recognition is a common form of intuitive System 1 thinking used by practitioners. It relies on the recognition of a group of related clinical signs for a specific disease. System 1 thinking becomes a little more involved when several diseases need to be considered because of overlapping case features and this is where heuristics, as mental short cuts, can come into play (discussed in Article 3).


When do the two systems work best?

System 1 thinking works best when the clinician is skilled and/or experienced in identifying clues (cues) – which are triggers for the most likely disease. These cues are gathered from the owner, the patient, diagnostic imaging and the laboratory. Much of the time, the triggers for understanding and diagnosing the disease condition are hidden in the subconscious and cannot be well articulated by the diagnostician, but the skilled clinician will accept that is the case and attempt to support his or her intuition through appropriate diagnostic testing. This is when System 2 thinking comes into play in a supportive manner: to assess if cognitive errors have been made.

Intuitive System 1 thinking often appears automatic and effortless, but actually involves deep unconscious thought to reach a plausible diagnosis. For it to work most effectively, the cues that trigger this type of thinking have to have consistency and regularity (ie, high validity), which is often the case for typical presentation states (‘illness scripts’, if you will) of important common feline diseases: for example, cat fight abscess, chronic renal disease, hyperthyroidism or alimentary large cell lymphoma (Figures 4–6).

Figure 4.

Figure 4

This pup has generalised tetanus. The cephalic manifestations of generalised tetanus are very distinctive – erect ears, ‘lock jaw’, prolapse of the nictitating membrane, ‘sardonic grin’, reduced palpebral fissures and exaggerated response to tapping on the forehead (which often causes spasm of the retractor bulbi muscles). Interestingly, the diagnosis of generalised tetanus is almost invariably a System 1 diagnosis based on this characteristic generic illness script. If the clinician has seen one of these cases previously (even in a different species), then he or she may diagnose it based on an ‘illness script’, by recalling the first or most recent or most similar case they have encountered. It is exceedingly difficult to confirm a diagnosis of tetanus, because there are no characteristic laboratory findings, the inciting wound site is often impossible to find, and causal Clostridium tetani bacteria cannot be cultured because of their strict anaerobic habit. However, by considering these difficulties, the clinician is engaging in System 2 thinking, and may even come up with some alternative, but less likely, diagnoses (eg, canine variant of the stiff man syndrome caused by defective glycine receptors in the central nervous system, a disease that occurs in humans, sheep and dogs, though not in the cat to the best of our knowledge). Courtesy of Anne Fawcett

Figure 5.

Figure 5

This domestic shorthair cat (and the cat in Figure 1b in the Editorial of this issue) 9 has local tetanus (disinhibition of the motor neuronal pool subserving a single anatomical region caused by the action of tetanospasmin transported retrogradely along axons), another very distinctive illness script. 23 But what about other causes of local nerve alteration? Good knowledge of pathophysiology is required for a cogent System 2 approach, because focal nerve lesions are almost always characterised by reduced function, rather than increased neural activity, making focal neuropathy a most unlikely alternative diagnosis. Courtesy of Carolyn O’Brien

Figure 6.

Figure 6

This cat has cervical ventroflexion of the head and neck, a feature highly suggestive of myopathic weakness, which can have many causes. However, a heuristic (mental short cut) employed by an expert clinician might place hypokalaemic polymyopathy as the most plausible explanation (top ‘pattern’), even though this is a Tonkinese rather than a Burmese! 24 Because of the multiple potential causes of muscle weakness in the cat, an expert clinician will then engage in System 2 thinking and consider alternative diagnoses (eg, the posture is equally suggestive of hyperaldosteronism in mature adult cats, certain envenomations and myasthenia gravis), as well as confirming the likely top pattern. Confirmation, in this case, would rely on a PCR test for the underlying single nucleotide polymorphism that causes WNK4 kinase deficiency, 25 the enzymatic defect in the kidney which causes the potassium-wasting nephropathy. Courtesy of Chris Simpson

When dealing with new diseases, unusual manifestations of common diseases, multiple disease conditions or diseases in novel breeds (or species), System 1 thinking can struggle because of the lack of valid cues. Even the most skilled clinician can be misled by bias (Figure 7). For this reason, when confronted with unusual presentations, complex problems, or older cats, in which multiple comorbidities can be present, the conscious problem-oriented, data-driven System 2 approach to diagnosis, especially forward reasoning for moving from the general to the specific, is likely to be more successful.

Figure 7.

Figure 7

For pattern recognition to be effective, a disease has to be familiar to the clinician. Is this a case of System 2 thinking nonsensically overriding System 1 thinking?

How do affective states influence clinical reasoning?

Numerous factors influence clinical reasoning. Veterinary practitioners are usually all too aware of when they are tired or distracted by other matters, and either state can contribute negatively, through cognitive error, to both systems of thinking. However, what is perhaps less obvious is that affective states (emotional feelings, traits, mood and sentiments) can influence the two systems of thinking in both positive and negative ways (see box below).


Affective states, along with unknown factors, may also contribute to clinical reasoning in other ways: for example, by influencing when you are sharpest during the day (‘morning person’ vs ‘evening person’; perhaps you are better after that first cup of coffee?). For some people, a ‘eureka’ or ‘light bulb’ moment inexplicably occurs much later in the day. One of the authors has some of his best insights on the drive back from the practice, in the shower or after ‘sleeping on it’.

The way memories of cases are stored in and retrieved from long-term memory – and, in particular, the potential for emotion to be attached to long-term memories (see box on page 41) – can significantly influence clinical reasoning. It is accepted, of course, that long-term memories are a key factor leading to a plausible diagnosis in intuitive System 1 thinking. In analytical System 2 thinking, long-term memories are still important, but are drawn on mainly to support analysis of data through working (short-term) memory.

Whether it be illness in an individual pet affecting an owner, or the risk of disease affecting an entire cattery or boarding facility and related livelihoods, there is no shortage of emotionally charged situations in feline medicine. While as veterinarians we try to ensure that emotional states experienced by owners and their pets do not overly influence our clinical decision-making (and are usually good at detaching ourselves), we also need to be empathetic with owners in order to understand their concerns and needs. Consequently, long-term memories of certain clinical scenarios are often linked with an affective component experienced by the owner or even the patient. Moreover, some veterinarians may be affected by a situation because of its outcome or because of past experiences with similar cases or clients, or even the same client. The feelings experienced by the veterinarian may be subtle or intense depending on their past experiences, present mood and personality, but they still may influence clinical reasoning when retrieving memories.

We are not suggesting that intense happiness, sadness, anger, fear, disgust, surprise or contempt is felt with every case retrieved and utilised in clinical reasoning; in fact many clinically relevant memories appear totally devoid of an affective tag. Rather, we are suggesting that emotional feelings, like all affective states, need to be managed whenever engaging in clinical reasoning, to avoid clinical error.


A useful aspect of emotion, relevant to both System 1 and System 2 thinking, relates to empathy or emotional connection with the owner in the context of history-taking. Empathy can lead to valuable and truthful information being obtained through cooperative dialogue – and we all know how vital a good history is in the pursuit of the correct diagnosis. Empathy relies on a well-developed network of ‘mirror neurons’, which are important in relating to, and learning from, others. 27 For the veterinarian, this involves understanding the problem from the client’s point of view, so that the way forward to solve the patient’s problem is mutually acceptable, in relation to cost, risk and invasiveness. Melding of emotion with rational, factual thinking is mandatory for effective cooperation.

Managing emotions in decision-making is recognised as an important component of emotional intelligence and relies on self-awareness. 28

Can and should Systems 1 and 2 thinking be used in tandem?

We suspect that many practitioners are reluctant, perhaps through peer pressure, to admit that System 1 thinking, especially in the form of simple pattern recognition for routine cases, becomes increasingly important as they get older and more experienced. But isn’t this a natural phenomenon, unless one’s ability to be intuitive is deliberately suppressed? Moreover, we doubt many would disagree with the assertion that the best diagnosticians are those who utilise all their brain in diagnostic reasoning and happily switch between intuitive reasoning and analytical, problem-oriented approaches to diagnosis. Indeed, it has been suggested that fewer mistakes are made in medical diagnosis when both forms of diagnostic reasoning are combined and utilised in interpreting data; in other words, when Systems 1 and 2 thinking are used in tandem. 29

No such research exists for veterinary diagnostic reasoning, but the implication cannot be ignored. And just because we may feel comfortable using predominantly one or other system of thinking, due to the influences of personality, past experiences and present mood, we should not assume that the brain cannot be trained to adapt to using both in tandem (after all, it is plastic!). It is a matter of awareness, confidence, practice and resetting ways of thinking. Not necessarily easy, of course, and it does require effort. The box on page 42 provides our perspective on how novice and expert clinicians might engage in Systems 1 and 2 thinking, and how switching might occur.


Perhaps the key to improved diagnostic acumen is to learn to trust your intuition in System 1 thinking, but understand under what circumstances it might let you down or need to be supported through more methodical, deliberate System 2 thought processes, which utilise evidence-based medicine to improve diagnosis. In other words, don’t be concerned if initially you think you know what the problem is through apparent pattern recognition; but ensure that you back it up with appropriate testing and by actively excluding alternative diagnoses. Perhaps the penultimate word on tandem thinking should be left to Fodor, 30 who stated, ‘Nature has contrived to have it both ways, to get the best out of fast dumb systems and slow contemplative ones, by simply refusing to choose between them. That is, I suppose, the way Nature likes to operate: “I’ll have some of each – one damned thing piled on top of another, and nothing in moderation, ever.”’

The great Sherlock Holmes, the creation of Sir Arthur Conan Doyle, a medical graduate himself, was a maestro at using System 1 and System 2 thinking in tandem to solve cases. He may have engaged heavily in intuitive thinking – like all good detectives – to save time and perhaps to impress; but he was equally at home using a more regimented and sequential analytical approach. In fact he was capable of blending the two systems of thinking perfectly. He allowed the case to dictate his thinking. Sherlock’s advice to all diagnosticians – if he were real and alive today – might well be to use all of your ‘plastic’ brain to solve cases and to continue to encourage hippocampal stem cell proliferation through a hunger for new knowledge!



Figure 8.

Figure 8

Elderly cat with abrupt onset of abnormal behaviour. Note the bilaterally dilated pupils which did not respond to light


Figure 9.

Figure 9

The adrenal mass cranial to the left kidney was most likely an aldosterone-producing adrenal tumour

Final comments

This article has attempted to present what is known about, as well as our views on, clinical reasoning. We acknowledge that this represents just the tip of the iceberg. We also acknowledge that much of the discussion may seem rather esoteric and far removed from the way that you normally operate in clinical practice. Hopefully, however, this will encourage you to think about the way you think about cases. In doing so you may discover whether you are naturally drawn to intuitive System 1 thinking or to analytical, problem-oriented, data-driven System 2 thinking, and you may find that at times you flip back and forth between the two. Most importantly, we ask you to accept that flawed or misleading thinking is a phenomenon shared by every system of thinking, and by all clinicians, whether novice or experienced. The trick is to be aware of this and to try to diminish the impact of flawed thinking on diagnostic reasoning. Of course, telephoning a friend for advice may assist you in overcoming any misguided thinking on your part; but be aware that they too may have similar or different flawed thinking!


Supplemental Material

Click here for Supplementary Video

Richard Malik interviews Paul Canfield, lead author of the JFMS clinical reasoning series. To access this video, please visit: https://player.vimeo.com/video/150981038

Acknowledgments

Richard Malik is supported by the Valentine Charlton Bequest administered by the Centre for Veterinary Education of the University of Sydney. The authors wish to thank Daniel Krochmalik for introducing us to the books of Malcolm Gladwell, especially ‘Blink’, and Ross Gittins and Robert Krochmalik for introducing us to the work of Daniel Kahneman and Amos Tversky. Discussions with Steven Holloway on ‘mini-patterns’ were helpful in crystallising our own view of the importance of pattern recognition and heuristics in elite veterinary diagnosticians.

Footnotes

Funding: The authors received no financial support for the research, authorship and/or publication of this article.

The authors declared no potential conflicts of interest with respect to the research, authorship and/or publication of this article.

Contributor Information

Paul J Canfield, Faculty of Veterinary Science, B14, University of Sydney, NSW 2006, Australia.

Martin L Whitehead, Chipping Norton Veterinary Hospital, Banbury Road, Chipping Norton, Oxon, OX7 5SY, UK.

Robert Johnson, South Penrith Veterinary Clinic, 126 Stafford Street, Penrith, NSW 2750, Australia.

Carolyn R O’Brien, Faculty of Veterinary Science, The University of Melbourne, Parkville, VIC 3152, Australia.

Richard Malik, Centre for Veterinary Education, B22, University of Sydney, NSW 2006, Australia.

References

1. Groopman J. How doctors think. 1st ed. Boston: Houghton Mifflin Company, 2007.

2. Sanders L. Every patient tells a story: medical mysteries and the art of diagnosis. London: Viking – Penguin Books, 2009.

3. Schmidt HG, Norman GR, Boshuizen HPA. A cognitive perspective on medical expertise: theory and implications. Acad Med 1990; 65: 611–621.

4. Kahneman D. Thinking, fast and slow. 1st ed. New York: Farrar, Straus and Giroux, 2011, pp 3–15.

5. Ark TK, Brooks LR, Eva KW. Giving learners the best of both worlds: do clinical teachers need to guard against teaching pattern recognition to novices? Acad Med 2006; 81: 405–409.

6. May SA. Clinical reasoning and case-based decision making: the fundamental challenge to veterinary educators. J Vet Med Educ 2013; 40: 200–209.

7. Evans C, Kakas AC. Hypothetico-deductive reasoning. In: Proceedings of the International Conference on Fifth Generation Computer Systems. ICOT, 1992, pp 546–554.

8. Eysenck MW, Keane MT. Thinking and reasoning. In: Cognitive psychology – a student’s handbook. East Sussex: Psychology Press, 2010, pp 457–458.

9. Canfield P, Malik R. Think about how you think about cases [Editorial]. J Feline Med Surg 2016; 18: 4–6.

10. Popper KR. The logic of scientific discovery. New York: Basic Books, 1959.

11. Popper KR. Conjectures and refutations: the growth of scientific knowledge. New York: Harper Torch, 1965.

12. Hempel CG. Aspects of scientific explanation and other essays in the philosophy of science. New York: The Free Press, 1965.

13. Gladwell M. Blink – the power of thinking without thinking. 1st ed. London: Penguin Books, 2005.

14. Mithen SJ. Thoughtful foragers – a study of prehistoric decision making. Cambridge: Cambridge University Press, 1990.

15. Gill CJ, Sabin L, Schmid CH. Why clinicians are natural Bayesians. Brit Med J 2005; 330: 1080–1083.

16. Herrman N. The creative brain. USA: McGraw-Hill, 1989.

17. Nielsen JA, Zielinski BA, Ferguson MA, et al. An evaluation of the left-brain vs. right-brain hypothesis with resting state functional connectivity magnetic resonance imaging. PLoS One 2013; 8: e71275. DOI: 10.1371/journal.pone.0071275.

18. Carr N. Cognitive load. In: Brockman J (ed). This will make you smarter. London: Transworld Publishers, 2012, pp 116–117.

19. Sweller J. Cognitive load during problem solving: effects on learning. Cognit Sci 1988; 12: 257–285.

20. McKenzie BA. Veterinary clinical decision-making: cognitive biases, external constraints, and strategies for improvement. J Am Vet Med Assoc 2014; 244: 271–276.

21. Yu CH, DiGangi S, Jannasch-Pennell A. The role of abductive reasoning in cognitive-based assessment. Elem Educ Online 2008; 7: 310–322.

22. Charlin B, Boshuizen HPA, Custers EJ, et al. Scripts and clinical reasoning. Med Educ 2007; 41: 1178–1184.

23. Malik R, Church DB, Maddison JE, et al. Three cases of local tetanus. J Small Anim Pract 1989; 30: 469–473.

24. Malik R, Musca FJ, Gunew MN, et al. Periodic hypokalaemic polymyopathy in Burmese and closely related cats: a review including the latest genetic data. J Feline Med Surg 2015; 17: 417–426.

25. Gandolfi B, Gruffydd-Jones T, Malik R, et al. First WNK4-hypokalemia animal model identified by genome-wide association in Burmese cats. PLoS One 2012; 7: 1–9.

26. Ashwell K. The brain book: development, function, disorder, health. New York: Firefly Books, 2012, pp 230–233.

27. Ramachandran VS. The neurons that shaped civilization. In: The tell-tale brain – unlocking the mystery of human nature. London: Windmill Books, 2012, pp 117–135.

28. Goleman D. Emotional intelligence – why it can matter more than IQ. London: Bloomsbury Publishing, 1996, pp 15–25.

29. Greenhalgh T. Intuition and evidence – uneasy bedfellows. Brit J Gen Pract 2002; 52: 395–400.

30. Fodor JA. Précis of The modularity of mind. Behav Brain Sci 1985; 8: 1–5.



Articles from Journal of Feline Medicine and Surgery are provided here courtesy of SAGE Publications
