Editorial
European Journal of Trauma & Dissociation. 2021 Apr 20;5(2):100222. doi: 10.1016/j.ejtd.2021.100222

COVID-19 Has Turned Medical Science Into a Belief System

Cyril Tarquinio 1, Yann Auxemery 1, Jenny Rydberg 1
PMCID: PMC9767387  PMID: 37521946

Since the first half of 2020, clinical and epidemiological COVID-19 research has been marked by a series of questionable cases concerning the effectiveness and safety of hydroxychloroquine, remdesivir and, more recently, vaccines, which have caused turmoil across society and the media. Accompanied by a maelstrom of heated reactions, all three of these “findings” continue to make national and international headlines as the world collectively becomes embroiled in debate over the validity of the latest research. In France and the world over, divisive and sensational coverage has not only widened an existing gap between society and science but, perhaps worse, appears to have split the scientific community itself (in a universe that famously values theoretical and scientific debate as a fundamental precursor to discovery and a sign of intellectual health, the sudden outpouring of stigmatizations and of sweeping, judgmental comments we seem to be witnessing is jarring).

But should this shock us? No, it should not. Science, certainly medical science, constitutes a systematically organized body of knowledge. Scientific knowledge is not truth in and of itself, but is constructed, and through the critical process of validation, theories and research methodologies evolve: the more rigorous the scientific evidence, the more persuasive the science. Science relies on theoretical models that are proposed and tested; when a model does not fit the observed data, the theory is refined. Scientific theories are thus transitory by nature, essentially hypotheses that change over time. What was true yesterday can prove false tomorrow, which is both the strength and the weakness of science. Strength, in that theories bring together many facts and hypotheses gathered over time (which explains the ever-present qualifiers researchers use to preface findings: “based on current knowledge”, “all else being equal”, etc.). Weakness, because the common man has such a need for certainties and truths that in periods of crisis, such as the one we are experiencing now, dogma is fundamentally more reassuring.

We might consider the latter a universe of pseudo-knowledge, more often than not of beliefs: acknowledged in principle, little discussed, obvious, full of common sense, easy to understand, able to explain whatever we want explained, diffused among social groups, relayed by social networks, and above all safe. Faced with this spectrum of reassuring beliefs, we all become Saint Thomas, who famously only believed what he saw with his own eyes. Except that if, like Saint Thomas, we only believe what we see, we only see what we choose to look at and (surely) we only look at what we want to see. Our beliefs very much guide this choice and organize our reification of the world. If we are convinced that women are bad drivers, it is fairly certain that we will see only what conforms to that belief, regardless of the fact that statistics all over the world indicate that men have more traffic accidents.

Cognitive and social psychology can help us understand what drives us to malfunction, so to speak. To do this, however, it is important to get our heads around the fact that our brains regularly get things wrong, probably more often than we think. Enter the elegant concept of heuristics, introduced by Herbert Simon in the 1950s. Speaking to the decision-making process, Simon suggested that while we might strive to make rational choices, our judgment is subject to certain cognitive limitations. As such, instead of systematically exploring all possibilities to arrive at an optimal solution, we consider only part of the parameters of a given problem. Heuristics are approximate, intuitive rules that provide answers that feel satisfactory to the subject without being optimal, and that may even be wrong. These mental shortcuts may well be valid, but in certain situations they can also lead to errors and thus constitute cognitive biases. From this perspective, cognitive biases produce false judgments which feed irrational choices. Importantly, however, a cognitive bias is not an error in itself; an error is merely a possible consequence of a cognitive bias.
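To make Simon's distinction concrete, the short sketch below (our own toy illustration, not drawn from the editorial; the options and utility scores are invented) contrasts a fully exhaustive choice with a satisficing heuristic that stops at the first "good enough" option.

    # Toy illustration of Herbert Simon's "satisficing" idea (Python).
    # A bounded-rational agent does not scan every option for the optimum;
    # it accepts the first option that clears an aspiration threshold.

    def exhaustive_choice(options, score):
        """Fully rational: evaluate every option and return the best one."""
        return max(options, key=score)

    def satisficing_choice(options, score, aspiration):
        """Heuristic: return the first option that is 'good enough'.
        Fast and usually adequate, but it can miss the optimum."""
        for option in options:
            if score(option) >= aspiration:
                return option
        return options[-1]  # nothing cleared the bar: keep the last one examined

    if __name__ == "__main__":
        options = ["A", "B", "C", "D"]                     # hypothetical choices
        utility = {"A": 0.55, "B": 0.70, "C": 0.95, "D": 0.60}.get
        print(exhaustive_choice(options, utility))         # -> C, the optimum
        print(satisficing_choice(options, utility, 0.65))  # -> B, merely satisfactory

The satisficing rule is not irrational in itself; it becomes a source of bias only when the shortcut is applied in situations where the first "good enough" answer is in fact the wrong one.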

This brings us to focus on two cognitive biases that are particularly enlightening with regard to perceptions of health and medicine during this period of unprecedented uncertainty due to the COVID-19 pandemic: the Dunning-Kruger Effect and the Framing Effect.

The Framing Effect was first described in 1981 by two important scientists, Amos Tversky and Daniel Kahneman. They showed that our decisions are influenced by the way information is presented rather than by the nature of the facts themselves. This top-down influence is compelling evidence of irrationality in human decision making, since the information driving a decision can be essentially the same yet be presented with either positive or negative connotations.

In their 1981 experiment using the “Asian disease” problem (as incredible as this reads today), Tversky and Kahneman explored how different phrasing affected participants’ responses to a choice in a hypothetical life and death situation: “the U.S. is preparing for the outbreak of an unusual Asian disease, which is expected to kill 600 people”. Two alternative programs to combat the disease were then proposed to a first group of participants, in a positive frame (n = 152):

  • If Program A is adopted, 200 people will be saved.

  • If Program B is adopted, there is a 1/3 probability that 600 people will be saved, and a 2/3 probability that no people will be saved.

Participants responding to this positive frame chiefly chose Program A, or the certain option (72%).

In a second phase, the authors proposed another scenario to other participants (n = 155). Here again, the problem was an imminent Asian epidemic that, according to available estimates, could potentially kill 600 people. Again, two programs were proposed, in a negative frame:

  • If Program C is adopted, 400 people will die.

  • If Program D is adopted, there is a 1/3 probability that nobody will die, and a 2/3 probability that 600 people will die.

In this scenario, as many as 78% of participants chose Program D, the risky option.

This amazingly timely example shows that changing the way information is presented changes people's perceptions, which may in turn change their risk behaviors, even though the information being conveyed is the same. Thus, when given a choice between two options framed as gains (positive framing, “people will be saved”), subjects tend to be risk-averse. Conversely, when asked to choose between two options framed as losses (negative framing, “people will die”), the appetite for risk increases. In essence, the pain of losing is psychologically about twice as powerful as the pleasure of gaining, and as such, people are more willing to take risks to avoid a loss than to secure a gain.
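The two frames are in fact arithmetically identical; the short calculation below (our own illustration, using only the numbers reported above) shows that each program leaves the same expected number of lives saved, so only the wording differs.

    # Expected outcomes of the "Asian disease" programs (Python).
    TOTAL = 600  # people expected to die if nothing is done

    # Positive frame (lives saved)
    saved_A = 200
    saved_B = (1/3) * 600 + (2/3) * 0            # expected lives saved

    # Negative frame (lives lost), converted to lives saved
    saved_C = TOTAL - 400                        # "400 people will die"
    saved_D = (1/3) * (TOTAL - 0) + (2/3) * (TOTAL - 600)

    print(saved_A, saved_B)   # 200 200.0
    print(saved_C, saved_D)   # 200 200.0

Program A is equivalent to Program C, and Program B to Program D, yet 72% of participants chose A while 78% chose D: the frame, not the arithmetic, drove the choice.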

Does this strike a familiar chord? It should. The stream of information we currently face is largely negatively framed, which reduces our aversion to uncertain solutions. This is compounded by the tendency of individuals to react to imminent risks (COVID-19) while ignoring potential long-term risks, namely the side effects of treatments. Thus, treating all patients indiscriminately with a drug whose efficacy is debatable and whose side effects on COVID-19 patients are unknown becomes an acceptable option. Our choices are not entirely rational, far from it; Tversky and Kahneman could well have predicted in the 1980s what is happening today with hydroxychloroquine or with vaccines. Let's not forget that humans are anything but rational, however well we convince ourselves of the opposite.

The Dunning-Kruger effect, described by the psychologists David Dunning and Justin Kruger in 1999, is a cognitive bias that can be summarized in a simple question: “What makes incompetent people so confident?” The two American psychologists first presented it in a well-known American psychology journal, elaborating on the concept as a cognitive bias in which poor performers greatly overestimate their abilities. The theory was originally inspired by the criminal case of McArthur Wheeler, a Pittsburgh, Pennsylvania resident who set out one evening in 1995 to rob two banks. Unluckily for him, he was arrested the same night owing to his fundamental misunderstanding of the chemical properties of lemon juice, which he believed would act like invisible ink and render him invisible to the surveillance cameras. Sadly for him, this was not the case. Intrigued, Dunning and Kruger began investigating how this person could have made such a bad decision, and in 1999 they published their study “Unskilled and Unaware of It: How Difficulties in Recognizing One's Own Incompetence Lead to Inflated Self-Assessments”.

According to these authors, the problem is not simply that less competent people have trouble recognizing their shortcomings. What they found was that the bias results from an internal illusion, the psychological phenomenon of illusory superiority, which compounds the problem: people lacking ability or knowledge are nonetheless sure that they are competent. To bring this phenomenon into focus, surely each of us has listened to someone talk about a subject with an authority or expertise they clearly do not have (for example, the biological and cellular mechanisms underlying our bodies' reactions to COVID-19, how hydroxychloroquine works, the mechanism of messenger RNA, how research protocols are constructed, or the relevance of this or that vaccine compared with others). In contrast, Dunning and Kruger observed that the most qualified people tend to underestimate their abilities and competence levels and to think that tasks that are easy for them are just as easy for others.

In Graph 1, the x-axis is our level of expertise about a subject and the y-axis is our confidence in our judgments about that subject. The Dunning-Kruger effect accounts for the fact that this line slopes downwards: the less expert we are, the more confident we are in our judgments.

Graph 1. [Figure: confidence in one's judgments plotted against level of expertise about a subject.]
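Since the figure itself does not survive here, the following sketch (purely schematic; the values are invented and only reproduce the downward slope described in the text) re-draws the relationship for Graph 1.

    # Schematic re-drawing of Graph 1 (Python, matplotlib); values are illustrative only.
    import numpy as np
    import matplotlib.pyplot as plt

    expertise = np.linspace(0, 1, 100)       # 0 = novice, 1 = expert
    confidence = 0.9 - 0.6 * expertise       # illustrative downward slope

    plt.plot(expertise, confidence)
    plt.xlabel("Level of expertise about a subject")
    plt.ylabel("Confidence in one's judgments")
    plt.title("Schematic of the relationship described for Graph 1")
    plt.ylim(0, 1)
    plt.show()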

In other words, and to paraphrase Charles Darwin, “ignorance more frequently begets confidence than does knowledge”. Simplified to the extreme, Dunning and Kruger's theory can be understood as follows: those who are less than competent judge themselves to be better than they are.

For example, as we are currently witnessing, it is the people who are the least knowledgeable about viruses (i.e., you and me) who have the most confidence in their judgments, going so far as to think that their opinions are more reliable than those of the experts. We are the ones who believe (and will defend our right to believe) that hydroxychloroquine or a particular vaccine will save us (which may or may not be true). Thus, overestimating our own rationality acts as an obstacle to knowledge.

For more than a year now, the cumulative effect of the various controversies seems to have done little more than add confusion to the debate, and this is largely due to the Dunning-Kruger effect. We overestimate our own competence in the medical field, which leads us to self-medicate (and to take other risks), to queue up at the door of a much-mediatized hospital department or, conversely, to refuse to take part in clinical trials when we are not ready to test the effects of hydroxychloroquine, or to cancel our vaccination appointment because we have come across information about vaccines that feels contradictory.

Paradoxically, when a person starts to become competent, he or she quickly discovers the extent of his or her ignorance, which results in a collapse of confidence. Hence the modesty of researchers and the precautions they take when discussing or reporting their findings, particularly with regard to research currently being conducted on the multiple facets of COVID-19.

The dissemination, chaotic nature and poor quality of much of what we are seeing in clinical and epidemiological COVID-19 research is starting to resemble the virus itself: unpredictable. And that is without mentioning the risk of increasing conflicts of interest and fraud that could well follow the growing trend, over recent years, of giving ever more weight to the financial and entrepreneurial interests of research. But we can be sure that all clinical and epidemiological findings will be subjected to fiercely critical review. “There is no epistemology specific to this or that type of disease: whatever the field of disease, the methodological principles, procedures and criteria for validating knowledge remain the same. Nor is there a ‘crisis epistemology’ that would justify conducting research intuitively, without rules or rigor, according to personal or institutional interests or the demands of the media and social networks” (Coste, Bizouarn, & Leplège, 2020).

It is this very context that motivated us to organize a special issue of the journal focusing on the impacts of COVID-19 on psychological well-being. What is true for public health will also likely be applicable to clinical psychology and we will need time to measure the relevance of work either produced too rapidly or motivated purely by competition. Regardless, the articles we selected for this issue have all been analyzed in painstaking detail and carefully reviewed. Our hope is that this work will help us to better understand the clinical situation of people suffering from the consequences of the epidemic.

Reference

  1. Coste J., Bizouarn P., Leplège A. The troubled epistemology of the first wave of Covid-19 research. Revue d'Épidémiologie et de Santé Publique. 2020;68(5):269–271. doi: 10.1016/j.respe.2020.09.001.
