Table 8. Examples of measures for addressing cognitive vulnerabilities as a way to combat misinformation
| Vulnerability | Defense | Example |
|---|---|---|
| Confirmation bias & Repeated exposure | Digital platforms & Computer science | Social media and web search engines could adapt their algorithms to expose users to a greater diversity of narratives, reducing filter bubbles. We suggest using stance detection methods to identify divergent texts and redesigning digital platforms’ interfaces to prioritize a balance of opinions (a minimal stance-detection sketch follows this table). A new presentation format could, for example, show conflicting viewpoints side by side when displaying disputed stories. |
| Motivated reasoning & Biased assimilation | Education & Journalism | Literacy approaches could be used to make readers aware of their cognitive biases, encouraging critical thinking and engagement with a broad range of content. Educational strategies should also focus on teaching readers how to differentiate factual texts from opinionated material and on raising awareness of bad journalistic practices, such as the use of clickbait, personal attacks, or fallacies. |
| Hostile media effect | Journalism & Education | News outlets could represent significant views fairly, proportionately, and, as far as possible, without editorial bias [19]. Ethical commitment must guide journalistic conduct, and, more than ever, these professionals must act as gatekeepers, investigating and denouncing individuals and institutions that manufacture untruths. |
| Denial transparency & Backfire effect | Computer science, Education & Journalism | Computer scientists should keep in mind that most individuals do not understand how machine learning models work. Thus, rather than presenting opaque verdicts on news veracity, computational solutions should offer explainable and interpretable misinformation indicators [136] (an illustrative explainability sketch follows this table). The best way to present the outcomes of the decision-making process should be discussed with educators, journalists, psychologists, social scientists, and UX experts. |
| Group polarization | Digital platforms, Computer science & Education | Social networks can use machine learning algorithms to identify filter bubbles and monitor the visual or textual material shared in these groups (a community-detection sketch follows this table). Once intensified polarization is detected, platforms should deploy educational campaigns that promote dialogue while considering the specificities of each group. |
| Emotion | Digital platforms & Governmental solutions | Social media platforms could regulate and curb hate speech by limiting the influence of polarizing content and restricting the exposure and reach of hateful material. Digital platforms can be even more proactive in combating this type of practice, alerting legal authorities to crimes of slander and defamation and providing legal evidence when necessary. To this end, we recommend the creation and/or expansion of compliance programs in private companies, observing local and extraterritorial legislation. |
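
To make the stance-detection measure in the first row concrete, the minimal sketch below frames it as supervised text-pair classification over (claim, article) pairs with agree/disagree/discuss/unrelated labels, in the spirit of the Fake News Challenge formulation. The toy examples, labels, and TF-IDF baseline are illustrative assumptions, not a prescription from the table above.

```python
# Minimal stance-detection sketch: classify the stance of an article excerpt
# toward a claim (agree / disagree / discuss / unrelated).
# The tiny training set below is a placeholder, not real data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each instance concatenates a claim and an article excerpt with a separator.
train_texts = [
    "Vaccine X causes illness. [SEP] Clinical trials found no link between vaccine X and illness.",
    "Vaccine X causes illness. [SEP] A new report claims vaccine X is behind a rise in illness.",
    "Vaccine X causes illness. [SEP] Experts are still debating the evidence on vaccine X.",
    "Vaccine X causes illness. [SEP] The city council approved a new budget for road repairs.",
]
train_labels = ["disagree", "agree", "discuss", "unrelated"]

# TF-IDF + logistic regression is a deliberately simple baseline;
# a deployed system would rely on larger corpora and stronger models.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(train_texts, train_labels)

# Predicted stances let an interface group agreeing and disagreeing
# coverage of a disputed story and show them side by side.
new_pair = "Vaccine X causes illness. [SEP] Regulators reaffirmed the safety of vaccine X."
print(model.predict([new_pair])[0])
```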
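One way to read the "explainable and interpretable misinformation indicators" recommendation is to return the evidence behind a score rather than a bare verdict. The sketch below assumes a simple linear classifier over word counts, a stand-in for whatever detector a platform actually runs, and lists the terms that pushed a specific text toward the "misleading" class.

```python
# Sketch of interpretable indicators: instead of an opaque verdict,
# report which word features pushed a text toward the "misleading" class.
# The tiny training set and labels are placeholders for a real corpus.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

texts = [
    "shocking miracle cure doctors do not want you to know",
    "you will not believe this one weird trick finally exposed",
    "health agency publishes peer reviewed study results",
    "official report summarizes the committee findings",
]
labels = [1, 1, 0, 0]  # 1 = misleading, 0 = reliable (illustrative)

vec = CountVectorizer()
X = vec.fit_transform(texts)
clf = LogisticRegression(max_iter=1000).fit(X, labels)

def explain(text, top_k=3):
    """Return the words contributing most to the 'misleading' score."""
    row = vec.transform([text]).toarray()[0]
    contributions = row * clf.coef_[0]          # per-word contribution
    order = np.argsort(contributions)[::-1][:top_k]
    vocab = vec.get_feature_names_out()
    return [(vocab[i], round(float(contributions[i]), 3))
            for i in order if contributions[i] > 0]

print(explain("shocking trick doctors do not want you to know"))
```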
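As a rough illustration of the filter-bubble monitoring idea, the sketch below builds a small user-interaction graph and uses community detection with modularity as a crude polarization signal. The edges, the 0.4 threshold, and the suggested follow-up are assumptions chosen for illustration, not a specification of how platforms should implement this.

```python
# Sketch: detect tightly separated user communities in an interaction graph
# and use modularity as a rough polarization signal.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

G = nx.Graph()
# Two dense clusters of users with a single weak bridge between them.
G.add_edges_from([
    ("a1", "a2"), ("a1", "a3"), ("a2", "a3"), ("a3", "a4"), ("a1", "a4"),
    ("b1", "b2"), ("b1", "b3"), ("b2", "b3"), ("b3", "b4"), ("b1", "b4"),
    ("a4", "b4"),  # lone cross-group interaction
])

communities = greedy_modularity_communities(G)
score = modularity(G, communities)
print("communities:", [sorted(c) for c in communities])
print(f"modularity: {score:.2f}")

# High modularity means users mostly interact within their own cluster;
# a platform could respond with dialogue-promoting campaigns tailored to
# the content circulating in each group (threshold chosen arbitrarily here).
if score > 0.4:
    print("possible filter bubbles detected")
```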