Abstract
The spread of misinformation on social media has become a severe threat to public interests. For example, several incidents of public health concern arose from social media misinformation during the COVID-19 pandemic. Against the backdrop of the emerging IS research focus on social media and the impact of misinformation during recent events such as the COVID-19 pandemic, the Australian bushfires, and the US elections, we identified disaster, health, and politics as the specific domains for a research review on social media misinformation. Following a systematic review process, we chose 28 articles relevant to the three themes for synthesis. We discuss the characteristics of misinformation in the three domains, the methodologies that have been used by researchers, and the theories used to study misinformation. We adapt an Antecedents-Misinformation-Outcomes (AMIO) framework for integrating key concepts from prior studies. Based on the AMIO framework, we further discuss the inter-relationships among these concepts and the strategies to control the spread of misinformation on social media. Ours is one of the early reviews focusing on social media misinformation research, particularly on three socially sensitive domains: disaster, health, and politics. This review contributes to the emerging body of knowledge in Data Science and social media and informs strategies to combat social media misinformation.
Keywords: Misinformation, Information disorder, Social media, Systematic literature review
Introduction
Information disorder in social media
Rumors, misinformation, disinformation, and mal-information are common challenges confronting media of all types. The problem is, however, worse on digital media, especially on social media platforms. Ease of access and use, the speed of information diffusion, and the difficulty of correcting false information make controlling undesirable information a formidable task [1]. Alongside these challenges, social media has also been highly influential in spreading timely and useful information. For example, the recent #BlackLivesMatter movement, which united the solidarity of like-minded people across the world after George Floyd was killed by police brutality, was enabled by social media, as were the 2011 Arab Spring in the Middle East and the 2017 #MeToo movement against sexual harassment and abuse [2, 3]. Although scholars have addressed information disorder on social media, syntheses of the insights from these studies are rare.
Information that is false or misleading and spreads unintentionally is known as misinformation [4]. Prior research on misinformation in social media has highlighted various characteristics of misinformation and interventions thereof in different contexts. The issue of misinformation has become dominant with the rise of social media, attracting scholarly attention particularly after the 2016 US Presidential election, when misinformation apparently influenced the election results [5]. Misinformation was also listed as one of the global risks by the World Economic Forum [6]. A related term that is often confused with misinformation is 'disinformation': information that is false or misleading and, unlike misinformation, spreads intentionally. Disinformation campaigns are often seen in a political context, where state actors create them for political gains. In India, during the initial stage of COVID-19, there was reportedly a surge in fake news linking the virus outbreak to a particular religious group. This disinformation gained media attention as it was widely shared on social media platforms. As a result of the targeting, it eventually translated into physical violence and discriminatory treatment against members of the community in some Indian states [7]. 'Rumors' and 'fake news' are further terms related to misinformation. Rumors are unverified information or statements circulated with uncertainty, and fake news is misinformation distributed in an official news format. Source ambiguity, personal involvement, confirmation bias, and social ties are some of the rumor-causing factors. Yet another related term, mal-information, is accurate information that is used out of context to spread hatred or abuse against a person or a particular group. Our review focuses on misinformation that is spread through social media platforms. The words 'rumor' and 'misinformation' are used interchangeably in this paper. Further, we identify factors that cause misinformation based on a systematic review of prior studies.
Ours is one of the early attempts to review social media research on misinformation. This review focuses on the three sensitive domains of disaster, health, and politics, setting three objectives: (a) to analyze previous studies to understand the impact of misinformation on the three domains, (b) to identify theoretical perspectives used to examine the spread of misinformation on social media, and (c) to develop a framework to study key concepts and their inter-relationships emerging from prior studies. We chose these specific domains because the impact of misinformation in them, in terms of both speed of spread and scale of influence, is high and detrimental to the public and governments. To the best of our knowledge, reviews of the literature on social media misinformation themes are relatively scanty. This review contributes to an emerging body of knowledge in Data Science and informs efforts to combat social media misinformation. Data Science is an interdisciplinary area that incorporates fields such as statistics, management, and sociology to study data and create knowledge from it [8]. This review will also inform future studies that aim to evaluate and compare patterns of misinformation on sensitive themes of social relevance, such as disaster, health, and politics.
The paper is structured as follows. The first section introduces misinformation in the social media context. In Sect. 2, we provide a brief overview of prior research on misinformation and social media. Section 3 describes the research methodology, including details of the literature search and selection process. Section 4 discusses the analysis of the spread of misinformation on social media based on the three themes of disaster, health, and politics, and presents the review findings: the current state of research, theoretical foundations, determinants of misinformation on social media platforms, and strategies to control the spread of misinformation. Section 5 concludes with the implications and limitations of the paper.
Social media and spread of misinformation
Misinformation arises in uncertain contexts, when people are confronted with a scarcity of the information they need. During unforeseen circumstances, the affected individual or community experiences nervousness or anxiety, and anxiety is one of the primary reasons behind the spread of misinformation. To overcome this tension, people tend to gather information from sources such as mainstream media and official government social media handles to verify the information they have received. When they fail to receive information from official sources, they collect related information from their peer circles or other informal sources, which helps them control social tension [9]. Furthermore, in an emergency context, misinformation can help community members reach a common understanding of the uncertain situation.
The echo chamber of social media
Social media has increasingly grown in power and influence and has acted as a medium to accelerate sociopolitical movements. Network effects enhance participation in social media platforms, which in turn spread information (good or bad) at a faster pace than traditional media. Furthermore, owing to a massive surge in online content consumption, primarily through social media, both business organizations and political parties have begun to share content that is ambiguous or fake to influence online users and their decisions for financial and political gains [9, 10]. On the other hand, people often approach social media with a hedonic mindset, which reduces their tendency to verify the information they receive [9]. Repeated exposure to content that coincides with pre-existing beliefs increases the believability and shareability of that content. This process, known as the echo-chamber effect [11], is fueled by confirmation bias: the tendency of a person to favor information that reinforces pre-existing beliefs and to neglect opposing perspectives and viewpoints.
Platforms’ structure and algorithms also play an essential role in spreading misinformation. Tiwana et al. [12] define platform architecture as ‘a conceptual blueprint that describes how the ecosystem is partitioned into a relatively stable platform and a complementary set of modules that are encouraged to vary, and the design rules binding on both’. The business models of these platforms are based on maximizing user engagement. In the case of Facebook or Twitter, for example, a user's feed is curated around existing beliefs and preferences, serving similar content and thus contributing to the echo-chamber effect.
Platform architecture makes the transmission and retransmission of misinformation easier [12, 13]. For instance, WhatsApp has a one-touch forward option that enables users to forward messages to multiple users simultaneously. Earlier, a WhatsApp user could forward a message to 250 groups or users at a time; as a measure for controlling the spread of misinformation, this was limited to five in 2019. WhatsApp claimed that globally this restriction reduced message forwarding by 25% [14]. Apart from platform politics, users also play an essential role in creating or distributing misinformation. In a disaster context, people tend to share misinformation based on their subjective feelings [15].
Misinformation has the power to influence the decisions of its audience and can change a citizen's approach toward a topic or subject. The anti-vaccine movement on Twitter during the 2015 measles (a highly communicable disease) outbreak in Disneyland, California, serves as a good example: the movement created conspiracy theories and mistrust of the state, which increased the vaccine refusal rate [16]. Misinformation can even influence the election of governments by manipulating citizens' political attitudes, as seen in the 2016 US and 2017 French elections [17]. Of late, people rely heavily on Twitter and Facebook for the latest news from mainstream media [18].
Combating misinformation on social media has been a challenging task for governments in several countries. As social media influences elections [17] and health campaigns (such as vaccination), governments and international agencies demand that social media companies take the actions necessary to combat misinformation [13, 15]. Platforms began to regulate bots that were used to spread misinformation. Facebook announced changes to its algorithms to combat misinformation, down-ranking posts flagged by its fact-checkers, which reduces the popularity of the post or page [17]. However, misinformation has become a complicated issue due to the growth of new users and the emergence of new social media platforms. Jang et al. [19] suggested two approaches other than governmental regulation to control misinformation: literacy and correction. The literacy approach proposes educating users to increase their cognitive ability to differentiate misinformation from information. The corrective approach provides more fact-checking facilities for users, with warnings issued against potentially fabricated content based on crowdsourcing. Both approaches have limitations: the literacy approach has attracted criticism for transferring responsibility for the spread of misinformation to citizens, and the corrective approach will have only a limited impact as the volume of fabricated content escalates [19–21].
An overview of the literature on misinformation reveals that most investigations focus on examining methods to combat it. Social media platforms are still developing new tools and techniques to mitigate misinformation on their platforms; this calls for research to understand their strategies.
Review method
This research followed a systematic literature review process. The study employed a structured approach based on Webster and Watson's guidelines [22] to identify relevant literature on the spread of misinformation. These guidelines helped in maintaining a quality standard while selecting the literature for review. The initial stage of the study involved exploring research papers from relevant databases to understand the volume and availability of research articles. We extended the literature search to interdisciplinary databases as well, gathering articles from Web of Science, ACM Digital Library, AIS Electronic Library, EBSCOhost Business Source Premier, ScienceDirect, Scopus, and SpringerLink. Apart from this, a manual search was performed in the Information Systems (IS) scholars' basket of journals [23] to ensure we did not miss any articles from these journals. We also gave preference to articles with a Data Science or Information Systems background. The systematic review process began with a search using predefined keywords (Fig. 2). We identified related terms such as 'misinformation', 'rumors', 'spread', and 'social media', along with their combinations, for the search process. The keyword search covered the title, abstract, and keyword list. The literature search was conducted in April 2020; we later revisited the literature in December 2021 to include the latest publications from 2020 to 2021.
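As an illustration of this step, the minimal Python sketch below generates Scopus-style title-abstract-keyword search strings from the review's keyword combinations. It is purely illustrative: the authors' actual query strings are not reproduced here, and the exact syntax differs across the databases listed above.

```python
# Illustrative sketch only: building boolean search strings from the
# review's predefined keywords. TITLE-ABS-KEY is Scopus-style syntax;
# other databases use different query languages.
from itertools import product

misinfo_terms = ["misinformation", "rumors"]
context_terms = ["social media", "spread"]

queries = [
    f'TITLE-ABS-KEY("{m}" AND "{c}")'
    for m, c in product(misinfo_terms, context_terms)
]
for q in queries:
    print(q)
# e.g. TITLE-ABS-KEY("misinformation" AND "social media")
```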
Scholarly discussion of 'misinformation and social media' began to appear in research after 2008. Later, in 2010, the topic gained more attention when Twitter bots were used for spreading fake news about the replacement of a US Senator [24]. Hate campaigns and fake-follower activities were growing simultaneously during that period. As evident from Fig. 1, which shows the number of articles on misinformation published between 2005 and 2021 in three databases (Scopus, Springer, and EBSCO), academic engagement with misinformation gained further impetus after the 2016 US Presidential election, when social media platforms had apparently influenced the election [20].
As Data Science is an interdisciplinary field, the focus of our literature review goes beyond disciplinary boundaries. In particular, we focused on the three domains of disaster, health, and politics. This thematic focus has two underlying reasons: (a) misinformation spread through social media has its most damaging effects in these three domains, and (b) our selection criteria in the systematic review ultimately yielded research papers related to these three domains. The review excluded platforms designed for professional and business users, such as LinkedIn and Behance. A rationale for the choice of these themes is discussed in the next section.
Inclusion–exclusion criteria
Figure 2 depicts the systematic review process followed in this study. Our preliminary search retrieved 2148 records from the databases; all these articles were gathered onto a spreadsheet, which was manually cross-checked against the journals linked to the articles. The inclusion criteria were: studies published during 2005–2021, published in English, published in peer-reviewed journals, meeting our journal-rating threshold, and relevant to misinformation. We excluded reviews, theses, dissertations, and editorials, as well as articles on misinformation not related to social media. To draw the best from these articles, we selected articles from top journals: those rated above three on the ABS rating, and A*, A, or B on the ABDC rating. This process, while ensuring the quality of papers, narrowed the scope of the study to 643 articles of acceptable quality. We did not perform backward or forward citation tracking on references. During this process, duplicate records were also identified and removed. Further screening of articles based on title, abstract, and full text (wherever necessary) brought the number down to 207 articles.
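The screening logic described above can be summarized in the following sketch. The record fields and thresholds are our hedged reading of the stated criteria; the actual screening was performed manually on a spreadsheet.

```python
# Minimal sketch (hypothetical record fields) of the inclusion-exclusion
# criteria applied to the retrieved records.
from dataclasses import dataclass

@dataclass
class Record:
    title: str
    year: int
    language: str
    peer_reviewed: bool
    doc_type: str       # "article", "review", "thesis", "editorial", ...
    abs_rating: int     # ABS journal rating; 0 if unrated
    abdc_rating: str    # "A*", "A", "B", "C", or "" if unrated

def include(record: Record) -> bool:
    """Apply the review's stated inclusion-exclusion criteria to one record."""
    return (
        2005 <= record.year <= 2021
        and record.language == "English"
        and record.peer_reviewed
        and record.doc_type == "article"  # excludes reviews, theses, editorials
        and (record.abs_rating > 3 or record.abdc_rating in {"A*", "A", "B"})
    )

# In practice, the 2148 retrieved records would populate this list.
records: list[Record] = []
screened = [r for r in records if include(r)]
```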
Further screening based on the three themes reduced the focus to 89 articles. We conducted a full-text analysis of these 89 articles. We further excluded articles that had not considered misinformation as a central theme and finally arrived at 28 articles for detailed review (Table 1).
Table 1.
Sl. no. | Year | Author | Theory | Theme | Platform and method
---|---|---|---|---|---
1 | 2013 | Oh et al. [9] | Rumor theory | Disaster | Twitter, text mining
2 | 2014 | Liu et al. [13] | Rumor theory | Disaster | Twitter, text mining
3 | 2018 | Jang et al. [19] | Nil | Politics | Twitter, text mining
4 | 2018 | Oh et al. [15] | Rumor theory | Disaster | Twitter & Facebook, survey method
5 | 2017 | Abdullah et al. [25] | Nil | Disaster | Twitter, text mining
6 | 2015 | Lee et al. [26] | Diffusion theory | Disaster | Twitter, text mining
7 | 2018 | Mondal et al. [27] | Nil | Disaster | Twitter, text mining
8 | 2016 | Chua et al. [28] | Third person effect | Disaster | Twitter, text mining
9 | 2015 | Bode and Vraga [29] | Nil | Health | Facebook, experimental method
10 | 2016 | Simon et al. [30] | Nil | Disaster | WhatsApp, survey method
11 | 2018 | Ghenai and Mejova [31] | Nil | Health | Twitter, mixed method
12 | 2017 | Chua and Banerjee [32] | Nil | Health | Twitter & Facebook, experimental method
13 | 2017 | Kou et al. [33] | Nil | Health | Reddit, mixed method
14 | 2019 | Gu and Hong [34] | Nil | Health | WeChat, experimental method
15 | 2017 | Bode and Vraga [35] | Nil | Health | Facebook, experimental method
16 | 2019 | Kim et al. [36] | Reputation theory | Politics | Facebook, experimental method
17 | 2018 | Chua and Banerjee [37] | Rumor theory | Health | Social media, experimental method
18 | 2018 | Murungi et al. [38] | Rhetorical theory | Politics | Social media, case study method
19 | 2020 | Pennycook et al. [39] | Nil | Politics | Facebook, experimental method
20 | 2019 | Garrett and Poulsen [40] | Nil | Politics | Facebook, experimental method
21 | 2017 | Shin and Thorson [41] | Nil | Politics | Twitter, text mining
22 | 2019 | Kim and Dennis [42] | Nil | Politics | Facebook, experimental method
23 | 2017 | Hazel Kwon and Raghav Rao [43] | Rumor theory | Politics | Social media, survey method
24 | 2019 | Paek and Hove [44] | Situational crisis communication theory (SCCT) | Disaster | Social media, experimental method
25 | 2019 | Moravec et al. [45] | Nil | Politics | Facebook, experimental method
26 | 2021 | Madraki et al. [46] | Nil | Politics | Social media, opportunistic sampling
27 | 2020 | Shahi et al. [47] | Nil | Health | Twitter, exploratory study
28 | 2021 | Otala et al. [48] | Nil | Politics | Parler and Twitter, case study method
The selected studies used a variety of research methods to examine misinformation on social media. Experiments and text mining of tweets emerged as the most frequent research methods: 11 studies used experimental methods and eight used Twitter data analyses. Apart from these, three used survey methods, two each used mixed methods and case study methods, and one each used opportunistic sampling and an exploratory study. The literature selected for review includes nine articles on disaster, eight on healthcare, and eleven on politics. We preferred papers based on three major social media platforms, Twitter, Facebook, and WhatsApp, as these have the highest transmission rates and most active users [25] and are the most likely platforms for misinformation propagation.
Coding procedure
Initially, both authors manually coded the articles individually by reading the full text of each article, and identified the three themes: disaster, health, and politics. We used an inductive coding approach to derive codes from the data. The intercoder reliability rate between the authors was 82.1%. Disagreements about which theme a few papers fell under were discussed and resolved. Later, we used NVivo, a qualitative data analysis software, to encode and categorize the themes from the unstructured data. The codes that emerged from the articles were categorized into sub-themes and then attached to the main themes: disaster, health, and politics. NVivo produced a ranked list of codes based on frequency of occurrence ("Appendix"). An intercoder reliability check was then completed by an external research scholar from a different area of expertise. The external coder agreed on 26 of the 28 articles (92.8%), which indicates a high level of intercoder reliability [49]. The independent researcher's disagreement over the codes for two articles was discussed with the authors, and a consensus was reached.
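For readers who wish to reproduce the reliability figure reported above, the following minimal sketch computes percent agreement between two coders. The theme assignments in it are hypothetical and merely simulate the two reported disagreements.

```python
# Minimal sketch of the percent-agreement computation; codings are hypothetical.
def percent_agreement(coder_a: list[str], coder_b: list[str]) -> float:
    """Share of articles that both coders assigned to the same theme."""
    assert len(coder_a) == len(coder_b)
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

# Hypothetical codings for the 28 articles (9 disaster, 8 health, 11 politics).
author_codes = ["disaster"] * 9 + ["health"] * 8 + ["politics"] * 11
external_codes = author_codes.copy()
external_codes[5] = "health"      # simulate the two reported disagreements
external_codes[20] = "disaster"

print(f"{percent_agreement(author_codes, external_codes):.1%}")  # 26/28, ~92.9%
```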
Results
We reviewed the articles separately under the categories of disaster, health, and politics. We first present emergent issues that cut across these themes.
Social media misinformation research
Disaster, health, and politics emerged as the three domains ("Appendix") where misinformation can cause severe harm, often leading to casualties or even irreversible effects. Mitigating these effects can also demand a substantial financial and human-resource burden, considering the scale of the effects and the risk of negative information spreading to the public at large. All these areas are sensitive in nature. Further, disaster, health, and politics have gained the attention of researchers and governments because the challenges of misinformation confronting these domains are rampant. Besides their sensitivity, misinformation in these areas has a higher potential to exacerbate existing crises in society. During the 2020 Munich Security Conference, WHO's Director-General noted: "We are not just fighting an epidemic; we are fighting an infodemic", referring to COVID-19 misinformation spreading faster than the virus itself [50].
More than 6000 people were hospitalized due to COVID-19 related misinformation in the first three months of 2020 [51]. As COVID-19 vaccination began, one of the popular myths was that Bill Gates wanted to use vaccines to embed microchips in people to track them and this created vaccine hesitancy among the citizens [52]. These reports show the severity of the spread of misinformation and how misinformation can aggravate a public health crisis.
Misinformation during disaster
In the context of emergency situations (unforeseen circumstances), the credibility of social media information has often been questioned [11]. When a crisis occurs, affected communities often experience a lack of the localized information they need to make emergency decisions. This accelerates the spread of misinformation, as people tend to fill the information gap with misinformation or 'improvised news' [9, 24, 25]. The broadcasting power of social media and the re-sharing of misinformation can weaken and slow down rescue operations [24, 25]. As local people have more access to the disaster area, they become the immediate reporters of a crisis through social media; mainstream media enter the picture only later. However, recent incidents reveal that voluntary reporting of this kind has begun to affect rescue operations negatively, as it often acts as a collective rumor mill [9] that propagates misinformation. During the 2018 floods in the South Indian state of Kerala, a fake video about a Mullaperiyar Dam leakage created unnecessary panic among citizens, negatively impacting the rescue operations [53]. Information from mainstream media is relatively more reliable, as traditional gatekeepers such as peer reviewers and editors cross-check the information source before publication. Chua et al. [28] found that a major portion of corrective tweets were retweeted from mainstream news media; mainstream media are thus considered a preferred rumor-correction channel, attempting to correct misinformation with the right information.
Characterizing disaster misinformation
Oh et al. [9] studied citizen-driven information processing based on three social crises using rumor theory. The main characteristic of a crisis is the complexity of information processing and sharing [9, 24]. A task is considered complex when characterized by an increase in information load, information diversity, or rate of information change [54]. Information overload and information dearth are the two grave concerns that interrupt communication between an affected community and a rescue team. Information overload, where too many enquiries and too much fake news distract a response team, slows its recognition of valid information [9, 27]. According to Balan and Mathew [55], information overload occurs when the volume of information, through complexity of words or multiple languages, for example, exceeds what a human being can process. Information dearth, in our context, is the lack of the localized information that the affected community needs to make emergency decisions. When official government communication channels or mainstream media cannot fulfill citizens' needs, citizens resort to information from their social media peers [9, 27, 29].
In a social crisis context, Tamotsu Shibutani [56] defines rumoring as the collective sharing and exchange of information, which helps community members reach a common understanding of the crisis situation [30]. This mechanism also operates on social media, where both information dearth and information overload arise. Anxiety, information ambiguity (source ambiguity and content ambiguity), personal involvement, and social ties are the rumor-causing variables in a crisis context [9, 27]. In general, anxiety is a negative feeling caused by distress or a stressful situation, which can produce adverse outcomes [57]. In a crisis or emergency, a community may experience anxiety in the absence of reliable information, or when confronted with an overload of information that makes it difficult to take appropriate decisions. Under such circumstances, people may rely on rumors as a primary source of information. The influence of anxiety is higher during a community crisis than during a business crisis [9]. However, anxiety as an attribute varies with the nature of the platform: Oh et al. [9] found that the Twitter community does not succumb to social pressure in the way the WhatsApp community does [30]. Simon et al. [30] developed a model of rumor retransmission on social media and identified information ambiguity, anxiety, and personal involvement as motives for rumormongering. Attractiveness is another rumor-causing variable; it operates when aesthetically appealing visual aids or designs capture a receiver's attention. Here believability matters more than the content's reliability or the truth of the information received.
The second stage of the spread of misinformation is retransmission. Apart from the rumor-causing variables reported in Oh et al. [9], Liu et al. [13] found sender credibility and attractiveness to be significant variables in misinformation retransmission; personal involvement and content ambiguity can also affect it [13]. Abdullah et al. [25] explored retweeters' motives for spreading disaster information on the Twitter platform. Content relevance, early information [27, 31], trustworthiness of the content, emotional influence [30], retweet count, pro-social behavior (altruistic behavior among citizens during the crisis), and the need to inform one's circle are the factors that drive users' retweets [25]. Lee et al. [26] examined the impact of Twitter features on message diffusion based on the 2013 Boston Marathon tragedy. The study reported that during crisis events (especially disasters), a tweet with a shorter reaction time (the time between the crisis and the initial tweet) had a higher impact than other tweets. This shows that, to an extent, misinformation can be controlled if officials communicate at the early stage of a crisis [27]. Liu et al. [13] showed that tweets with hashtags influence the spread of misinformation; Lee et al. [26], however, found that tweets with no hashtags had more influence, owing to contextual differences. For instance, the use of hashtags has a positive impact in marketing or advertising, while in disaster or emergency situations (as on Twitter) it has a negative impact: messages without hashtags diffuse more widely than messages with them [26].
Oh et al. [15] explored the behavioral aspects of social media participants that lead to the retransmission and spread of misinformation. They found that when people believe a threatening piece of misinformation they have received, they are more likely to spread it and to take safety measures (sometimes even extreme actions). Repetition of the same misinformation from different sources also makes it more believable [28]. However, when people realized the received information was false, they were less likely to share it with others [13, 26]. The characteristics of the platform used to deliver the misinformation also matter: for instance, the number of likes and shares increases the believability of a social media post [47].
In summary, we found that platform architecture also plays an essential role in the spread and believability of misinformation. While conducting this systematic literature review, we observed that most studies on disaster and misinformation are based on the Twitter platform: six of the nine papers we reviewed in the disaster area used Twitter data. When a message was delivered in video format, it had a higher impact than audio or text messages, and if the message had a religious or cultural narrative, it led to behavioral action (danger control response) [15]. Users were more likely to spread misinformation through WhatsApp than through Twitter, and it was difficult to trace the source of information shared on WhatsApp [30].
Misinformation related to healthcare
From our review, we found two systematic literature reviews that discuss health-related misinformation on social media. Yang et al. [58] explore the characteristics, impact, and influences of health misinformation on social media, and Wang et al. [59] address health misinformation related to vaccines and infectious diseases. The latter review shows that health-related misinformation, especially about the MMR vaccine and autism, spreads widely on social media and that governments have been unable to control it.
The spread of health misinformation is an emerging issue facing public health authorities. Health misinformation can delay proper treatment for patients, adding further casualties in the public health domain [28, 59, 60]. People often tend to believe health-related information shared by their peers, and some share their treatment experiences or traditional remedies online. Such information may belong to a different context and may not even be accurate [33, 34]. Compared to health-related websites, the language used for health information shared on social media is simple and may omit essential details [35, 37]. Some studies reported that conspiracy theories and pseudoscience have escalated casualties [33]. Pseudoscience refers to false claims that pretend to be supported by scientific evidence; the anti-vaccination movement on Twitter is one example [61]. Users may share such information owing to a lack of scientific knowledge [35].
Characterizing healthcare misinformation
The attributes that characterize healthcare misinformation are distinctly different from those of other domains. Chua and Banerjee [37] identified the characteristics of health misinformation as dread and wish. Dread is rumor that creates panic and unpleasant consequences. For example, in the wake of COVID-19, misinformation was widely shared on social media claiming that children 'died on the spot' after the mass COVID-19 vaccination program in Senegal, West Africa [61]. This message created panic among citizens, as it was shared more than 7000 times on Facebook [61]. Wish is the type of rumor that gives hope to the receiver (e.g., a rumor about free medicine distribution) [62]. Dread rumors look more trustworthy and are more likely to go viral; a dread rumor was the cause of violence against a minority group in India during COVID-19 [7]. Chua and Banerjee [32] added pictorial and textual representation as further characteristics of health misinformation: a rumor containing only text is a textual rumor, while a pictorial rumor contains both text and images. They found, however, that users prefer textual rumors to pictorial ones [32]. Unlike rumors circulated during a natural disaster, health misinformation is long-lasting and can spread across boundaries. Personal involvement (the importance of the information for both sender and receiver), rumor type, and the presence of counter-rumor are some of the variables that can escalate users' trusting and sharing behavior around a rumor [37]. Madraki et al.'s [46] study of COVID-19 misinformation/disinformation reported that COVID-19 misinformation on social media differs significantly across languages, countries, and their cultures and beliefs. The acceptance of social media platforms, as well as governmental censorship, also plays an important role here.
Widespread misinformation can also change collective opinion [29]. Online users' epistemic beliefs can control their sharing decisions: Chua and Banerjee [32] argued that epistemologically naïve users (users who think knowledge can be acquired easily) are the type of users who accelerate the spread of misinformation on platforms. Those who read or share the misinformation are not necessarily likely to follow it [37]. Gu and Hong [34] examined health misinformation in the mobile social media context. Mobile internet users differ from large-screen users: mobile phone users may have a stronger emotional attachment to the device, which also motivates them to believe the misinformation they receive. Corrective efforts aimed at large-screen users may therefore not work for mobile or small-screen users. Chua and Banerjee [32] suggested that simplified sharing options on platforms also motivate users to share received misinformation before validating it. Shahi et al. [47] found that misinformation is propagated even by verified Twitter handles, which become part of misinformation transmission either by creating it or by endorsing it through likes or shares.
Existing studies draw heavily on data from social networking sites such as Facebook and Twitter, although other platforms also escalate the spread of misinformation. This was evident in the wake of COVID-19, when an intense trend of misinformation spread was reported on WhatsApp, TikTok, and Instagram.
Social media misinformation and politics
There have been several studies on the influence of misinformation on politics across the world [43, 44]. Political misinformation has predominantly been used to influence voters. The 2016 US Presidential election, the 2017 French election, and the 2019 Indian elections have been reported as examples where misinformation influenced the election process [15, 17, 45]. During the 2016 US election, the partisan effect was a key challenge: false information was presented as if it came from an authorized source [39]. Based on a user's prior behavior on a platform, algorithms can manipulate the user's feed [40]. In a political context, fake news can create great harm, as it can influence voters and the public. Although fake news has a short 'life', its consequences may not be short-lived: verification of fake news takes time, and by the time verification results are shared, the fake news may already have achieved its goal [43, 48, 63].
Characterizing misinformation in politics
Confirmation bias has a dominant role in social media misinformation related to politics. Readers are more likely to read and engage with information that confirms their pre-existing beliefs and political affiliations and to reject information that challenges them [46, 48]. For example, in the 2016 US election, pro-Trump fake news was accepted by Republicans [19], and misinformation spreads quickly among people who share similar ideologies [19]. The nature of the interface can also escalate the spread of misinformation. Kim and Dennis [36] investigated the influence of platforms' information presentation formats and reported that social media platforms indirectly push users to accept certain information by presenting it in a way that gives little importance to its source. This presentation is manipulative, as people tend to believe information from a reputed source and are more likely to reject information from a lesser-known source [42].
Pennycook et al. [39] and Garrett and Poulsen [40] argued that warning tags (or flags) on headlines can reduce the spread of misinformation. However, it is not practical to assign warning tags to all misinformation, as it is generated faster than valid information, and the fact-checking process on social media takes time. Hence, people tend to believe that headlines without warning tags are true, undermining the purpose of the tags [39]. Furthermore, tagging can increase readers' reliance on warning tags and lead to misperception: readers tend to assume that all information has been verified and consider untagged false information more accurate. This phenomenon is known as the implied truth effect [39]. In this case, a source reputation rating will influence the credibility of the information, with readers giving less importance to a source with a low rating [17, 50].
Theoretical perspectives of social media misinformation
We identified six theories used in the reviewed articles in relation to social media misinformation. Rumor theory was used most frequently: it served as the theoretical foundation in five articles [9, 13, 15, 37, 43]. Oh et al. [9] studied citizen-driven information processing on Twitter using rumor theory in three social crises; the paper identified key variables (source ambiguity, personal involvement, and anxiety) that spread misinformation. The authors further examined the acceptance of hate rumors and the aftermath of a community crisis based on the Bangalore mass exodus of 2012. Liu et al. [13] examined the reasons behind the retransmission of messages in disasters using rumor theory. Hazel Kwon and Raghav Rao [43] investigated how internet surveillance by the government impacts citizens' involvement with cyber-rumors during a homeland security threat. Diffusion theory has been used in IS research to discern the adoption of technological innovations; researchers have used it to study retweeting behavior among Twitter users (tweet diffusion) during extreme events [26], investigating information diffusion based on four major elements of diffusion: innovation, time, communication channels, and social systems. Kim et al. [36] examined the effect of rating news sources on users' belief in social media articles based on three different rating mechanisms: expert rating, user article rating, and user source rating. Reputation theory was used to show how users discern cognitive biases in expert ratings.
Murungi et al. [38] used rhetorical theory to argue that fact-checkers have limited effectiveness against fake news that spreads on social media platforms. The study proposed a different approach, focusing on the underlying belief structures that make misinformation acceptable, and used the theory to examine fake news and socially constructed beliefs in the context of Alabama's senatorial election in 2017. Using the third-person effect as the theoretical ground, the characteristics of rumor corrections on Twitter have also been examined in the context of the death hoax of Singapore's first prime minister, Lee Kuan Yew [28]; that paper explored the motives behind collective rumor and identified the key characteristics of collective rumor correction. Using situational crisis communication theory (SCCT), Paek and Hove [44] examined how governments can effectively respond to risk-related rumors during national-level crises, in the context of a food safety rumor. Refuting the rumor, denying it, and attacking its source are the three rumor-response strategies the authors suggest for countering rumor-mongering (Table 2).
Table 2.
Theory | Description | References | Theme |
---|---|---|---|
Rumor theory | “A collective and collaborative transaction in which community members offer, evaluate, and interpret information to reach a common understanding of uncertain situations, to alleviate social tension, and to solve collective crisis problems” [9] | [9, 13, 15, 37, 43] | Disaster, health |
Diffusion theory | In IS research diffusion theory has been used to discern the adoption of technological innovation. Diffusion theory involves “the process by which an innovation is communicated through certain channels over time among the members of a social system.” | [26] | Disaster |
Reputation theory | Reputation is defined as a three-dimensional construct comprising the types of functional, social and expressive reputation [36] | [36] | Politics |
Rhetorical theory | Rhetorical theory is “a way of framing an experience or event—an effort to understand and account for something and the way it functions in the world” [64] | [38] | Politics |
Third person effect | The theory of the "third-person effect describes an individual's belief that other people (i.e., the third person), not oneself, are more susceptible to the negative persuasion of the media. The individual is consequently motivated to react out of concern for others" [28] | [28] | Disaster
Situational crisis communication theory (SCCT) | SCCT comprise three elements: “(1) the crisis situation, (2) crisis response strategies, and (3) a system for matching the crisis situation and crisis response strategies. The theory states that effectiveness of communication strategies is dependent on characteristics of the crisis situation.” [65] | [44] | Disaster |
Determinants of misinformation in social media platforms
Figure 3 depicts the concepts that emerged from our review, organized within an Antecedents-Misinformation-Outcomes (AMIO) framework, an approach we adapt from Smith et al. [66]. Originally developed to study information privacy, the Antecedents-Privacy Concerns-Outcomes (APCO) framework provided a nomological canvas for presenting the determinants, mediators, and outcome variables pertaining to information privacy. Following this canvas, we discuss the antecedents of misinformation, the mediators of misinformation, and misinformation outcomes as they emerged from prior studies (Fig. 3).
Anxiety, source ambiguity, trustworthiness, content ambiguity, personal involvement, social ties, confirmation bias, attractiveness, illiteracy, ease of sharing options and device attachment emerged as the variables determining misinformation in social media.
Anxiety is the emotional state of the person who sends or receives the information; a person who is anxious about the information received is more likely to share or spread misinformation [9]. Source ambiguity concerns the origin of the message: when a person is convinced of the source of the information, its perceived trustworthiness increases and the person shares it. Content ambiguity concerns the clarity of the information's content [9, 13]. Personal involvement denotes how important the information is to both the sender and the receiver [9]. Social ties mean that information shared by a family member or social peers influences a person to share it [9, 13]. From prior literature, it is understood that confirmation bias is one of the root causes of political misinformation. Research on device attachment reveals that users tend to believe and share information received on their personal devices [34]. After receiving misinformation from various sources, users accept it based on their existing beliefs and on social, cognitive, and political factors. Oh et al. [15] observed that during crises, people have a default tendency to believe unverified information, especially when it helps them make sense of the situation. Misinformation has significant effects on individuals and society: loss of lives [9, 15, 28, 30], economic loss [9, 44], loss of health [32, 35], and loss of reputation [38, 43] are the major outcomes that emerged from our review.
Strategies for controlling the spread of misinformation
Discourse on mitigating social media misinformation has prioritized strategies such as early communication from officials and the use of scientific evidence [9, 35]. When people realize that received information is false, they are less likely to share it with others [15]. Another strategy is rumor refutation: reducing citizens' intention to spread misinformation by providing real information, which reduces their uncertainty and serves to control misinformation [44]. Rumor-correction models for social media platforms also employ algorithms and crowdsourcing [28]. The majority of the papers we reviewed suggested fact-checking by experts, source ratings for received information, attaching warning tags to headlines or entire news items [36], and flagging of content by platform owners [40] as strategies to control the spread of misinformation. Studies on controlling misinformation in the public health context showed that governments can also seek the help of public health professionals to mitigate misinformation [31].
However, the aforementioned strategies have been criticized for several limitations. Most papers identified confirmation bias as having a significant impact on misinformation mitigation strategies, especially in the political context, where people tend to believe information that matches their prior beliefs. Garrett and Poulsen [40] argued that during an emergency, a misinformation recipient may not be able to judge whether the misinformation is true or false; providing an alternative explanation or the real information to users thus has more effect than providing a fact-checking report. Studies by Garrett and Poulsen [40] and Pennycook et al. [39] reveal a drawback of attaching warning tags to news headlines: once flagging or tagging is introduced, untagged information tends to be considered true or reliable, creating an implied truth effect. Further, it is not always practical to evaluate all social media posts. Similarly, Kim and Dennis [36] studied fake news flagging and found that flags did not influence users' beliefs; they did, however, create cognitive dissonance, prompting users to seek out the truthfulness of the headline. In 2017, Facebook discontinued its fake news flagging service owing to these limitations [45].
Key research gaps and future directions
Although misinformation is a multi-sectoral issue, our systematic review found that interdisciplinary research on social media misinformation is relatively scarce. Confirmation bias is one of the most significant behavioral problems motivating the spread of misinformation, yet the lack of research on it reveals scope for future interdisciplinary work across the fields of Data Science, Information Systems, and Psychology in domains such as politics and healthcare. In the disaster context, there is scope to study the behavior of first responders and emergency managers to understand their information exchange patterns with the public. Similarly, future researchers could analyze communication patterns between citizens and frontline workers in the public health context, which may be useful for designing counter-misinformation campaigns and awareness interventions. Since information disorder is a multi-sectoral issue, researchers also need to understand misinformation patterns across multiple government departments to support coordinated counter-misinformation interventions.
There is a further dearth of studies on institutional responses to control misinformation. To fill this gap, future studies could concentrate on analyzing governmental and organizational interventions to control misinformation at the level of policies, regulatory mechanisms, and communication strategies. For example, India has no specific law against misinformation, but there are provisions in the Information Technology Act (IT Act) and the Disaster Management Act that can be used to control misinformation and disinformation. An example of an awareness intervention is the 'Satyameva Jayate' initiative launched in the Kannur district of Kerala, India, which focused on sensitizing schoolchildren to spot misinformation [67]. As noted earlier, within research on misinformation in the political context, there is a lack of work on strategies adopted by the state to counter misinformation; building on cases like 'Satyameva Jayate' would further contribute to knowledge in this area.
Technology-based strategies adopted by social media platforms to control the spread of misinformation emphasize corrective algorithms, keywords, and hashtags as solutions [32, 37, 43]. However, these corrective measures have their own limitations. Corrective algorithms are ineffective if not applied immediately after the misinformation has been created. Researchers use related hashtags and keywords to retrieve content shared on social media platforms, but it may not be possible to cover all the keywords or hashtags employed by users, and algorithms may not decipher content shared in regional languages. Another limitation of platform algorithms is that they recommend and display content based on user activities and interests, which limits users' access to information from multiple perspectives and thus reinforces their existing beliefs [29]. A reparative measure is to display corrective information as 'related stories' alongside misinformation. However, Facebook's related-stories algorithm activates only when an individual clicks on an outside link, which limits the number of people who see the corrective information; this turns out to be a challenge. Future research could investigate the impact of related stories as a corrective measure by analyzing the relation between misinformation and the frequency of related stories posted vis-à-vis real information.
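The hedged sketch below illustrates the keyword- and hashtag-based retrieval just described and its key limitation: posts using unlisted tags or regional languages are silently missed. The tracked terms and posts are hypothetical.

```python
# Hypothetical keyword/hashtag filter of the kind researchers and platforms
# use to retrieve content; anything outside the tracked list is never seen.
TRACKED_TERMS = {"#fakenews", "#floodrelief", "misinformation"}

def is_retrieved(post: str) -> bool:
    """A post is captured only if it contains a tracked keyword or hashtag."""
    text = post.lower()
    return any(term in text for term in TRACKED_TERMS)

posts = [
    "The dam leakage video is fake, please do not share #FloodRelief",
    "<post in a regional language, with no tracked hashtag>",
    "Officials debunk the latest misinformation trend",
]
retrieved = [p for p in posts if is_retrieved(p)]
print(retrieved)  # the regional-language post is never captured
```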
Our review also found a scarcity of research on the spread of misinformation on certain social media platforms, while studies are skewed toward a few others. Of the studies reviewed, 15 articles concentrated on misinformation spread on Twitter and Facebook. Recent news reports make it evident that misinformation and disinformation spread largely through popular messaging platforms such as WhatsApp, Telegram, WeChat, and Line; research using data from these platforms is, however, scanty. In the Indian context especially, the magnitude of problems arising from misinformation on WhatsApp is overwhelming [68]. To address this lacuna, we suggest that future researchers concentrate on investigating the patterns of misinformation spreading on platforms like WhatsApp. Moreover, message diffusion patterns are unique to each social media platform; it is therefore useful to study misinformation diffusion patterns on different platforms. Future studies could also address the differential roles, patterns, and intensity of the spread of misinformation across various messaging and photo/video-sharing social networking services.
As evident from our review, most research on misinformation is based on the Euro-American context, and the dominant models proposed for controlling misinformation may have limited applicability in other regions. Moreover, the popularity and usage patterns of social media platforms vary across the globe as a consequence of cultural differences and political regimes, necessitating that social media researchers take cognizance of the empirical experiences of 'left-over' regions.
Conclusion
To understand the spread of misinformation on social media platforms, we conducted a systematic literature review in three important domains where misinformation is rampant: disaster, health, and politics, reviewing 28 articles relevant to these themes. This is one of the earliest reviews focusing on social media misinformation research, especially across these three sensitive domains. We have discussed how misinformation spreads in the three sectors, the methodologies used by researchers, the theoretical perspectives employed, the Antecedents-Misinformation-Outcomes (AMIO) framework for understanding key concepts and their inter-relationships, and strategies to control the spread of misinformation.
Our review also identified major gaps in IS research on misinformation in social media, including the need for methodological innovation beyond the experimental methods that have been widely used. This study has limitations that we acknowledge. We might not have identified all relevant papers on the spread of misinformation on social media, as some authors might have used different keywords and because of our strict inclusion and exclusion criteria. There might also be relevant publications in languages other than English that were not covered in this review. Our focus on three domains also restricted the number of papers we reviewed.
Appendix
Code | Sub themes | Frequency | Themes |
---|---|---|---|
Social crisis situations | Situations | 43 | Disaster |
Uncertain situations | |||
Real community crisis situations | |||
Post-disaster situation | |||
Crisis situations | |||
Ambiguous situations | |||
Unpredictable crisis situations | |||
Uncertain crisis situations | |||
Emergency situations | |||
Disaster situations | |||
Emergency crisis communication | Crisis | 36 | |
Unexpected crisis events | |||
Crisis scenario | |||
Crisis management | |||
Addressing health misinformation dissemination | Health | 77 | Health |
Global health misinformation | |||
Online health misinformation | |||
Health communication | |||
Public health | |||
Health pandemic | |||
Health-related conspiracy theories | Conspiracy | 33 | |
Anti-government rumors | Rumor | 44 | Politics |
Political headlines | Headlines | 30 | |
Political situations | Situations | 25 | |
National threat situations | |||
Homeland threat situations | |||
Military conflict situations |
Author contributions
TMS: Conceptualization, Methodology, Investigation, Writing—Original Draft; SKM: Writing—Review & Editing, Supervision.
Funding
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
Declarations
Conflict of interest
On behalf of both authors, the corresponding author states that there is no conflict of interest in this research paper.
Footnotes
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Contributor Information
Sadiq Muhammed T, Email: sadique.sadiquet@gmail.com.
Saji K. Mathew, Email: saji@iitm.ac.in
References
- 1.Thai MT, Wu W, Xiong H. Big Data in Complex and Social Networks. 1. Boca Raton: CRC Press; 2017. [Google Scholar]
- 2.Peters, B.: How Social Media is Changing Law and Policy. Fair Observer (2020)
- 3.Granillo, G.: The Role of Social Media in Social Movements. Portland Monthly (2020)
- 4.Wu L, Morstatter F, Carley KM, Liu H. Misinformation in social media: definition, manipulation, and detection. ACM SIGKDD Explor. 2019;21(1):80–90. doi: 10.1145/3373464.3373475.
- 5.Levin, S.: Mark Zuckerberg: I regret ridiculing fears over Facebook's effect on election. The Guardian (2017)
- 6.WEF: Global Risks 2013. World Economic Forum (2013)
- 7.Scroll: Communalisation of Tablighi Jamaat Event. Scroll.in (2020)
- 8.Cao L. Data science: a comprehensive overview. ACM Comput. Surv. 2017;50(43):1–42.
- 9.Oh O, Agrawal M, Rao HR. Community intelligence and social media services: a rumor theoretic analysis of tweets during social crises. MIS Q. 2013;37(2):407–426. doi: 10.25300/MISQ/2013/37.2.05.
- 10.Mukherjee, A., Liu, B., Glance, N.: Spotting fake reviewer groups in consumer reviews. In: WWW '12: Proceedings of the 21st International Conference on World Wide Web, pp. 191–200 (2012)
- 11.Cerf VG. Information and misinformation on the internet. Commun. ACM. 2016;60(1):9. doi: 10.1145/3018809.
- 12.Tiwana A, Konsynski B, Bush AA. Platform evolution: coevolution of platform architecture, governance, and environmental dynamics. Inf. Syst. Res. 2010;21(4):675–687. doi: 10.1287/isre.1100.0323.
- 13.Liu, F., Burton-Jones, A., Xu, D.: Rumors on social media in disasters: extending transmission to retransmission. In: PACIS 2014 Proceedings (2014)
- 14.Hern, A.: WhatsApp to impose new limit on forwarding to fight fake news. The Guardian (2020)
- 15.Oh O, Gupta P, Agrawal M, Rao HR. ICT mediated rumor beliefs and resulting user actions during a community crisis. Gov. Inf. Q. 2018;35(2):243–258. doi: 10.1016/j.giq.2018.03.006.
- 16.Yuan, X., Crooks, A.T.: Examining online vaccination discussion and communities in Twitter. In: SMSociety '18: Proceedings of the 9th International Conference on Social Media and Society (2018)
- 17.Lazer, D.M.J., et al.: The science of fake news. Science (2018)
- 18.Suciu, P.: More Americans are getting their news from social media. Forbes (2019)
- 19.Jang SM, et al. A computational approach for examining the roots and spreading patterns of fake news: evolution tree analysis. Comput. Hum. Behav. 2018;84:103–113. doi: 10.1016/j.chb.2018.02.032.
- 20.Mele, N., et al.: Combating Fake News: An Agenda for Research and Action (2017)
- 21.Bernhard U, Dohle M. Corrective or confirmative actions? Political online participation as a consequence of presumed media influences in election campaigns. J. Inf. Technol. Polit. 2015;12(3):285–302. doi: 10.1080/19331681.2015.1048918.
- 22.Webster, J., Watson, R.T.: Analyzing the past to prepare for the future: writing a literature review. MIS Q. 26(2) (2002)
- 23.aisnet.org: Senior Scholars' Basket of Journals. AIS. [Online]. Available: https://aisnet.org/page/SeniorScholarBasket. Accessed 16 Sept 2021
- 24.Torres RR, Gerhart N, Negahban A. Epistemology in the era of fake news: an exploration of information verification behaviors among social networking site users. ACM SIGMIS Database. 2018;49(3):78–97.
- 25.Abdullah, N.A., Nishioka, D., Tanaka, Y., Murayama, Y.: Why I retweet? Exploring user’s perspective on decision-making of information spreading during disasters. In: Proceedings of the 50th Hawaii International Conference on System Sciences (2017)
- 26.Lee J, Agrawal M, Rao HR. Message diffusion through social network service: the case of rumor and non-rumor related tweets during Boston bombing 2013. Inf. Syst. Front. 2015;17(5):997–1005. doi: 10.1007/s10796-015-9568-z.
- 27.Mondal T, Pramanik P, Bhattacharya I, Boral N, Ghosh S. Analysis and early detection of rumors in a post disaster scenario. Inf. Syst. Front. 2018;20(5):961–979. doi: 10.1007/s10796-018-9837-8.
- 28.Chua, A.Y.K., Cheah, S.-M., Goh, D.H., Lim, E.-P.: Collective rumor correction on the death hoax. In: PACIS 2016 Proceedings (2016)
- 29.Bode L, Vraga EK. In related news, that was wrong: the correction of misinformation through related stories functionality in social media. J. Commun. 2015;65(4):619–638. doi: 10.1111/jcom.12166.
- 30.Simon T, Goldberg A, Leykin D, Adini B. Kidnapping WhatsApp—rumors during the search and rescue operation of three kidnapped youth. Comput. Hum. Behav. 2016;64:183–190. doi: 10.1016/j.chb.2016.06.058.
- 31.Ghenai, A., Mejova, Y.: Fake cures: user-centric modeling of health misinformation in social media. In: Proceedings of ACM Human–Computer Interaction, vol. 2, no. CSCW, pp. 1–20 (2018)
- 32.Chua AYK, Banerjee S. To share or not to share: the role of epistemic belief in online health rumors. Int. J. Med. Inf. 2017;108:36–41. doi: 10.1016/j.ijmedinf.2017.08.010.
- 33.Kou, Y., Gui, X., Chen, Y., Pine, K.H.: Conspiracy talk on social media: collective sensemaking during a public health crisis. In: Proceedings of ACM Human–Computer Interaction, vol. 1, no. CSCW, pp. 1–21 (2017)
- 34.Gu, R., Hong, Y.K.: Addressing health misinformation dissemination on mobile social media. In: ICIS 2019 Proceedings (2019)
- 35.Bode L, Vraga EK. See something, say something: correction of global health misinformation on social media. Health Commun. 2018;33(9):1131–1140. doi: 10.1080/10410236.2017.1331312.
- 36.Kim A, Moravec PL, Dennis AR. Combating fake news on social media with source ratings: the effects of user and expert reputation ratings. J. Manag. Inf. Syst. 2019;36(3):931–968. doi: 10.1080/07421222.2019.1628921.
- 37.Chua AYK, Banerjee S. Intentions to trust and share online health rumors: an experiment with medical professionals. Comput. Hum. Behav. 2018;87:1–9. doi: 10.1016/j.chb.2018.05.021.
- 38.Murungi, D., Purao, S., Yates, D.: Beyond facts: a new spin on fake news in the age of social media. In: AMCIS 2018 Proceedings (2018)
- 39.Pennycook G, Bear A, Collins ET, Rand DG. The implied truth effect: attaching warnings to a subset of fake news headlines increases perceived accuracy of headlines without warnings. Manag. Sci. 2020. doi: 10.1287/mnsc.2019.3478.
- 40.Garrett R, Poulsen S. Flagging Facebook falsehoods: self-identified humor warnings outperform fact-checker and peer warnings. J. Comput.-Mediat. Commun. 2019. doi: 10.1093/jcmc/zmz012.
- 41.Shin J, Thorson K. Partisan selective sharing: the biased diffusion of fact-checking messages on social media. J. Commun. 2017;67(2):233–255. doi: 10.1111/jcom.12284.
- 42.Kim A, Dennis AR. Says who? The effects of presentation format and source rating on fake news in social media. MIS Q. 2019. doi: 10.25300/MISQ/2019/15188.
- 43.Kwon KH, Rao HR. Cyber-rumor sharing under a homeland security threat in the context of government Internet surveillance: the case of South–North Korea conflict. Gov. Inf. Q. 2017;34(2):307–316. doi: 10.1016/j.giq.2017.04.002.
- 44.Paek HJ, Hove T. Effective strategies for responding to rumors about risks: the case of radiation-contaminated food in South Korea. Public Relat. Rev. 2019;45(3):101762. doi: 10.1016/j.pubrev.2019.02.006.
- 45.Moravec PL, Minas RK, Dennis AR. Fake news on social media: people believe what they want to believe when it makes no sense at all. MIS Q. 2019. doi: 10.25300/MISQ/2019/15505.
- 46.Madraki, G., et al.: Characterizing and comparing COVID-19 misinformation across languages, countries and platforms. In: WWW '21: Companion Proceedings of the Web Conference 2021 (2021)
- 47.Shahi GK, Dirkson A, Majchrzak TA. An exploratory study of COVID-19 misinformation on Twitter. Online Soc. Netw. Media. 2021;22:100104. doi: 10.1016/j.osnem.2020.100104.
- 48.Otala, M., et al.: Political polarization and platform migration: a study of Parler and Twitter usage by United States of America Congress Members. In: WWW ’21 Companion Proceedings of Web Conference (2021)
- 49.Lavrakas PJ. Encyclopedia of Survey Research Methods. Thousand Oaks: Sage; 2008.
- 50.WHO: Munich Security Conference. WHO.int. [Online]. Available: https://www.who.int/director-general/speeches/detail/munich-security-conference. Accessed 24 Sept 2021
- 51.Coleman, A.: 'Hundreds dead' because of Covid-19 misinformation. BBC News (2020)
- 52.Benenson, E.: Vaccine myths: facts vs fiction. VCU Health (2021). [Online]. Available: https://www.vcuhealth.org/news/covid-19/vaccine-myths-facts-vs-fiction. Accessed 24 Sept 2021
- 53.Pierpoint, G.: Kerala floods: fake news 'creating unnecessary panic'. BBC News (2018)
- 54.Campbell DJ. Task complexity: a review and analysis. Acad. Manag. Rev. 1988;13(1):40. doi: 10.2307/258353.
- 55.Balan MU, Mathew SK. Personalize, summarize or let them read? A study on online word of mouth strategies and consumer decision process. Inf. Syst. Front. 2020;23:1–21.
- 56.Shibutani T. Improvised News: A Sociological Study of Rumor. Indianapolis: The Bobbs-Merrill Company Inc; 1966.
- 57.Pezzo MV, Beckstead JW. A multilevel analysis of rumor transmission: effects of anxiety and belief in two field experiments. Basic Appl. Soc. Psychol. 2006. doi: 10.1207/s15324834basp2801_8.
- 58.Li, Y.-J., Cheung, C.M.K., Shen, X.-L., Lee, M.K.O.: Health misinformation on social media: a literature review. In: Association for Information Systems (2019)
- 59.Wang Y, McKee M, Torbica A, Stuckler D. Systematic literature review on the spread of health-related misinformation on social media. Soc. Sci. Med. 2019;240:112552. doi: 10.1016/j.socscimed.2019.112552.
- 60.Pappa D, Stergioulas LK. Harnessing social media data for pharmacovigilance: a review of current state of the art, challenges and future directions. Int. J. Data Sci. Anal. 2019;8(2):113–135. doi: 10.1007/s41060-019-00175-3.
- 61.BBC: Fighting Covid-19 fake news in Africa. BBC News (2020)
- 62.Chua, A.Y.K., Aricat, R., Goh, D.: Message content in the life of rumors: comparing three rumor types. In: Proceedings of the 12th International Conference on Digital Information Management (ICDIM 2017), pp. 263–268 (2017)
- 63.Lee AR, Son S-M, Kim KK. Information and communication technology overload and social networking service fatigue: a stress perspective. Comput. Hum. Behav. 2016;55:51–61. doi: 10.1016/j.chb.2015.08.011.
- 64.Foss K, Foss S, Griffin C. Feminist rhetorical theories. 1999. doi: 10.1080/07491409.2000.10162571.
- 65.Coombs, W., Holladay, S.J.: Reasoned action in crisis communication: an attribution theory-based approach to crisis management. In: Responding to Crisis: A Rhetorical Approach to Crisis Communication (2004)
- 66.Smith HJ, Dinev T, Xu H. Information privacy research: an interdisciplinary review. MIS Q. 2011;35:989–1015. doi: 10.2307/41409970.
- 67.Ammu, C.: Kerala: Kannur district teaches school kids to spot fake news. The Week (2018)
- 68.Ponniah, K.: WhatsApp: the ‘black hole’ of fake news in India’s election. BBC News (2019)