Information Systems Frontiers. 2023 Apr 17:1–26. Online ahead of print. doi: 10.1007/s10796-023-10390-w

The Polarizing Impact of Political Disinformation and Hate Speech: A Cross-country Configural Narrative

Pramukh Nanjundaswamy Vasist 1, Debashis Chatterjee 2, Satish Krishnan 1
PMCID: PMC10106894  PMID: 37361884

Abstract

Information and communication technologies hold immense potential to enhance our lives and societal well-being. However, digital spaces have also emerged as a fertile ground for fake news campaigns and hate speech, aggravating polarization and posing a threat to societal harmony. Although this dark side is acknowledged in the literature, the complexity of polarization as a phenomenon coupled with the socio-technical nature of fake news necessitates a novel approach to unravel its intricacies. In light of this complexity, the current study employs complexity theory and a configurational approach in a cross-country investigation of the impact of diverse disinformation campaigns and hate speech in polarizing societies across 177 countries. The results demonstrate the definitive role of disinformation and hate speech in polarizing societies. The findings also offer a balanced perspective on internet censorship and social media monitoring as necessary evils to combat the disinformation menace and control polarization, but suggest that such efforts may lend support to a milieu of hate speech that fuels polarization. Implications for theory and practice are discussed.

Keywords: Polarization, Disinformation, Fake news, Hate speech, Internet censorship, Social media monitoring

Introduction

Information and communication technologies (ICT) hold immense potential for individual and societal advancement across a multitude of avenues (Bentley et al., 2019; Parthasarathy & Ramamritham, 2009; Saha et al., 2022). However, on the flip side, ICTs also constitute a threat to society (Ahmed et al., 2022), and digital spaces have been accused of providing a fertile ground for fake news campaigns and online hate speech (Aanestad et al., 2021; Castaño-Pulgarín et al., 2021), while exacerbating societal polarization (Törnberg, 2022). Polarization is a social phenomenon characterized by the fragmentation of society into antagonistic factions with vehemently opposed values and identities that impede cooperation and the pursuit of a common good (Stewart et al., 2020). The threat is significant, and democracies globally are under siege and reeling from the impact of polarized societies. According to the latest World Economic Forum study on global risks, societal polarization is not just one of the world’s top 10 present concerns, but also a long-term threat over the next decade (Zahidi, 2023). The phenomenon has impeded pandemic response in countries (Stuenkel, 2021), slowed consensus on critical global issues such as climate change (Sparkman et al., 2022), and continues to challenge the resilience of societies, resulting in a catastrophic loss of diversity in the social fabric (Kelly, 2021). Not only does polarization increase the likelihood of violence in societies (Piazza, 2022), but violent demonstrations may exacerbate polarization and divide individuals along partisan lines (De Salle, 2020). These devastating effects of polarization have catapulted it to a top priority for scholars across disciplines seeking an understanding of the causes of this social evil (Stewart et al., 2020).

Information technology and social media, in particular, play a crucial role in fostering polarization (Arora et al., 2022; Qureshi et al., 2022), with a spillover effect on offline violence and protests (Gallacher et al., 2021). These platforms are not merely spaces of public discourse, but also raise concerns over free speech, particularly when they contribute to the online amplification of the most radical and polarizing content (Riemer & Peter, 2021). Debates on free speech on platforms predominantly revolve around disinformation and hate speech (Riemer & Peter, 2021), both of which are considered to exacerbate polarization (Meneguelli & Ferré-Pavia, 2021; WEF, 2023). While governments and platforms pursue content moderation and censorship to combat this menace of fake news and hate speech (Riemer & Peter, 2021; Vese, 2022), such practices are regarded as a threat to free speech (Vese, 2022) and contribute to polarization (Frasz, 2022).

The role of fake news and hate speech in polarizing societies is widely acknowledged in prior literature (Au et al., 2021; Piazza, 2020). Their crippling effects are now accompanied by rising calls for action on online disinformation, which threatens to polarize society and destroy democratic life (The Express Tribune, 2022). Fake news as a term gained prominence during the 2016 U.S. presidential election (Allcott & Gentzkow, 2017). Since then, the issue has persisted and poses a grave threat to the societal fabric (Olan et al., 2022). The term overlaps with other forms of information disorder such as misinformation and disinformation (Lazer et al., 2018). While the former refers to false or misleading information (Lazer et al., 2018), the latter is the deliberate fabrication and dissemination of falsehood with an intent to deceive (Zimdars & Mcleod, 2020). In contrast to falsehood, hate speech refers to the use of offensive language directed against a specific group of individuals who share a common characteristic (Castaño-Pulgarín et al., 2021).

Politics remains a favored breeding ground for disinformation and hate speech (Freelon & Wells, 2020; Piazza, 2020). While social media and online platforms have played a pivotal role in amplifying the spread of disinformation and hate speech (Meske & Bunde, 2022; Olan et al., 2022), recent reports reveal the rampant misuse of these platforms by governments and political parties in over eighty countries engaged in political disinformation campaigns (Alba & Satariano, 2019; Oxford Internet Institute, 2021). Likewise, there are global concerns over hate speech “super-charged by the internet,” which allows falsehood and conspiracy theories to proliferate and provoke offline violence (United Nations, 2023).

While social networking sites (SNSes) have started to ramp up efforts to detect and curb this menace (Hatmaker, 2022; O’Regan & Theil, 2020), governments have also stepped up measures to actively monitor social media using sophisticated technology that enables them to monitor citizens’ behaviors on these platforms (Shahbaz & Funk, 2019). Some governments have pushed the envelope further by imposing censorship on the internet (Ovide, 2022). Such measures challenge the freedom of expression online (González-Quiñones & Machin-Mastromatteo, 2019) and contribute to polarization and political animosity among citizens (Frasz, 2022).

In light of the fact that the discourse on free speech online is embroiled in a conflict with surveillance and censorship efforts, even as disinformation and hate speech proliferate on these platforms, we assert that these core factors in the free speech debate merit exploration regarding their combinatorial effect on polarization as a phenomenon. This significance stems from the fact that extant literature has predominantly engaged in linear investigations of polarization as a phenomenon. In this context, several studies have analyzed the relationship between polarization and factors such as internet and social media usage (Boxell et al., 2017), democratic erosion and media censorship (Arbatli & Rosenberg, 2021), media exposure (Hawdon et al., 2020), fake news (Piazza, 2022), and hate speech (Piazza, 2020). However, the vast majority of studies have examined factors in isolation and assumed linear relationships between phenomena.1 Political processes are predominantly non-linear in nature (Richards & Doyle, 2000; Timofeev, 2014), and recent literature emphasizes the importance of non-linearity in appreciating aspects that might explain and alleviate polarization (Kelly, 2021; Leonard et al., 2021).

Polarization is characterized by asymmetry and constitutes a collection of complex, ever-evolving systems, the complexity of which necessitates a novel approach to unravel its intricacies (Kelly, 2021; Leonard et al., 2021; Levin et al., 2021). In addition, the socio-technical character of fake news makes standard methods of investigation intrinsically complex (Au et al., 2021). In light of this complexity, this paper acknowledges the phenomenon’s sophistication and leans on complexity theory and configurational analysis in an effort to decipher the causal pathways that influence societal polarization. In particular, this study seeks to illustrate how various sources of political disinformation, hate speech, social media monitoring efforts, and online censorship influence the degree of polarization in societies. Rather than focusing on the main effects between polarization and its causes, the current study seeks to identify the configurations that explain societal polarization. Consequently, this study addresses the following research question:

  • RQ: What configurations of political disinformation, hate speech, social media monitoring, and internet censorship lead to polarization in societies?

This configural narrative towards the conceptualization of polarization lends credence to recent assertions regarding the impact of fake news in fracturing societies and intensifying polarization. Specifically, the study’s findings point to a balanced view of surveillance and censorship by governments, which may be useful in combating disinformation campaigns on social media and controlling polarization, although such efforts may lend support to a milieu of hate speech that fuels polarization. Additionally, identification of configurations as causal pathways to polarization should serve as crucial insights to social media platform players in recognizing the varying manifestations of disinformation, hate speech and monitoring efforts to deal with societal polarization, while the findings of this study should also assist political scientists and complex systems theorists with a more nuanced understanding of the dynamics of polarization in society.

The paper is structured as follows. In the next section, we review the literature on polarization, political disinformation, hate speech, internet censorship, and social media monitoring and develop the conceptual model based on complexity theory which serves as the theoretical foundation for the study. Section 3 discusses the research methodology and measures utilized for analysis. Section 4 describes the process of data analysis using the configurational approach. Section 5 analyzes the empirical results, and Section 6 discusses the implications to theory and practice. Section 7 highlights limitations alongside discussing future research directions before concluding in Section 8.

Theoretical Background and Conceptual Framework

This section reviews literature related to the key constructs (see Table 11 in the Appendix for details) and discusses the theoretical foundations for the study.

Table 11.

Selected studies on the key constructs

Sources Key highlights
Polarization
Baldassarri and Bearman (2007) Discusses the dynamics surrounding polarization through two paradoxes, namely the simultaneous presence and absence of attitude polarization and social polarization.
Iyengar et al. (2012) Demonstrates an increasing dislike and loathing of opponents by political factions in the United States, with partisan effects appearing only inconsistently in policy attitudes.
Gruzd and Roy (2014) Examines user engagement on Twitter as a social media platform and posits that discussions on the forum may embed partisan loyalties and contribute to political polarization.
Boxell et al. (2017) Examines the relationship between internet and social media usage and polarization and finds increased polarization among the groups least likely to use these technologies.
McCoy et al. (2018) Posits a set of causal patterns to establish links between polarization and impact on democracies coupled with illustrations from four nations as cases.
Enders and Armaly (2019) Finds individual-level differences in actual and perceived polarization in terms of their links to attitudinal and behavioral outcomes of normative interest.
de Bruin et al. (2020) Examines political polarization during the COVID-19 pandemic and found variations in risk perception and risk mitigation preferences based on political inclination.
Piazza (2020) Highlights the mediating role of polarization in the impact of hate speech on domestic terrorism.
Hawdon et al. (2020) Examines likelihood of polarization based on exposure to traditional and social media and further investigates influence of polarization on social capital.
Heaney (2020) Highlights the role of political polarization as a key factor contributing to the ubiquity of protests in the United States.
Arbatli and Rosenberg (2021) Examines links between polarization and government intimidation of opposition by creating a new measure of political polarization.
Levin et al. (2021) Discusses the dynamics surrounding political polarization through an interdisciplinary approach and highlights the interplay of several processes at various levels in the context of this phenomenon.
Casola et al. (2022) Finds increasing levels of political polarization surrounding the perceived importance of conservation issues during the COVID-19 pandemic.
Frasz (2022) Highlights the role of increased censorship efforts by governments and other agents in exacerbating political polarization among citizens.
Leonard et al. (2021) Examines drivers of political polarization and factors which account for its asymmetry as a phenomenon.
Argyle and Pope (2022) Examines links between polarization and political participation finding a higher likelihood of political participation by people with more extreme attitudes.
Ertan et al. (2023) Highlights the issue of political polarization during extreme events and its negative impact on response and recovery operations.
Patkós (2023) Highlights the need for a relook at polarization indices in light of an attention shift from ideological aspects to partisan and affective aspects and introduces a partisan polarization index, and compares it with other indices.
Political disinformation
Levendusky (2013) Highlights the role of partisan media outlets and slanted news presentations in leading viewers to develop more negative perceptions and lower trust for other parties alongside lower support for bipartisanship.
Allcott and Gentzkow (2017) Discusses economics of fake news in light of concerns expressed over falsehood amidst the 2016 U.S. presidential elections with a key finding being that people believed stories favoring their preferred political candidate.
Tandoc et al. (2018) Defines a typology of fake news based on level of factivity and deception.
Bradshaw and Howard (2018) Highlights concerns over the use of social media for propaganda and analyzes how states and political parties are using these platforms to shape public opinions at home and on foreign soil.
Freelon and Wells (2020) Discusses sociopolitical factors contributing to disinformation in recent times.
Humprecht et al. (2020) Recognizes the menace of disinformation in democracies and analyzes conditions that contribute to resilience to disinformation through a cross-country study.
Au et al. (2021) Proposes a multi-stage model depicting the pathways from online misinformation and fake news toward ideological polarization.
Serrano-Puche (2021) Highlights the reliance of fake news on emotionally provocative content to induce outrage and its subsequent virality on platforms.
Soares and Recuero (2021) Analyzes political disinformation during the 2018 presidential elections in Brazil and the role of hyper-partisan outlets in shaping discursive struggles and strategies used for dissemination of disinformation.
Bringula et al. (2022) Analyzes factors that contribute to individuals’ susceptibility to disinformation.
Piazza (2022) Highlights the role of disinformation in fueling domestic terrorism, with political polarization mediating the relationship.
Shu (2022) Provides a computational perspective on combating disinformation on digital media.
Davidson and Kobayashi (2022) Investigates individual differences in recall and recognition based on exposure to factually correct and false content.
Pentney (2022) Draws attention to government disinformation which is comparatively less discussed as opposed to other forms of disinformation, and discusses it in conjunction with its regulation and freedom of expression.
Hate speech
Papacharissi (2004) Analyzes messages on political newsgroups online and finds most messages being civil in nature while suggesting that the absence of face-to-face interaction may have resulted in heated discussions.
Waldron (2012) Sheds light on the perils of hate speech and underscores the need for its regulation.
Cohen-Almagor (2014) Defines hate speech, examines how it proliferates on the internet, and discusses how the issue can be mitigated.
Iginio et al. (2015) Discusses measures to counter hate speech on the digital channels.
Howard (2019) Contends that the debate on banning hate speech needs disaggregation and discusses the phenomenon alongside freedom of expression.
Bilewicz and Soral (2020) Highlights the impact of hate speech on emotions and behaviors and its influence in breeding intergroup contempt and desensitizing people which reduces the ability to recognize the offensive character of such language.
Paz et al. (2020) Analyzes hate speech discussions, particularly in the legal and communication fields through a review of literature.
Piazza (2020) Finds that hate speech by political personalities fuels domestic terrorism, with the relationship mediated by political polarization.
Kojan et al. (2020) Examines the role of counter-speech as a mechanism in bolstering public deliberation and reducing polarization.
Matamoros-Fernández and Farkas (2021) Reviews literature on hate speech and racism in the context of social media research.
Ali et al. (2022) Discusses methods for hate speech detection on Twitter as a social media platform.
García-Díaz et al. (2022) Analyzes strategies involving feature combinations to detect hate speech in the Spanish language.
Internet censorship and social media monitoring
Lyon (2007) Draws on global examples to analyze surveillance and offers tools for analyzing surveillance trends.
Marwick (2012) Discusses the framing of social surveillance and its distinctions from traditional surveillance along the axes of power (hierarchy and reciprocity).
Arsan (2013) Examines censorship and self-censorship in Turkey based on the experiences of journalists in the region.
Westcott and Owen (2013) Examines how lateral surveillance can be leveraged to initiate friendships on social networks with Twitter as the platform.
Richards (2013) Recognizes the harms of surveillance and discusses legal aspects in light of the need to preserve civil liberties.
Staples (2014) Discusses surveillance alongside the rise of social media and the impact of surveillance on how we comprehend individuals and our lives.
Trottier (2016) Describes growing social media monitoring practices as a central tenet of surveillance.
Busch et al. (2018) Examines internet censorship efforts through blocking mechanisms in liberal democracies.
Kruse et al. (2018) Highlights that social media users do not engage in communicative action online due to fear of being surveilled and engage only with politically similar individuals.
Chang and Lin (2020) Demonstrates the use of internet censorship as a reactive mechanism in autocracies to suppress civil society.
Cobbe (2021) Discusses algorithm-driven censorship on social media platforms as an approach to content moderation.
Büchi et al. (2022) Discusses the negative impact of surveillance and collection of digital traces on individuals’ communication behaviors online.
Chan et al. (2022) Finds links between surveillance and censorship, with varying levels engendering or suppressing political engagement in different ways.
Zhai et al. (2022) Discusses the prevalence of internet surveillance and examines the reasons why certain individuals approve of such practices involving censorship and surveillance.

Polarization of Society

Polarization refers to a widening gulf concerning perspectives on various political issues between societal groups (Enders & Armaly, 2019) and represents the distance between opposing political orientations (Kearney, 2019). While some degree of polarization is observed in the majority of pluralistic democracies and can encourage political engagement, a high degree of polarization can lead to estrangement between social groupings and impede democratic dialogue (McCoy et al., 2018). In case of extreme polarization, a society’s typical diversity of distinctions rapidly aligns along a single axis, and individuals increasingly view and describe politics and society in terms of split factions (McCoy et al., 2018). Extreme polarization tends to have a large affective component (Iyengar et al., 2012), and poses a grievous threat to democracy by eroding social cohesiveness and disrupting political stability (McCoy et al., 2018).

Polarization poses challenges that extend well beyond its impacts on political stability and democratic discourse. In recent years, polarization is believed to have hindered the public health response in countries (de Bruin et al., 2020), abetted by disinformation surrounding the pandemic (Casola et al., 2022). In a similar vein, polarization has resulted in divergent perspectives on major issues such as climate change (Hai & Perlman, 2022), domestic terrorism (Piazza, 2020), and increased ubiquity of protests (Heaney, 2020). In an era of deep polarization, a central question concerns the contribution of fake news to polarization in society (Qureshi et al., 2020), while hate speech with partisan leanings is also believed to aggravate polarization (Piazza, 2020). In a similar vein, recent studies also examine instances of social media censorship in countries and highlight that countries that engage in high levels of censorship tend to experience a high degree of polarization among their citizens (Frasz, 2022). Nevertheless, despite the fact that censorship and surveillance are closely related, varying levels of these phenomena can either promote or inhibit political participation (Chan et al., 2022), a phenomenon that is closely linked to polarization in the political realm (Argyle & Pope, 2022).

Extant literature has focused primarily on linear investigations into the relationship between polarization and factors such as media censorship (Arbatli & Rosenberg, 2021), fake news (Piazza, 2022), and hate speech (Piazza, 2020). However, these inquiries have predominantly dealt with components in isolation and assumed linear relationships. Political processes are characterized by non-linearity (Richards & Doyle, 2000; Timofeev, 2014), and recent literature underscores the importance of non-linearity in comprehending factors that can explain and combat polarization (Kelly, 2021; Leonard et al., 2021). In view of the complexity of polarization as a phenomenon, understanding its intricacies necessitates an appreciation of its asymmetric nature and a perspective of it as a collection of complex systems comprising interactions between various components (Kelly, 2021; Leonard et al., 2021; Levin et al., 2021). Towards addressing this complexity, we first delve deeper into each of the constructs influencing polarization in the following sections.

Political Disinformation

Disinformation is regarded as false, inaccurate, or misleading information intentionally presented, disseminated, and exploited with a malicious purpose (Directorate-General for Communications Networks, 2018). The role of social media in propagating disinformation is widely acknowledged in the literature (Grossman & Goldstein, 2021; Shu, 2022). Research suggests that online falsehood has resulted in ideologically polarized societies (Au et al., 2021). While producers of disinformation seek to alter readers’ perceptions of issues and influence their opinions and actions (Allcott & Gentzkow, 2017), the production of polarizing news also serves commercial interests and helps generate revenue (Tandoc et al., 2018). Although polarized audiences may be motivated to watch news of a partisan nature (Arceneaux & Johnson, 2013), exposure to such partisan news exacerbates the polarization that already prevails (Levendusky, 2013). Disinformation may exploit affective triggers to attract the reader’s attention, using emotionally provocative content to elicit strong feelings of outrage in users and subsequent virality on digital platforms (Serrano-Puche, 2021). Such mobilization of strong emotions is also associated with the rise of ideological polarization in the public sphere (Rosas & Serrano-Puche, 2018).

A wide range of actors may engage in disinformation warfare online. While the modal actors engaged in disinformation campaigns are generally the government ministries in authoritarian regimes, these actors are predominantly the political parties in democratic states (Bradshaw & Howard, 2018). While the former’s disinformation campaigns function alongside efforts to secure cyberinfrastructure and content as well as exertion of pressure on the regulation of cyberspace, the latter focus primarily on targeting the domestic audience during elections or major political events (Bradshaw & Howard, 2018). In this context, governments may employ disinformation as a tool for withholding information, which helps them maintain a monopoly over information and diminishes society’s power to hold them accountable for inaction when the state controls access to information (Pentney, 2022).

Although the diverse interests outlined here are limited to the government and political parties, foreign adversaries may also have vested interests in intervening in the affairs of another nation. Foreign governments may employ clandestine psychological warfare on social media for a range of persuasion purposes, ranging from influencing users’ attitudes about certain politicians or policies to undermining trust in the government (Arnold et al., 2021). For instance, Russia has been accused of engaging in propaganda in the run-up to the 2016 U.S. presidential election (de Groot, 2022), and Iranian state actors are said to have expanded their disinformation campaign on social media in order to sow dissension inside the U.S. (Bennett, 2021).

The discussion thus far reveals a complex web of disinformation operations by a range of players, including the state, political parties, and foreign governments. However, few studies to date have examined all of these campaigns in tandem (Ruohonen, 2021). In particular, the extent of influence that each type of disinformation wields on societal polarization remains largely underexplored in the extant literature.

Hate Speech

Hate speech refers to abusive or hostile remarks intended to demean a segment of the population on the basis of their actual or perceived innate characteristics (Cohen-Almagor, 2014). Continuous exposure to hate speech can have crippling consequences on an individual’s behavioral and emotional response and breeds intergroup contempt as a result of the pejorative rhetoric that hate speech carries (Bilewicz & Soral, 2020). Hate speech can result in a deterioration of empathy and social norms, causing society to become more receptive to offensive language and less tolerant of targeted communities.

The repercussions of hate speech include fostering an environment of prejudice and intolerance, promoting acts of violence (Iginio et al., 2015), incivility, denying individuals their personal liberties, and stereotyping certain social groups (Papacharissi, 2004). Politicians who incite hate speech against specific ethnic or religious groups in society make those groups’ members more allied with other members of their own group and less tolerant of individuals from rival factions (Piazza, 2020). In this regard, hate speech poses a threat to the social fabric by increasing divisiveness (Piazza, 2020). While research highlights that both hate speech and disinformation result in polarization (Kojan et al., 2020), it can be challenging to distinguish between the two when political discourse becomes increasingly infused with hate (Stokel-Walker, 2019). In light of this blurring distinction between the two phenomena, we contend that the interactions between hate speech and disinformation toward influencing polarization merit scrutiny.

Internet Censorship and Social Media Monitoring

Censorship involves governments’ efforts to suppress the free flow of ideas and information that deviates from the status quo and delineate which communications are acceptable and unacceptable in society (Liu & Wang, 2021). Unlike censorship, which blocks online content, social media monitoring employs surveillance technologies. Online surveillance refers to governments’ ongoing automated efforts to gather, store, and analyze digital traces of particular individuals or the entire population (Büchi et al., 2022; Chan et al., 2022). Censorship and surveillance are inextricably linked in the present day, considering that online surveillance activities may involve intercepting all outgoing internet requests and assessing whether they are for banned websites (Doctorow, 2012).

In light of the internet’s potential to enhance democracy by expanding avenues for political participation (Margetts, 2013), states’ fears and skepticism over this emancipatory potential have resulted in censorship efforts to bar users from gaining access to material from sources the authorities objected to (Busch et al., 2018). Under the guise of national security, governments have been accused of censoring the internet, and such actions are viewed as a danger to civil society in the digital age (Chang & Lin, 2020) through interference in citizens’ online freedom and access to information (Busch et al., 2018). On similar lines, online surveillance is viewed as an advanced form of coercion and social control that hinders the exercise of basic civil liberties (Richards, 2013) by limiting participation and impeding the flow of knowledge in society (Marwick, 2012; Staples, 2014; Trottier, 2016; Westcott & Owen, 2013). The looming threat of social media surveillance in societies may deter citizens from engaging in political debates and diminish the mobilizing power of online news content (Chan et al., 2022).

The spiral of silence theory offers a useful lens to unravel the influence of censorship and surveillance on societal polarization. According to the spiral of silence theory of media effects on public opinion, people decide whether to voice their views on issues based on the prevailing climate of opinion (Noelle-Neumann, 1993). However, censorship measures by the state could also be accompanied by self-censorship. For instance, while censorship could be imposed on the journalistic press (Arsan, 2013), journalists also could self-censor to avoid political pressures (Schimpfössl et al., 2020). When individuals self-censor, the opinion climate becomes increasingly homogeneous, and a dominating viewpoint in society is reinforced (Noelle-Neumann, 1993). Similarly, when individuals are aware that their social media activity is being monitored, they tend to recalibrate their behavior (Lyon, 2007). Monitoring content on SNSes thereby leads to constrained behavior on these sites (Trottier, 2016). The monitoring inhibits the freedom of expression on these sites, as users are cautious about expressing their political leanings honestly for fear of grave consequences such as online harassment and its reverberating effects on their work and personal lives (Kruse et al., 2018).

In light of this restrictive nature of publicly articulated viewpoints, public discourse tends to be choked by a spiral of silence that threatens democratic systems (Dahlgren, 2002, 2005) and leads to polarization (Sunstein, 2018). Moreover, given the inseparability of surveillance and censorship in current times (Doctorow, 2012) and discussions of the two concepts in conjunction with one another (Zhai et al., 2022), we contend that a combined perspective could shed light on how their interaction contributes to the polarization of societies.

Complexity Theory and Configurational Analysis

Our study is conceptually grounded in the core tenets of complexity theory and configurational analysis (Rihoux & Ragin, 2008; Woodside, 2017), which aid in building a thorough comprehension of interlinkages between decisions and individuals’ actions in a networked world (Merali, 2006). The theory of complexity “suggests that the whole is greater than the sum of its parts in complex adaptive systems” (Hurlburt, 2013, p. 28) and provides an opportunity to streamline ostensibly complex systems. The theory is predominantly based on the studies of Byrne (1998) and Byrne and Callaghan (2013) and comprises two basic tenets: (a) the complex context comprises an open system with transformative ability, and (b) it has a pathway to accomplish a particular outcome and can be configured towards that purpose (Byrne, 2005). Hence, complex systems are created on the basis that they constitute several complex interacting elements (Frenken, 2006). The explicit modeling of elements such as “conjunctural causation, equifinality, causal asymmetry, and sufficiency” underpins configurational analysis as an approach (Fainshmidt et al., 2020, p. 456). In the setting of conjunctural causation, the outcome is dependent on a combination of causes, and configurational analysis assists researchers in theorizing about and empirically investigating causal complexity (Fainshmidt et al., 2020).

Problems in the real world are incredibly challenging to explicate through precisely symmetrical relations between the antecedents and consequences of interest, as all real-world problems have asymmetric linkages (Ragin, 2008). This asymmetry is used to demonstrate how simple linear models cannot account for complicated interactions (Anderson, 1999). Byrne (2005) expands on this by emphasizing the relationship between the theory of complexity and configurations, which are combinations of characteristics that illuminate the pathways to the outcome of interest. In this aspect, causal complexity has a strong relationship with complexity theory and refers to circumstances in which numerous explanatory factors interact in sophisticated and, at times, divergent patterns, and manifests in equifinality (i.e., an outcome can be explained through several pathways) (Furnari et al., 2021).

In the wake of recent arguments in the IS discipline regarding the appropriateness of conventional statistical methods for modeling complex phenomena, questions have been raised around the assumptions surrounding simple causality2 (Delgosha et al., 2021; El Sawy et al., 2010; Liu et al., 2017). In a similar sense, concerns have been raised around the issues of non-normality of datasets, insufficient attention toward contrarian cases, and multicollinearity (Olya & Mehran, 2017). The configurational perspective is a welcome departure from these conventional approaches since it offers scholars in the IS discipline a new set of tools for data analysis, a unique theoretical base, and a holistic approach to extend and enhance our understanding of constructs (Delgosha et al., 2021).

In the context of the current study, we contend that polarization is best analyzed through the lens of complexity theory (Kelly, 2021; Levin et al., 2021). We support our assertion with two reasons. First, incorporating the principle of equifinality, the outcome of interest can be equally characterized by various sets of condition variables that combine in sufficient configurations for the outcome to occur (Fiss, 2011; Woodside, 2014). In this regard, the significance of propaganda operations in generating societal polarization is obvious from our discussion thus far (Neyazi, 2020). Despite the fact that disinformation campaigns may be executed by a range of entities, including the state, political parties, and foreign governments, few studies have analyzed all of these efforts concurrently to date (Ruohonen, 2021). In particular, the extent of influence each type of disinformation wields on polarizing societies remains largely underexplored in the extant literature. Hate speech contributes to polarization as well (Piazza, 2020), while state-controlled media with censorship efforts could also polarize societies (Zhu, 2019). Hence, configurations resulting in polarization may include combinations of disinformation emerging from the state, political parties, and foreign adversaries coupled with hate speech, censorship, and surveillance efforts by the governments.

Second, the principle of causal asymmetry contends that the outcome’s occurrence depends on how the condition variables interact with one another and not just on their presence or absence (Woodside, 2014). In this regard, while hate speech poses a threat to the social fabric through a deepening of polarization (Piazza, 2020), government efforts to censor online content and monitor social media may limit publicly expressed opinions through a spiral of silence, which could threaten democratic systems (Dahlgren, 2002, 2005) and lead to polarization (Sunstein, 2018). While nationalists may adhere to state-controlled media, liberals may seek less regulated media outlets (Zhu, 2019). In this aspect, censorship creates political divisiveness and breeds extremism (Lane et al., 2021). However, citizens may fail to comprehend the significance of free speech, and certain segments may become accustomed to the stifling of other viewpoints and approve of government crackdowns on nonconformist beliefs (McCall, 2022). In addition, recent research also highlights citizens’ awareness of risks posed by free speech, such as disinformation (Knight Foundation, 2022). Also, while research indicates that surveillance and censorship are closely connected, they can result in differing degrees of political engagement or suppression (Chan et al., 2022). This complex concoction opens up the possibility of these phenomena interacting in myriad ways to produce polarization.

The discussion thus far demonstrates the complexity of polarization and its causative mechanisms, while also emphasizing the novelty of configurational analysis, in which the mutual influence of the variables enables researchers to study asymmetrical condition variable-outcome relationships. In this regard, Fig. 1 exemplifies our conceptual framework by displaying the condition variables in various types of political disinformation, hate speech, censorship, and social media monitoring that predict societal polarization, our outcome of interest.

Fig. 1.

Outcome of interest and condition variables

Methodology

Data

We rely on archival data collected for each of the condition variables and the outcome of interest from the Varieties of Democracy (V-Dem) database. The V-Dem project is headquartered in the department of political science at the University of Gothenburg, Sweden (Coppedge et al., 2022; Pemstein et al., 2022). Our reliance on this archival dataset is driven by two key reasons: (a) the cross-country nature of the current study constrains the collection of primary data on account of the enormity of the effort such a study would require (Frankfort-Nachmias & Nachmias, 2007); and (b) publicly available archival data, as noted by researchers (e.g., Arayankalam & Krishnan, 2021; Jarvenpaa, 1991), is accompanied by several benefits, including easy replication (Calantone & Vickery, 2010) and robustness to the threat of common method bias (Woszczynski & Whitman, 2004).

Notably, the V-Dem project team comprises over 3,500 country experts who support the data collection process by providing expert information through online surveys. The project applies a rigorous process for recruiting country experts. The research team identifies between one hundred and two hundred experts in each country. To preserve objectivity, the study team also avoids prominent individuals, such as political party members. In addition, individuals from varied disciplines, including academia, the media, and the judicial system, are included to ensure a pool with diverse expertise. The project team also ensures the construct validity of instruments by conducting tests based on differential item functioning, a method for evaluating the cross-cultural validity of measures that permits the pooling of data for macro-level investigations (Arayankalam & Krishnan, 2021; Tennant et al., 2004).

Following the initial identification of a pool of potential experts for each country, basic information about each expert (e.g., educational qualifications, current position, and area of documented expertise) is compiled (Coppedge et al., 2022). Prior to the completion of coding, country coordinators engage in multiple interactions with all the experts to ensure internal data consistency. The research team also ensures that at least three out of five experts for each country are natives or permanent residents (Coppedge et al., 2022). However, in the case of countries where it is difficult to identify in-country experts who are qualified and independent of the prevailing regime, allowances are made to steer clear of potential western or northern biases in coding (Coppedge et al., 2022). Despite this step to minimize biases, the possibility of experts exhibiting varying levels of reliability and bias cannot be ruled out (Coppedge et al., 2022). In light of this fact, the V-Dem team employs rigorous statistical techniques that leverage patterns of cross-rater disagreement to estimate variations in reliability and bias and to adjust the estimates of the latent, indirectly observed concepts being measured (Coppedge et al., 2022). In doing so, the process enhances confidence in the reliability of the estimates provided by the team and corrects for measurement errors, if any (Coppedge et al., 2022). As a final step, to account for any remaining errors, the research team administers a post-survey questionnaire to each expert, capturing their understanding of democracy in their country and in other countries with diverse characteristics of democracy or autocracy, incorporating some of these output values into the measurement model and others ex post to examine the validity of the model’s output (Coppedge et al., 2022).

In summary, the process ensures that the final values offered by the V-Dem database are optimized for any potential biases and fit for empirical evaluation by ensuring high levels of validity and reliability of the data through adherence to strict protocols (Coppedge et al., 2022; Pemstein et al., 2022). The database has been utilized in studies published in reputed journals such as the British Journal of Political Science (e.g., Edgell et al., 2022) and American Journal of Political Science (e.g., Claassen, 2020) while it has also been leveraged for disinformation studies in the information systems (IS) domain (Arayankalam & Krishnan, 2021). Our dataset for performing the fuzzy-set qualitative comparative analysis (fsQCA) consists of data from 177 countries for 2021, with two countries excluded for missing data.3 The complete list of countries included in the study is provided in Appendix A.

Condition Variables and Outcome

The outcome of interest in this study is polarization in society, and the condition variables are (a) the government’s dissemination of disinformation on social media, (b) political parties’ dissemination of disinformation on social media, (c) disinformation by foreign governments, (d) hate speech by political parties, (e) censorship of the internet, and (f) monitoring of social media by the government. The condition variables related to disinformation and hate speech are derived from the measures provided by the Digital Society Project, which focuses on the political environment of the internet and social media (Mechkova et al., 2019). Throughout this article, we refer to the dissemination of disinformation to influence the population within the country as domestic disinformation. In this context, the three disinformation-related condition variables seek to understand separately the extent to which governments, political parties, and foreign adversaries and their agents leverage social media to influence the populace and the political climate in the country. The condition related to hate speech seeks to measure how often major political parties indulge in hate speech in their rhetoric, whereas the conditions pertaining to censorship and social media monitoring quantify the extent to which governments censor information on digital media and monitor political content on SNSes. Each of these variables and the outcome was operationalized using measures from the V-Dem database (Coppedge et al., 2022; Pemstein et al., 2022). Table 1 provides a summary of the measures, their description, and the questions used to operationalize them.

Table 1.

Condition variables and outcome of interest

Measure Description Question used to operationalize this measure
Conditions
  DIG Domestic disinformation by government “How often do the government and its agents use social media to disseminate misleading viewpoints or false information to influence its own population?” (Coppedge et al., 2022, p. 319)
  DIP Domestic disinformation by political parties “How often do major political parties and candidates for office use social media to disseminate misleading viewpoints or false information to influence their own population?” (Coppedge et al., 2022, p. 320)
  DIF Disinformation by foreign governments “How routinely do foreign governments and their agents use social media to disseminate misleading viewpoints or false information to influence domestic politics in this country?” (Coppedge et al., 2022, p. 321)
  HAT Hate speech by political parties “How often do major political parties use hate speech as part of their rhetoric?” (Coppedge et al., 2022, p. 334)
  CIE Efforts towards censorship of internet “Does the government attempt to censor information (text, audio, or visuals) on the Internet?” (Coppedge et al., 2022, p. 202)
  GSM Monitoring of social media by the government “How comprehensive is the surveillance of political content in social media by the government or its agents?” (Coppedge et al., 2022, p. 324)
Outcome of interest
  POL Polarization in society “Is society polarized into antagonistic, political camps?” (Coppedge et al., 2022, p. 227)

To make the standardized values of the condition variables more intuitive, the notation for each country’s score on each variable was reversed, following prior literature that has leveraged a similar approach (Arayankalam & Krishnan, 2021; MacCabe et al., 2012); a minimal sketch of this step follows Table 2. A summary of the variables and descriptive statistics is provided below (see Table 2).

Table 2.

Condition variables influencing polarization as the outcome of interest

Measures Description M SD Min Max
DIG Domestic disinformation by government 0.196 1.551 -2.858 3.555
DIP Domestic disinformation by political parties 0.264 1.269 -2.993 3.566
DIF Disinformation by foreign governments 0.016 1.281 -2.496 4.194
HAT Hate speech by political parties -0.084 1.328 -2.761 2.963
CIE Efforts towards censorship of internet -0.351 1.398 -2.215 4.147
GSM Monitoring of social media by the government 0.117 1.402 -2.885 3.689
POL Polarization in society 0.290 1.396 -3.033 4.085

DIG Government’s dissemination of disinformation (domestic), DIP Political parties’ dissemination of disinformation (domestic), DIF Disinformation by foreign governments, HAT Hate speech by political parties, GSM Monitoring of social media by the government, CIE Efforts towards censorship of internet, POL Polarization of society, M Mean, SD Standard deviation, Min Minimum value, Max Maximum value
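To make the reversal step concrete, here is a minimal sketch, assuming a pandas DataFrame `df` with one row per country and columns named as in Table 2; the file name, the frame itself, and the sign-flip implementation are our illustrative assumptions, not the authors’ actual script.

```python
import pandas as pd

# Hypothetical extract of the V-Dem data: one row per country, with
# columns named as in Table 2 (the file name is a placeholder).
df = pd.read_csv("vdem_2021_extract.csv")

# The V-Dem scales for these items run so that higher values indicate
# less of the phenomenon. Assuming the reversal is a simple sign flip,
# higher scores now mean more disinformation, hate speech, censorship,
# and monitoring.
condition_cols = ["DIG", "DIP", "DIF", "HAT", "CIE", "GSM"]
df[condition_cols] = -df[condition_cols]
```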

Data Analyses and Results

Contrarian Case Analysis

When two variables are examined and an influence of one over the other is established as a main effect, it signifies that this holds true for the majority of instances in the sample (Pappas et al., 2016a). However, an association of the opposite kind may also manifest in some instances, necessitating investigation of such occurrences as well (Woodside, 2014). In this regard, contrarian analysis facilitates the detection of such relationships, hence justifying the rationale for configurational analysis (Woodside, 2014). This analysis is rarely carried out in research articles (Pappas & Woodside, 2021). Seeking guidance from Pappas et al. (2020), quintiles are created on each variable, and cross-tabulations are performed across these quintiles. Results are reported in Table 3 below, and a sketch of the procedure follows the table. Contrarian cases are denoted in bold, whilst the main effect is shown in italics. The presence of a significant number of contrarian cases signals the need for a QCA analysis.

Table 3.

Results from the contrarian case analysis

POL POL
1 2 3 4 5 1 2 3 4 5
GSM (phi = 0.467, p < 0.05) 1 16 (9.04%) 5 (2.82%) 7 (3.95%) 4 (2.26%) 3 (1.69%) CIE (phi = 0.422, p < 0.05) 1 15 (8.47%) 6 (3.39%) 4 (2.26%) 7 (3.95%) 3 (1.69%)
2 4 (2.26%) 11 (6.21%) 10 (5.65%) 5 (2.82%) 6 (3.39%) 2 6 (3.39%) 7 (3.95%) 9 (5.08%) 9 (5.08%) 5 (2.82%)
3 2 (1.13%) 9 (5.08%) 7 (3.95%) 7 (3.95%) 10 (5.65%) 3 2 (1.13%) 12 (6.78%) 9 (5.08%) 4 (2.26%) 8 (4.52%)
4 3 (1.69%) 9 (5.08%) 7 (3.95%) 10 (5.65%) 7 (3.95%) 4 5 (2.82%) 4 (2.26%) 8 (4.52%) 11 (6.21%) 8 (4.52%)
5 10 (5.65%) 2 (1.13%) 4 (2.26%) 10 (5.65%) 9 (5.08%) 5 7 (3.95%) 7 (3.95%) 5 (2.82%) 5 (2.82%) 11 (6.21%)
DIG (phi = 0.534, p < 0.001) 1 16 (9.04%) 8 (4.52%) 8 (4.52%) 3 (1.69%) 0 (0%) DIP (phi = 0.589, p < 0.001) 1 17 (9.6%) 7 (3.95%) 8 (4.52%) 2 (1.13%) 1 (0.56%)
2 5 (2.82%) 11 (6.21%) 12 (6.78%) 4 (2.26%) 4 (2.26%) 2 9 (5.08%) 11 (6.21%) 10 (5.65%) 2 (1.13%) 4 (2.26%)
3 6 (3.39%) 4 (2.26%) 7 (3.95%) 12 (6.78%) 6 (3.39%) 3 5 (2.82%) 5 (2.82%) 11 (6.21%) 8 (4.52%) 7 (3.95%)
4 3 (1.69%) 8 (4.52%) 4 (2.26%) 9 (5.08%) 12 (6.78%) 4 2 (1.13%) 8 (4.52%) 2 (1.13%) 11 (6.21%) 12 (6.78%)
5 5 (2.82%) 5 (2.82%) 4 (2.26%) 8 (4.52%) 13 (7.34%) 5 2 (1.13%) 5 (2.82%) 4 (2.26%) 13 (7.34%) 11 (6.21%)
HAT (phi = 0.766, p < 0.001) 1 23 (12.99%) 9 (5.08%) 2 (1.13%) 1 (0.56%) 0 (0%) DIF (phi = 0.408, p < 0.05) 1 10 (5.65%) 11 (6.21%) 6 (3.39%) 5 (2.82%) 3 (1.69%)
2 7 (3.95%) 11 (6.21%) 10 (5.65%) 6 (3.39%) 2 (1.13%) 2 6 (3.39%) 6 (3.39%) 10 (5.65%) 7 (3.95%) 7 (3.95%)
3 2 (1.13%) 6 (3.39%) 13 (7.34%) 8 (4.52%) 6 (3.39%) 3 6 (3.39%) 8 (4.52%) 10 (5.65%) 7 (3.95%) 4 (2.26%)
4 2 (1.13%) 8 (4.52%) 7 (3.95%) 7 (3.95%) 12 (6.78%) 4 5 (2.82%) 8 (4.52%) 8 (4.52%) 9 (5.08%) 6 (3.39%)
5 1 (0.56%) 2 (1.13%) 3 (1.69%) 14 (7.91%) 15 (8.47%) 5 8 (4.52%) 3 (1.69%) 1 (0.56%) 8 (4.52%) 15 (8.47%)

Cases in bold represent contrarian cases. Cases in italics represent the main effect. DIG Government’s dissemination of disinformation (domestic), DIP Political parties’ dissemination of disinformation (domestic), DIF Disinformation by foreign governments, HAT Hate speech by political parties, GSM Monitoring of social media by the government, CIE Efforts towards censorship of internet, POL Polarization of society
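To illustrate the quintile procedure, here is a minimal sketch under the same hypothetical setup as the earlier sketch; the function name and the use of pandas/scipy are our assumptions rather than the authors’ reported toolchain, and the phi-type statistic is computed as the square root of chi-square over n.

```python
import pandas as pd
from scipy.stats import chi2_contingency

def contrarian_crosstab(df: pd.DataFrame, condition: str, outcome: str = "POL"):
    # Split the condition and the outcome into quintiles (1 = lowest).
    cond_q = pd.qcut(df[condition], q=5, labels=range(1, 6))
    out_q = pd.qcut(df[outcome], q=5, labels=range(1, 6))
    table = pd.crosstab(cond_q, out_q)
    # Phi-type association statistic for the 5x5 table.
    chi2, p_value, _, _ = chi2_contingency(table)
    phi = (chi2 / table.to_numpy().sum()) ** 0.5
    return table, phi, p_value

# Cells along the main diagonal carry the main effect; well-populated
# off-diagonal cells (e.g., low hate speech but high polarization) are
# contrarian cases that motivate a configurational analysis.
table, phi, p = contrarian_crosstab(df, "HAT")
```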

fsQCA Method

fsQCA is a set-theoretical method that integrates fuzzy logic and fuzzy sets with QCA (Ragin, 2000). The advantages of this approach result from the restrictions imposed by conventional regression-based techniques (Woodside, 2017). In regression-based methods, leaving out a relevant variable reduces the explanatory power and, if the omitted variable is correlated with the included ones, yields biased estimates. QCA, in contrast, relies on Boolean algebra rather than correlations, thereby eliminating the issue of omitted variable bias (Fainshmidt et al., 2020). An implication of this is the “absence of a methodological requirement for control variables” when evaluating QCA-based results (Fainshmidt et al., 2020, p. 458).

The method empirically investigates the relationship between the outcome and the multiple combinations of condition variables that can explain it as necessary and sufficient conditions. In this process, we first calibrate the raw data and then analyze it for the presence of necessary conditions, which form a superset of the outcome (Ragin, 2008). This implies that the outcome’s membership score is lower than the causal conditions’ fuzzy-set membership scores for each of the cases in the sample being analyzed (Pappas et al., 2020). As a next step, we analyze the data and cases for sets of sufficient conditions that combine towards the outcome of interest. We then interpret the results and finally test the predictive validity of the model.

Data Calibration

The next step in configurational analysis is data calibration, which entails converting variable values to fuzzy set membership scores (Ragin, 2008). Calibration transforms ordinal or interval-scale data into set membership scores through either of two methods, namely the direct and the indirect method of calibration. In the direct method, researchers select three qualitative breakpoints or anchors for full membership, full non-membership, and a crossover point. In contrast, the indirect method requires researchers to rescale the data through a qualitative assessment. The choice of method depends on the data and the researchers’ expertise (Ragin, 2008). However, the direct method is advised and more common; because its choice of thresholds is more transparent, it bolsters the replicability and validity of the findings and leads to more rigorous studies (Pappas & Woodside, 2021).

In line with this guidance, we employed the direct method of calibration in the current study. For calibration, we chose the three membership thresholds of 0.95, 0.50, and 0.05. The transformation places values on a log-odds metric and yields membership scores in the range of 0 to 1 (see Table 4). Seeking guidance from Pappas and Woodside (2021), we used percentiles to compute the threshold (or breakpoint) values and used these to calibrate the data in fsQCA (see Table 4 for the percentiles used as threshold values). In line with guidance from Fiss (2011), a constant of 0.001 was added to condition membership scores below 1, as fsQCA cannot evaluate cases with membership scores of exactly 0.5 (Delgosha et al., 2021; Pappas et al., 2020; Ragin, 2008). This correction ensures that no case is left out of the analysis while having no effect on the results (Pappas et al., 2020). A minimal sketch of this calibration follows Table 4.

Table 4.

Percentile thresholds used for calibration of condition variables and outcome of interest

Percentile thresholds DIG DIP DIF HAT CIE GSM POL
5% -2.276 -1.650 -1.861 -2.144 -2.023 -2.057 -2.114
50% 0.393 0.247 -0.105 -0.149 -0.671 0.136 0.189
95% 2.972 2.647 2.277 2.114 2.174 2.441 2.854

DIG Government’s dissemination of disinformation (domestic), DIP Political parties’ dissemination of disinformation (domestic), DIF Disinformation by foreign governments, HAT Hate speech by political parties, GSM Monitoring of social media by the government, CIE Efforts towards censorship of internet, POL Polarization of society. Values rounded to three decimal places
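The direct method’s logistic transformation can be sketched as follows; this is a minimal illustration of the standard procedure (Ragin, 2008) rather than the fsQCA software’s internals, using the DIG anchors from Table 4 and the hypothetical `df` from the earlier sketches.

```python
import numpy as np

def calibrate(values, full_non, crossover, full_mem):
    # Direct method of calibration: the three anchors map to log-odds
    # of -3, 0, and +3, i.e., memberships of ~0.05, 0.50, and ~0.95,
    # with a logistic curve in between (Ragin, 2008).
    v = np.asarray(values, dtype=float)
    log_odds = np.where(v >= crossover,
                        3.0 * (v - crossover) / (full_mem - crossover),
                        3.0 * (v - crossover) / (crossover - full_non))
    return 1.0 / (1.0 + np.exp(-log_odds))

# Anchors for DIG from Table 4 (5th, 50th, and 95th percentiles).
df["DIG_fz"] = calibrate(df["DIG"], -2.276, 0.393, 2.972)

# Per Fiss (2011), add 0.001 to condition memberships below full
# membership so that no case sits at exactly 0.5.
df["DIG_fz"] = np.where(df["DIG_fz"] < 1, df["DIG_fz"] + 0.001, df["DIG_fz"])
```

The remaining conditions and the outcome (e.g., a hypothetical `POL_fz` column) would be calibrated analogously with their own anchors from Table 4.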

Analyzing Necessary Conditions

The necessity condition analysis evaluates whether an individual condition must be present for the outcome to occur. This analysis examines the extent to which instances of the outcome concur in displaying the condition variable believed to be required for the occurrence of the outcome, as well as that variable’s empirical relevance in terms of consistency and coverage (Muñoz et al., 2022). Elaborating further, the result of this process step implies that when an outcome is present, the condition deemed necessary is also present, such that the outcome’s occurrence is contingent on the condition variable (Schneider et al., 2012). The results of the necessary condition analysis for high and low degrees of polarization are presented in this section (Ragin & Davey, 2016). Conditions with a consistency value above the threshold of 0.9 are regarded as necessary conditions (Ragin, 2008). In the context of the current study, however, the necessity analyses demonstrate that no single condition is necessary for either a high or a low level of polarization. The results are shown in Table 5, and a sketch of the underlying computation follows the table.

Table 5.

Analysis of necessity results for high and low levels of polarization

Condition variables POL  ~ POL
Consistency Coverage Consistency Coverage
DIG 0.720 0.778 0.549 0.577
 ~ DIG 0.609 0.581 0.789 0.733
DIP 0.745 0.787 0.549 0.564
 ~ DIP 0.587 0.572 0.792 0.752
DIF 0.702 0.725 0.614 0.617
 ~ DIF 0.629 0.626 0.726 0.703
HAT 0.804 0.818 0.530 0.525
 ~ HAT 0.533 0.538 0.816 0.802
CIE 0.680 0.714 0.590 0.602
 ~ CIE 0.621 0.608 0.720 0.687
GSM 0.702 0.726 0.597 0.601
 ~ GSM 0.614 0.610 0.728 0.704

DIG Government’s dissemination of disinformation (domestic), DIP Political parties’ dissemination of disinformation (domestic), DIF Disinformation by foreign governments, HAT Hate speech by political parties, GSM Monitoring of social media by the government, CIE Efforts towards censorship of internet, POL Polarization of society, ~ denotes negation
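As an illustration of how the consistency and coverage values in Table 5 are obtained, the snippet below applies the standard necessity formulas (Ragin, 2008) to calibrated membership scores; the scores shown are hypothetical and serve only to demonstrate the calculation.

import numpy as np

def necessity_metrics(condition, outcome):
    # Necessity consistency = sum(min(X, Y)) / sum(Y)
    # Necessity coverage    = sum(min(X, Y)) / sum(X)   (Ragin, 2008)
    overlap = np.minimum(condition, outcome).sum()
    return overlap / outcome.sum(), overlap / condition.sum()

# Hypothetical calibrated scores for one condition (e.g., HAT) and the outcome (POL)
hat = np.array([0.9, 0.7, 0.2, 0.8, 0.1])
pol = np.array([0.8, 0.6, 0.3, 0.9, 0.2])

print(necessity_metrics(hat, pol))          # HAT tested against POL
print(necessity_metrics(1 - hat, 1 - pol))  # ~HAT tested against ~POL
# A condition would be flagged as necessary only if its consistency exceeded 0.9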

Analyzing Sufficient Conditions

In contrast to the necessity analysis, which considers each condition individually, the sufficiency analysis examines whether a particular condition or a set of conditions is capable of producing the outcome. After calibrating the data, a truth table comprising 2^k rows was constructed, with each row representing a possible configuration of the k condition variables. The truth table was then refined using the criteria of frequency and consistency (Ragin, 2008). The frequency cut-off point is chosen to secure a minimum number of empirical cases for configurational evaluation.

Setting the frequency threshold for the truth table involves a trade-off: higher thresholds ensure more cases behind each retained configuration but reduce the sample percentage (or coverage) explained by the retained configurations, whereas lower thresholds boost coverage but leave fewer cases behind each configuration. Based on guidance from prior literature (Fiss, 2011; Pappas & Woodside, 2021; Ragin, 2008), samples with more than 150 cases may use a frequency threshold of 3. Since our sample comprises 177 cases, we adopted a frequency threshold of 3 to reduce the truth table.
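For illustration, the truth-table construction and frequency-based reduction can be sketched as follows. This is a simplified sketch on hypothetical data, not the fsQCA implementation: each case is assigned to the single corner of the 2^k property space in which its membership exceeds 0.5 on every condition, and rows observed fewer than three times are dropped.

import numpy as np
import pandas as pd

def build_truth_table(df, conditions, freq_threshold=3):
    # A case belongs to the truth-table row (corner of the 2^k space) in
    # which its membership exceeds 0.5 for each condition.
    rows = (df[conditions] > 0.5).astype(int)
    table = rows.value_counts().reset_index(name="n_cases")
    # Retain only configurations that meet the frequency threshold
    return table[table["n_cases"] >= freq_threshold]

# Hypothetical calibrated data for the six conditions across 177 countries
rng = np.random.default_rng(0)
data = pd.DataFrame(rng.random((177, 6)),
                    columns=["DIG", "DIP", "DIF", "HAT", "CIE", "GSM"])
print(build_truth_table(data, ["DIG", "DIP", "DIF", "HAT", "CIE", "GSM"]))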

Second, in accordance with recommendations from past research, a minimum consistency threshold of 0.75 must be specified after excluding low-frequency configurations (Rihoux & Ragin, 2008). At this stage, the researcher must identify natural breaking points and choose a suitable threshold with appropriate justification (Pappas & Woodside, 2021). To aid this choice, fsQCA provides an additional measure, the proportional reduction in inconsistency (PRI), which helps rule out configurations that are simultaneous subsets of both the outcome and its negation. The PRI consistency threshold should be set above 0.5, as values below this level are regarded as significantly inconsistent (Greckhamer et al., 2018). On this basis, we set the PRI consistency cut-off at 0.75 and the raw consistency threshold at 0.95 for a high degree of polarization as the outcome. For a low degree of polarization as the outcome, given the lower raw and PRI consistency values (only one configuration had a PRI consistency above 0.75), we chose a PRI consistency threshold of 0.5 and a raw consistency threshold of 0.9.
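The raw and PRI consistency of a candidate configuration can be computed as in the sketch below; the membership of a configuration is the fuzzy AND (minimum) across its conditions, and the data are hypothetical.

import numpy as np

def raw_consistency(config, outcome):
    # Sufficiency (raw) consistency = sum(min(X, Y)) / sum(X)   (Ragin, 2008)
    return np.minimum(config, outcome).sum() / config.sum()

def pri_consistency(config, outcome):
    # PRI penalizes configurations that are subsets of both Y and ~Y:
    # (sum(min(X, Y)) - sum(min(X, Y, 1 - Y))) / (sum(X) - sum(min(X, Y, 1 - Y)))
    xy = np.minimum(config, outcome).sum()
    xy_and_noty = np.minimum.reduce([config, outcome, 1 - outcome]).sum()
    return (xy - xy_and_noty) / (config.sum() - xy_and_noty)

# Hypothetical configuration DIG*DIP*~DIF evaluated against POL
dig = np.array([0.9, 0.8, 0.3])
dip = np.array([0.7, 0.9, 0.4])
dif = np.array([0.2, 0.1, 0.8])
pol = np.array([0.8, 0.9, 0.3])
config = np.minimum.reduce([dig, dip, 1 - dif])
print(round(raw_consistency(config, pol), 3), round(pri_consistency(config, pol), 3))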

fsQCA computes three solutions for the outcome of interest: the complex, parsimonious, and intermediate solutions. The complex solution presents all possible combinations of conditions derivable through traditional logical operations, whereas the parsimonious solution is a simplified form of it that retains only the most important conditions, those that cannot be excluded from any solution, also known as core conditions (Fiss, 2011). The intermediate solution lies between the two: it includes the parsimonious solution and is a subset of the complex solution. Whereas core conditions appear in both the intermediate and parsimonious solutions, peripheral conditions are those excluded from the parsimonious solution that occur only in the intermediate solution (Fiss, 2011). Accordingly, core conditions exhibit a strong causal relationship with the outcome, given their presence in both the intermediate and parsimonious solutions (Ho et al., 2016).

The results are depicted in Table 6. For a configuration to be deemed sufficient, its consistency value must exceed 0.75 (Pappas & Woodside, 2021) and its coverage value must exceed 0.2 (Rasoolimanesh et al., 2021). In the current context, all configurations (or models) conform to this requirement and may be regarded as the set of sufficient configurations. The analysis reveals three solutions each for high and low degrees of polarization. The solutions in Table 6 explain a high degree of polarization with raw coverage ranging from 29 to 33 percent of the cases associated with the outcome, while a low degree of polarization is explained through a range of 32 to 57 percent of the cases associated with it.

Table 6.

Configurations for high and low degree of polarization

(Table 6 is presented as an image in the original publication.)
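The coverage figures above follow the standard fsQCA definitions. A minimal sketch of raw and unique coverage on hypothetical membership scores, assuming a solution with at least two configurations, is given below; the unique coverage of a configuration is the drop in solution coverage when that configuration is removed (Ragin, 2008).

import numpy as np

def coverage_metrics(configs, outcome):
    configs = np.asarray(configs, dtype=float)
    solution = configs.max(axis=0)  # fuzzy OR across configurations
    sol_cov = np.minimum(solution, outcome).sum() / outcome.sum()
    results = []
    for i in range(configs.shape[0]):
        # Raw coverage of configuration i = sum(min(Xi, Y)) / sum(Y)
        raw = np.minimum(configs[i], outcome).sum() / outcome.sum()
        rest = np.delete(configs, i, axis=0).max(axis=0)
        rest_cov = np.minimum(rest, outcome).sum() / outcome.sum()
        results.append((round(raw, 3), round(sol_cov - rest_cov, 3)))  # (raw, unique)
    return round(sol_cov, 3), results

# Two hypothetical configurations evaluated against the outcome POL
c1 = np.array([0.7, 0.2, 0.6, 0.1])
c2 = np.array([0.3, 0.8, 0.5, 0.2])
pol = np.array([0.8, 0.7, 0.6, 0.3])
print(coverage_metrics([c1, c2], pol))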

Assessing Predictive Validity

A key robustness test for fsQCA is to assess predictive validity, i.e., to examine whether the model performs equally well on a different sample (Pappas et al., 2016b; Woodside, 2014). Although infrequently evaluated, this test is highly significant because a model with good fit may not always explain the outcome as expected (Pappas et al., 2020). Following Pappas et al. (2019), the test involves two steps. First, the sample is randomly partitioned into a subsample and a holdout sample, and the fsQCA analysis is conducted on the subsample. The results of this subsample analysis are then tested against the holdout sample. The two samples should explain the outcome to a similar extent.
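A minimal sketch of this two-step procedure on hypothetical data follows; the split ratio and random seeds are illustrative assumptions, and the configuration evaluated on the holdout sample mirrors Model 2 of Table 7 (GSM*CIE*DIG*DIP*~DIF*HAT).

import numpy as np
import pandas as pd

# Hypothetical calibrated dataset: six conditions plus the outcome POL
rng = np.random.default_rng(1)
df = pd.DataFrame(rng.random((177, 7)),
                  columns=["DIG", "DIP", "DIF", "HAT", "CIE", "GSM", "POL"])

# Step 1: randomly partition the sample into a modeling subsample and a holdout sample
subsample = df.sample(frac=0.5, random_state=42)
holdout = df.drop(subsample.index)
# (the fsQCA analysis is then run on the subsample to obtain the models)

# Step 2: evaluate a subsample model on the holdout data
config = np.minimum.reduce([
    holdout["GSM"], holdout["CIE"], holdout["DIG"],
    holdout["DIP"], 1 - holdout["DIF"], holdout["HAT"],
])
consistency = np.minimum(config, holdout["POL"]).sum() / config.sum()
coverage = np.minimum(config, holdout["POL"]).sum() / holdout["POL"].sum()
print(round(consistency, 3), round(coverage, 3))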

The configurations comprising complex antecedent conditions are consistent indicators of the outcome of interest, i.e., a high degree of polarization, with an overall solution consistency and coverage of 0.948 and 0.439, respectively (see Table 7 for details). Each solution in Table 7 represents a model that is plotted against the outcome of interest using the data from the holdout sample. The findings based on these plots indicate highly consistent models, with consistencies ranging from 0.935 to 0.995 and coverage from 0.244 to 0.337 (see Fig. 2).

Table 7.

Complex configurations indicating high degree of polarization for the subsample

Models from Subsample for high degree of polarization Raw coverage Unique coverage Consistency
1. ~ GSM* ~ CIE*DIG*DIP* ~ DIF* ~ HAT 0.279 0.066 0.953
2. GSM*CIE*DIG*DIP* ~ DIF*HAT 0.313 0.067 0.967
3. ~ GSM*CIE*DIG*DIP*DIF*HAT 0.291 0.051 0.985
Overall solution coverage 0.439
Overall solution consistency 0.948

DIG Government’s dissemination of disinformation (domestic), DIP Political parties’ dissemination of disinformation (domestic), DIF Disinformation by foreign governments, HAT Hate speech by political parties, GSM Monitoring of social media by the government, CIE Efforts towards censorship of internet, POL Polarization of society, ~ denotes negation

Fig. 2.

Testing for predictive validity

Supplemental Analysis

Prior political research suggests using lagged variables as a robustness check due to the dynamic nature of the data (Kenny, 2020). Hence, to confirm the validity of the results in the presence of a lag (see Footnote 4), we used the V-Dem database’s 2020 data for the condition variables (Coppedge et al., 2021; Pemstein et al., 2021) while retaining the 2021 values for polarization as the outcome (Coppedge et al., 2022; Pemstein et al., 2022).
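As a minimal sketch of this lag structure, assuming hypothetical country-level extracts from the two V-Dem waves, the matching can be expressed as:

import pandas as pd

# Hypothetical extracts: condition variables from the 2020 wave, outcome from the 2021 wave
conditions_2020 = pd.DataFrame({"country": ["A", "B", "C"],
                                "DIG": [0.1, 0.8, 0.4],
                                "HAT": [0.2, 0.9, 0.5]})
outcome_2021 = pd.DataFrame({"country": ["A", "B", "C"],
                             "POL": [0.3, 0.85, 0.4]})

# One-year lag: each country's 2020 conditions are paired with its 2021 outcome
lagged = conditions_2020.merge(outcome_2021, on="country")
print(lagged)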

We used the direct method of data calibration. The thresholds based on this method are listed in Table 8.

Table 8.

Percentile thresholds used for calibration of condition variables and outcome of interest

Percentile thresholds DIG DIP DIF HAT CIE GSM POL
5% -2.367 -1.505 -1.867 -1.987 -1.928 -2.064 -2.114
50% 0.202 0.271 -0.009 -0.006 -0.521 0.118 0.189
95% 2.566 2.320 2.167 2.154 2.114 2.443 2.854

DIG Government’s dissemination of disinformation (domestic), DIP Political parties’ dissemination of disinformation (domestic), DIF Disinformation by foreign governments, HAT Hate speech by political parties, GSM Monitoring of social media by the government, CIE Efforts towards censorship of internet, POL Polarization of society. Values rounded to three decimal places

The necessary condition analysis (see Table 9) reveals that none of the conditions is, by itself, necessary for the realization of the outcome.

Table 9.

Analysis of necessity results

Condition variables POL  ~ POL
Consistency Coverage Consistency Coverage
DIG 0.743 0.763 0.571 0.571
 ~ DIG 0.581 0.582 0.763 0.743
DIP 0.751 0.774 0.555 0.557
 ~ DIP 0.571 0.569 0.775 0.751
DIF 0.711 0.726 0.619 0.615
 ~ DIF 0.623 0.627 0.724 0.709
HAT 0.775 0.803 0.516 0.520
 ~ HAT 0.537 0.533 0.804 0.776
CIE 0.657 0.723 0.568 0.609
 ~ CIE 0.645 0.605 0.741 0.678
GSM 0.712 0.728 0.609 0.606
 ~ GSM 0.615 0.618 0.727 0.711

DIG Government’s dissemination of disinformation (domestic), DIP Political parties’ dissemination of disinformation (domestic), DIF Disinformation by foreign governments, HAT Hate speech by political parties, GSM Monitoring of social media by the government, CIE Efforts towards censorship of internet, POL Polarization of society, ~ denotes negation

As in the main analysis, for the sufficiency analysis (see Table 10) we set the PRI consistency cut-off at 0.75 and the raw consistency threshold at 0.95 for a high degree of polarization as the outcome. For a low degree of polarization as the outcome, we chose a PRI consistency threshold of 0.5 and a raw consistency threshold of 0.9.

Table 10.

Configurations for high and low degree of polarization

(Table 10 is presented as an image in the original publication.)

Interpreting the Results

The results from the primary analysis reveal six different combinations of disinformation, hate speech, internet censorship, and social media monitoring that are associated with high and low degrees of polarization. For a high degree of polarization, foreign disinformation and hate speech appear as core conditions; in two configurations they are reinforced by complementary ingredients in the form of disinformation by the state and political parties, and across all three configurations either one or both of surveillance and censorship are absent. Solutions S1H and S2H present combinations in which all forms of disinformation, namely disinformation by the state, political parties, and foreign governments, are present alongside hate speech by political parties. In addition, internet censorship is absent in solution S1H, while government monitoring of social media is absent in solution S2H. Solution S3H presents a combination in which disinformation peddled by foreign governments is present along with hate speech by political parties, whereas all other forms of disinformation are absent, as are both internet censorship and government monitoring of social media.

For a low degree of polarization, the absence of hate speech as a core condition common across the three configurations is reinforced by other peripheral conditions which vary across the three configurations and are discussed below. Solution S1L presents a combination where all forms of disinformation and hate speech are absent and a low degree of polarization is present regardless of internet censorship and monitoring of social media by the government. Solution S2L represents a combination wherein most conditions are absent, and the solution is indifferent to disinformation by foreign governments. Solution S3L represents a combination wherein hate speech is absent, disinformation by the state and foreign governments is present, both social media monitoring and internet censorship are present, and a low degree of polarization is present regardless of disinformation by political parties.

In summary, the results of the main analysis suggest that there exist multiple, equally effective configurations of disinformation variants that cause polarization in societies. The configurations also reveal that an individual condition may be present or absent depending on how it combines with other conditions, which is indicative of causal asymmetry. Lastly, at least one variant of disinformation is present as a condition across all configurations representing a high degree of polarization.

Moving on to the analysis with a one-year lag between the condition variables and the outcome, two configurations associated with a high level of polarization (S1SH and S2SH) are comparable to those from the primary analysis (S1H and S2H), although they differ in terms of core and peripheral conditions. Configurations S3SH and S5SH reveal that disinformation by the government and political parties, together with the absence of foreign disinformation operations, results in a high degree of polarization, while the government’s role in censorship and monitoring of social media, along with hate speech in society, may be either present (S5SH) or absent (S3SH). However, when all forms of disinformation are absent, censorship and monitoring of social media combined with hate speech can still result in a high level of polarization (S4SH). On the contrary, the three configurations associated with a low degree of polarization reveal the absence or minor role of most conditions (S1SL to S3SL). However, even if governments engage in disinformation operations, monitoring of social media and censorship combined with the absence of both hate speech and disinformation by political parties ensures low levels of polarization (S4SL).

Discussion

The current study leverages complexity theory and argues that various forms of disinformation, hate speech, government monitoring of social media, and internet censorship form configurations that predict the degree of societal fragmentation. To this end, a conceptual model that serves as the basis for identifying configurations was constructed.

The primary analysis underscores the core role of disinformation by foreign adversaries and of hate rhetoric in highly polarized societies. These conditions may or may not be complemented by disinformation by the state and political parties, while censorship or surveillance is absent across the configurations (S1H-S3H). In contrast, in the supplementary analysis of high polarization, disinformation by the state and political parties emerges as more prominent, appearing in four configurations (S1SH, S2SH, S3SH, S5SH), whereas foreign disinformation appears in only two (S1SH, S2SH). This supports recent claims that domestic disinformation is emerging as a greater threat than disinformation from foreign adversaries (Bentzen, 2021), although the possibility of blurring lines between the two variants, with foreign agents engaging local actors as proxies to carry out disinformation operations (Polyakova & Fried, 2020), cannot be ruled out. In addition, the supplemental analysis reveals that hate speech remains a core condition across four configurations (S1SH, S2SH, S4SH, S5SH). A key difference is the presence of censorship and surveillance alongside hate speech in two configurations (S4SH and S5SH), which suggests that hate speech’s role in fragmenting societies may be bolstered by surveillance and censorship efforts. This aligns with recent literature highlighting that censorship may be used to suppress dissent and that hate speech laws may be misused to silence political viewpoints the state views unfavorably (Brown & Sinclair, 2019). In this regard, we contend that while opposing viewpoints are suppressed, a particular political faction may continue to engage in hate rhetoric, and the amplification of this narrative, with other views silenced by censorship efforts, may fuel polarization.

In the context of low polarization, hate rhetoric is absent across all configurations in the primary analysis. In addition, while disinformation dissemination by government and political parties is absent in two of the three configurations (S1L, S2L), polarization is contained even if the state and foreign adversaries engage in disinformation operations in the country (S3L). It is likely that polarization control is made possible by the prevalence of social media monitoring and internet censorship. This is further substantiated by the results of the supplementary analysis which highlights the role of surveillance and censorship in containing polarization despite states’ indulgence in disinformation operations (S4SL).

In conclusion, although the significance of propaganda in promoting societal polarization is well established in the literature (Neyazi, 2020), the variable influence of different forms of disinformation is rarely addressed (Ruohonen, 2021). In this context, the current study emphasizes the central role of political disinformation on social media and the importance of hate speech in polarizing societies. Also, in contrast to the commonly held belief that internet censorship and social media monitoring stifle fundamental liberties (Richards, 2013; Zappin et al., 2022), this study highlights their role in keeping polarization in check despite the spread of disinformation by a variety of actors, while also showing that these surveillance and censorship measures, in combination with hate speech, may fuel polarization in societies.

Implications

Implications for Research

The findings of the study have four implications for research. First, our study offers a novel configural conceptualization of polarization by leveraging fsQCA as a method. Recent research emphasizes the intricacy of polarization as a phenomenon, which consists of a collection of complex systems with interactions among numerous components (Kelly, 2021; Levin et al., 2021). Recognizing this sophistication, the current study, leaning on complexity theory as its theoretical foundation, uses fsQCA as a set-theoretic method to unravel the interactions among causal mechanisms that combine in several ways to produce polarized societies. In doing so, we engage in an exploratory analysis of societal polarization and offer a configurational perspective suggesting that combinations of disinformation, hate speech, censorship, and surveillance explain polarization better than any individual condition in isolation. Through this configural analysis, we highlight the asymmetric and equifinal nature of the conditions that combine to produce polarization. We also pave the way for future studies to analyze the specifics using symmetric methods, including experiments and observational research, to enhance our understanding of polarization, while asymmetric methods may be further leveraged to expand the set of conditions contributing to polarization.

Second, by demonstrating the significance of disinformation in polarizing societies into partisan factions, the study lends credence to recent claims that polarization and societal divisions are outcomes of fake news (George et al., 2021). By comparing various types of disinformation and their role in polarizing societies, the study highlights the crippling impact of the problem in such countries. Third, the study underscores the unabated proliferation of disinformation and its far-reaching consequences. Although attention to fake news soared during the 2016 US presidential elections, when the scourge was partially attributed to foreign players (Shane, 2017), the analysis reveals that the issue endures. In this regard, the primary analysis highlights the continued dominance of foreign disinformation on social media. Foreign disinformation campaigns on social media are more nuanced than they are portrayed to be (Cohen et al., 2021), and the findings give political scientists decisive grounds to prioritize attention on foreign disinformation, given its near-certain menacing ramifications. Nonetheless, the results of the supplemental analysis indicate that domestic disinformation cannot be ignored either. Fourth, the study highlights the power of government social media monitoring and internet censorship in combating disinformation campaigns on social media. While political scientists have been vocal in accusing censorship and social media monitoring of suppressing freedom of speech online (Shahbaz & Funk, 2021), such measures may serve as “necessary evils” when accessing social media can do more harm than good amidst an environment saturated with political disinformation. Censorship may help combat such campaigns (Golovchenko, 2022), and the insights from this study are a call for political scientists to broaden their horizons and take a balanced perspective on the harms and benefits of online censorship and monitoring efforts. The supplemental analysis indicates, however, that such perspectives must proceed with caution, since censorship and monitoring measures in a setting of hate speech may reinforce hate rhetoric and aggravate polarization.

Implications for IS Research

There are three specific implications of this work for IS research. First, the findings provide IS scholars with insights into the causal mechanisms leading to polarization. Recent research analyzing countermeasures to cope with increased online fragmentation, such as flagging false content and increasing exposure to opposing perspectives, has predominantly yielded conflicting results, rendering these methods ineffectual (Lin et al., 2022). In this context, our research endeavors to elucidate the intricacies surrounding polarization as a concept and emphasizes the need to evaluate countermeasures not in isolation but in conjunction with one another, taking into account the interconnections that contribute to the polarization of societies.

Second, the findings offer crucial insights to IS researchers on how information on SNSes affects users’ perceptions of and engagement with fake news. The results indicate that these perceptions have far-reaching consequences, translating into offline behaviors and creating divisive factions split along partisan lines. In particular, recent research highlights the role of opinion amplification in causing extreme polarization on social networks (Lim & Bentley, 2022), which points to a looming possibility of falsehood and polarization amplifying each other (Cinelli et al., 2021a, b; Das et al., 2023; Vicario et al., 2019), making the need to tackle falsehood crucial to breaking this vicious cycle (Das et al., 2023). In this regard, the current research prompts IS researchers to view disinformation as a nuanced phenomenon comprising several variants and sheds light on the variants’ differing effects on polarization.

Lastly, the insights offer necessary direction to IS researchers collaborating in the design of pedagogical interventions to raise awareness of political disinformation and improve digital literacy among citizens. The study reveals that such pedagogical interventions must instill an understanding of disinformation and propaganda strategies employed by diverse entities, such as foreign governments, and of their far-reaching effects in polarizing societies. In addition, these campaigns must raise awareness about hate speech and provide a balanced perspective on social media monitoring and internet censorship by highlighting both the problematic and the beneficial features of such containment measures. Furthermore, falsehood and polarization extend beyond the political sphere, with far-reaching impact in other domains such as healthcare (e.g., Vraga & Bode, 2020). In this regard, the findings of the study equip health IS researchers with important perspectives for countering the effect of falsehood on polarized views in the healthcare domain and aid the development of corrective mechanisms to reduce belief in health-related false information.

Implications for Practice

The findings of the study have four implications for practice. First, the results serve as crucial insights to business executives seeking to grow their businesses in an increasingly polarized environment. In this regard, polarization poses an economic risk with its destabilizing effects creating uncertainty for firms (David, 2022; Hawkins, 2023). While speaking out on societal issues in polarized nations carries a significant risk for businesses, they are expected to provide trustworthy information backed by evidence and to take a stance on topics such as climate change and discrimination (Hawkins, 2023). In this sense, our study provides crucial insights for executives to comprehend a climate of polarization based on the myriad of causal factors outlined above, and to recalibrate their messaging accordingly to avoid becoming mired in politics.

Second, our study provides empirical evidence of the complexities underpinning polarization as a phenomenon. The World Economic Forum’s 2023 global risks report identifies polarization as one of the most imminent and long-term threats and emphasizes the role of misinformation and disinformation in accelerating the erosion of societal cohesion (WEF, 2023; Zahidi, 2023). In light of this crisis, the findings of our study prompt policymakers to adopt a multi-faceted approach to comprehend how polarization takes shape and to formulate appropriate responses to preserve societal cohesion, not only by controlling the flow of disinformation but also by calibrating surveillance and censorship efforts to restore citizens’ trust in institutions. In addition, the findings highlight the need for policymakers to draft legislation aimed at curbing hate speech. Governments are starting to recognize the momentum hate speech gathers online and the role of social media platforms in this context (RTÉ, 2022). Given the definitive role of hate speech in polarizing societies and the role of online media in this context, the findings offer the necessary impetus for a longstanding collaboration between policymakers and social media players in curbing this menace, wherein the drawing up of standards must be accompanied by their stricter enforcement to check the spread of hate speech.

Third, the study’s findings serve as crucial insights to SNS platform players, given the immense pressure on platforms as a result of their significance as a breeding ground for disinformation, hate speech, and the exacerbation of societal divisions. As social media prevalence and everyday use increase globally (Chaffey, 2023), recent research indicates that greater social media penetration exacerbates division, leading to civil unrest and violence in societies (Faigle, 2023; Hunter et al., 2023). Amidst rising calls for SNSes to check the menace of polarization, the insights from the study offer clarity on the causal mechanisms underpinning the phenomenon and aid platform players in designing SNS platform standards and procedures for monitoring, flagging, and removal of content that threatens to advance propaganda efforts. The findings also serve as a direction for technologists to step up algorithmic efforts to curb online disinformation.

Fourth, the study offers valuable insights to political analysts seeking a delicate balance in the use of censorship and surveillance in democratic societies. For instance, recent research indicates that censorship efforts may benefit societies affected by foreign propaganda (Golovchenko, 2022), although some bans, such as the European Union’s ban on the Kremlin-backed RT to combat disinformation efforts, were met with resistance from the European Federation of Journalists over concerns that such bans could restrain freedom of expression (EFJ, 2022). In light of the need to strike a fine balance that suppresses disinformation without hampering free speech, the findings drive political analysts towards a more balanced perspective on the benefits and harms of government censorship and monitoring efforts, suggesting that such efforts must be viewed in the context of the online information environment and related disorders, and that any inferences must be recalibrated to incorporate this broader perspective.

Limitations and Future Research Directions

The study’s findings must be interpreted in light of three particular limitations. First, given the inherent challenge of obtaining primary data for a cross-country comparison, the study draws on secondary data from reputable sources. This information enables us to cast a wide net and analyze outcomes across 177 countries, which exceeds 90 percent of the world’s nations. Moreover, considering the credible nature of the data source and its stringent data collection standards, the data are deemed suitable for performing such a study. However, future research may explore the link between polarization and propaganda operations in distinct sociocultural contexts that may display specific patterns as a result of unique externalities (e.g., the impact of the 2020 US presidential elections on disinformation operations in the country). Second, fsQCA is utilized to identify configurations associated with high and low levels of polarization in societies. In this context, it is important to note that fsQCA does not identify the unique contribution of each variable to the outcome, but rather identifies how these variables combine to produce the outcome (Pappas et al., 2019). Future research may combine other statistical methods with fsQCA to acquire a deeper understanding of the data, and this integration of knowledge derived from various empirical methods may lead to new conceptual models relating polarization and disinformation efforts. In addition, researchers could extend the current analysis by incorporating more pertinent conditions into the fsQCA analysis to increase the explanatory power of the model (Fainshmidt et al., 2020), while regression-based techniques could allow researchers to control for the effects of specific variables. Lastly, while QCA helps model conjunctural causation and sheds light on causal complexity, conjunctural causation as a concept is separate from causality, and QCA cannot demonstrate causality on its own (Fainshmidt et al., 2020). Although we attempted to alleviate these concerns by incorporating a one-year lag between the conditions and the outcome, future research must investigate causal relations, including the possibility of reverse causality, using experimental methods, and draw causal inferences from time series analysis using econometric methods.

Conclusion

ICT has been extensively recognized for its role in enhancing professional and personal life, and its impact on a range of societal dimensions, such as economic development, healthcare, education, and networking, is widely acknowledged (Oh et al., 2018). However, alongside this bright side of research focusing on ICT for development, the dark side merits scrutiny. In this context, research on societal polarization and its causal patterns is gaining pace as the phenomenon spreads across nations. While past research acknowledges the role of propaganda tactics and hate speech in fueling polarization, countermeasures in the form of government monitoring of online social media and censorship efforts have drawn severe criticism from various quarters. Against the backdrop of numerous political actors indulging in propaganda, the present study is among the first to offer a nuanced perspective on how these disinformation campaigns, combined with hate speech, censorship, and surveillance efforts, culminate in polarized societies. The findings from two sets of analyses highlight the crucial role of disinformation and hate speech in dividing societies on partisan grounds. The study also offers a balanced perspective on online surveillance and censorship, which may help keep polarization in check in a climate of disinformation, although such efforts may also bolster a milieu of hate rhetoric and fuel polarization. In summary, we contend that the study unravels important insights into the phenomena of disinformation, hate speech, online censoring, and monitoring efforts, enriching IS literature on how these phenomena thrive online and contribute to societal fragmentation.

Acknowledgements

Satish Krishnan thanks the Indian Institute of Management Kozhikode’s Chair Associate Professorship for supporting this research.

Biographies

Pramukh Nanjundaswamy Vasist

is a doctoral student in the Information Systems Area at the Indian Institute of Management (IIM) Kozhikode. His research interests include fake news, social media, mobile addiction, and related behavioral and managerial issues in the context of information systems. He has published in leading journals, such as the International Journal of Hospitality Management, Communications of the Association for Information Systems, Internet Research, and the Journal of Cleaner Production. He has also published in preeminent conferences, including the Australasian Conference on Information Systems (ACIS) and the AIMS International Conference on Management. He has served as a reviewer for journals and conferences such as IIM Kozhikode Society and Management Review, the Pacific Asia Conference on Information Systems (PACIS), ACIS, the Academy of Management (AOM) Annual Conference, and the IIM World Management Conference (WMC). He won the award for outstanding paper at the twentieth AIMS International Conference on Management and the best track paper award at the GIT 2021 Conference.

Debashis Chatterjee

is the Director at the Indian Institute of Management (IIM) Kozhikode. He has taught leadership classes at Harvard University and at the Indian Institutes of Management (IIM) Calcutta, Lucknow, and Kozhikode over thirty years of academic life. He has been awarded the prestigious Fulbright Fellowship twice, for Pre-Doctoral and Post-Doctoral work at the Kennedy School of Government at Harvard University. His published work includes seventeen books, including Leading Consciously (Foreword by Peter M Senge) and Timeless Leadership (Wiley), which have been translated into several international languages. He has trained more than fifty thousand managers globally in Fortune 100 Corporations and over twenty thousand school principals and teachers. His books “Can You Teach A Zebra Some Algebra” and “Invincible Arjuna” have emerged as path-breaking books in the education space in India. He has served as leadership coach to political leaders and CEOs of major Indian organizations. He has served as Dean of an international business school in Singapore. A pioneer in the field of Asian models in leadership, Professor Chatterjee previously served a successful term (2009-2014) as the Director at IIM Kozhikode before returning for a second term in 2018. Dean and tenured Professor at IIM Lucknow and Mentor Director of IIM Amritsar (2018-19), Prof. Chatterjee is credited with transforming IIM Kozhikode from an obscure regional school into an institution of national impact and global recognition. He also served as Director General of IMI Delhi, India’s first corporate-sponsored business school, and as Independent Director on the boards of several multinational and Indian companies. He can be reached at: www.debchat.com.

Satish Krishnan

received his Ph.D. in Information Systems from the National University of Singapore. He is the Chair Associate Professor of Information Systems at the Indian Institute of Management (IIM) Kozhikode. His research interests include IT resistance, fake news and disinformation, gender gap, e-government, e-business, virtual social networks, technostress, cyberloafing, and cyberbullying. He has published in leading journals, such as the Journal of Applied Psychology, Organizational Behavior and Human Decision Processes, Information and Management, International Journal of Information Management, Journal of Association for Information Science and Technology, International Journal of Hospitality Management, Communications of the Association for Information Systems, Computers in Human Behavior, Information Systems Frontiers, Scandinavian Journal of Information Systems, Technological Forecasting and Social Change, Journal of Retailing and Consumer Services, Human Resource Development Review, Journal of Global Information Technology Management, and e-Service Journal. He is on the editorial boards of various journals such as Internet Research, Technological Forecasting and Social Change, Information Systems Frontiers, International Journal of Information Management, and Computers in Human Behavior. He also serves at conferences such as PACIS and ICIS as Track Chair, Panel and Workshop Chair, Review Coordinator, or Associate Editor. He won the Outstanding Associate Editor Award at ICIS 2017, ICIS 2019, and ICIS 2022 and the Best Reviewer Award at PACIS 2022. Recognizing his substantial contributions to management research, the Association of Indian Management Scholars awarded him the 2022 Outstanding Young Management Researcher Award.

Appendix A List of countries for both main and supplemental analysis

Afghanistan, Albania, Algeria, Angola, Argentina, Armenia, Australia, Austria, Azerbaijan, Bahrain, Bangladesh, Barbados, Belarus, Belgium, Benin, Bhutan, Bolivia, Bosnia and Herzegovina, Botswana, Brazil, Bulgaria, Burkina Faso, Burma/Myanmar, Burundi, Cambodia, Cameroon, Canada, Cape Verde, Central African Republic, Chad, Chile, China, Colombia, Comoros, Costa Rica, Croatia, Cuba, Cyprus, Czech Republic, Democratic Republic of the Congo, Denmark, Djibouti, Dominican Republic, Ecuador, Egypt, El Salvador, Equatorial Guinea, Eritrea, Estonia, Eswatini, Ethiopia, Fiji, Finland, France, Gabon, Georgia, Germany, Ghana, Greece, Guatemala, Guinea, Guinea Bissau, Guyana, Haiti, Honduras, Hong Kong, Hungary, Iceland, India, Indonesia, Iran, Iraq, Ireland, Israel, Italy, Ivory Coast, Jamaica, Japan, Jordan, Kazakhstan, Kenya, Kosovo, Kuwait, Kyrgyzstan, Laos, Latvia, Lebanon, Lesotho, Libya, Lithuania, Luxembourg, Madagascar, Malawi, Malaysia, Maldives, Mali, Malta, Mauritania, Mauritius, Mexico, Moldova, Mongolia, Montenegro, Morocco, Mozambique, Namibia, Nepal, Netherlands, New Zealand, Nicaragua, Niger, Nigeria, North Korea, North Macedonia, Norway, Oman, Pakistan, Palestine/Gaza, Palestine/West Bank, Panama, Paraguay, Peru, Philippines, Poland, Portugal, Qatar, Republic of the Congo, Romania, Russia, Rwanda, Sao Tome and Principe, Saudi Arabia, Senegal, Serbia, Seychelles, Sierra Leone, Singapore, Slovakia, Slovenia, Solomon Islands, Somalia, Somaliland, South Africa, South Korea, South Sudan, Spain, Sri Lanka, Sudan, Suriname, Sweden, Switzerland, Syria, Taiwan, Tajikistan, Tanzania, Thailand, The Gambia, Timor Leste, Togo, Trinidad and Tobago, Tunisia, Turkey, Turkmenistan, Uganda, Ukraine, United Arab Emirates, United Kingdom, United States of America, Uruguay, Uzbekistan, Vanuatu, Venezuela, Vietnam, Yemen, Zambia, Zanzibar, Zimbabwe; N = 177.

Appendix B

Declarations

Conflict of Interest

The authors declare that they have no conflict of interest.

Footnotes

1

We thank Referee #3 for this thought.

2

Simple causality refers to causality, which is additive, symmetric and unifinal in nature.

3

Liberia and Papua New Guinea did not have any data for societal polarization and hence were excluded from the analysis.

4

We thank Referee #2 for this thought.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Contributor Information

Pramukh Nanjundaswamy Vasist, Email: pramukhn02phdpt@iimk.ac.in.

Debashis Chatterjee, Email: director@iimk.ac.in, Email: dciiml@gmail.com.

Satish Krishnan, Email: satishk@iimk.ac.in.

References

  1. Aanestad, M., Kankanhalli, A., Maruping, L., Pang, M.-S., & Ram, S. (2021). Digital Technologies and Social Justice. MIS Quarterly. https://misq.umn.edu/skin/frontend/default/misq/pdf/CurrentCalls/SI_DigitalTechnologies.pdf.
  2. Ahmed, Syed Ishtiaque, Vannini, Sara, O’Neill, Jacki, & Toyama, Kentaro. (2022). Special Issue on: “Information & Communication Technologies and Development—Advances towards a better world with better technologies.” Information Systems Frontiers. https://resource-cms.springernature.com/springer-cms/rest/v1/content/20051926/data/v1. Accessed 23 Sep 2022.
  3. Alba, Davey, & Satariano, Adam. (2019, September 26). At Least 70 Countries Have Had Disinformation Campaigns, Study Finds. The New York Times. https://www.nytimes.com/2019/09/26/technology/government-disinformation-cyber-troops.html. Accessed 6 Sep 2022.
  4. Ali R, Farooq U, Arshad U, Shahzad W, Beg MO. Hate speech detection on Twitter using transfer learning. Computer Speech & Language. 2022;74:101365. doi: 10.1016/j.csl.2022.101365.
  5. Allcott H, Gentzkow M. Social Media and Fake News in the 2016 Election. Journal of Economic Perspectives. 2017;31(2):211–236. doi: 10.1257/jep.31.2.211.
  6. Anderson P. Perspective: Complexity Theory and Organization Science. Organization Science. 1999;10(3):216–232. doi: 10.1287/orsc.10.3.216.
  7. Arayankalam J, Krishnan S. Relating foreign disinformation through social media, domestic online media fractionalization, government’s control over cyberspace, and social media-induced offline violence: Insights from the agenda-building theoretical perspective. Technological Forecasting and Social Change. 2021;166:120661. doi: 10.1016/j.techfore.2021.120661.
  8. Arbatli E, Rosenberg D. United we stand, divided we rule: How political polarization erodes democracy. Democratization. 2021;28(2):285–307. doi: 10.1080/13510347.2020.1818068.
  9. Arceneaux, Kevin, & Johnson, Martin. (2013). Changing Minds or Changing Channels?: Partisan News in an Age of Choice. University of Chicago Press.
  10. Argyle LP, Pope JC. Does Political Participation Contribute to Polarization in the United States? Public Opinion Quarterly. 2022;86(3):697–707. doi: 10.1093/poq/nfac036.
  11. Arnold, J. R., Reckendorf, A., & Wintersieck, A. L. (2021). Source alerts can reduce the harms of foreign disinformation. Harvard Kennedy School Misinformation Review. https://misinforeview.hks.harvard.edu/article/source-alerts-can-reduce-the-harms-of-foreign-disinformation/.
  12. Arora SD, Singh GP, Chakraborty A, Maity M. Polarization and social media: A systematic review and research agenda. Technological Forecasting and Social Change. 2022;183:121942. doi: 10.1016/j.techfore.2022.121942.
  13. Arsan E. Killing Me Softly with His Words: Censorship and Self-Censorship from the Perspective of Turkish Journalists. Turkish Studies. 2013;14(3):447–462. doi: 10.1080/14683849.2013.833017.
  14. Au, C. H., Ho, K. K. W., & Chiu, D. K. W. (2021). The role of online misinformation and fake news in ideological polarization: Barriers, catalysts, and implications. Information Systems Frontiers, 1-24. https://link.springer.com/article/10.1007/s10796-021-10133-9.
  15. Baldassarri D, Bearman P. Dynamics of Political Polarization. American Sociological Review. 2007;72(5):784–811. doi: 10.1177/000312240707200507.
  16. Bennett, Brian. (2021, June 9). Exclusive: Iran Steps up Efforts to Sow Discord Inside U.S. Time. https://time.com/6071615/iran-disinformation-united-states/. Accessed 9 Sep 2022.
  17. Bentley CM, Nemer D, Vannini S. “When words become unclear”: Unmasking ICT through visual methodologies in participatory ICT4D. AI & Society. 2019;34(3):477–493. doi: 10.1007/s00146-017-0762-z.
  18. Bentzen, Naja. (2021). Trump’s disinformation “magaphone”: Consequences, first lessons and outlook. https://policycommons.net/artifacts/1426551/trumps-disinformation-magaphone/2040994/. Accessed 21 Dec 2022.
  19. Bilewicz, Michał, & Soral, Wiktor. (2020). Hate Speech Epidemic. The Dynamic Effects of Derogatory Language on Intergroup Relations and Political Radicalization. Political Psychology, 41(S1), 3–33.
  20. Boxell L, Gentzkow M, Shapiro JM. Greater Internet use is not associated with faster growth in political polarization among US demographic groups. Proceedings of the National Academy of Sciences. 2017;114(40):10612–10617. doi: 10.1073/pnas.1706588114.
  21. Bradshaw, Samantha, & Howard, Philip N. (2018). The global organization of social media disinformation campaigns. Journal of International Affairs, 71(1.5), 23–32.
  22. Bringula, Rex P., Catacutan-Bangit, Annaliza E., Garcia, Manuel B., Gonzales, John Paul S., & Valderama, Arlene Mae C. (2022). “Who is gullible to political disinformation?”: Predicting susceptibility of university students to fake news. Journal of Information Technology & Politics, 19(2), 165–179.
  23. Brown A, Sinclair A. The Politics of Hate Speech Laws. Routledge; 2019.
  24. Büchi M, Festic N, Latzer M. The Chilling Effects of Digital Dataveillance: A Theoretical Model and an Empirical Research Agenda. Big Data & Society. 2022;9(1):20539517211065370. doi: 10.1177/20539517211065368.
  25. Busch A, Theiner P, Breindl Y. Internet Censorship in Liberal Democracies: Learning from Autocracies? In: Schwanholz J, Graham T, Stoll P-T, editors. Managing Democracy in the Digital Age: Internet Regulation, Social Media Use, and Online Civic Engagement. Springer International Publishing; 2018. pp. 11–28.
  26. Byrne, David. (1998). Complexity Theory and the Social Sciences: An Introduction. Routledge.
  27. Byrne D. Complexity, Configurations and Cases. Theory, Culture & Society. 2005;22(5):95–111. doi: 10.1177/0263276405057194.
  28. Byrne, David, & Callaghan, Gillian. (2013). Complexity Theory and the Social Sciences: The state of the art. Routledge.
  29. Calantone RJ, Vickery SK. Introduction to the Special Topic Forum: Using Archival and Secondary Data Sources in Supply Chain Management Research. Journal of Supply Chain Management. 2010;46(4):3–11. doi: 10.1111/j.1745-493X.2010.03202.x.
  30. Casola WR, Beall JM, Nils Peterson M, Larson LR, Brent Jackson S, Stevenson KT. Political polarization of conservation issues in the era of COVID-19: An examination of partisan perspectives and priorities in the United States. Journal for Nature Conservation. 2022;67:126176. doi: 10.1016/j.jnc.2022.126176.
  31. Castaño-Pulgarín, Sergio Andrés, Suárez-Betancur, Natalia, Vega, Luz Magnolia Tilano, & López, Harvey Mauricio Herrera. (2021). Internet, social media and online hate speech. Systematic review. Aggression and Violent Behavior, 58, 101608.
  32. Chaffey, Dave. (2023, January 30). Global social media statistics research summary 2022 [June 2022]. Smart Insights. https://www.smartinsights.com/social-media-marketing/social-media-strategy/new-global-social-media-research/. Accessed 14 Feb 2023.
  33. Chan, M., Yi, J., & Kuznetsov, D. (2022). Government digital repression and political engagement: A cross-national multilevel analysis examining the roles of online surveillance and censorship. The International Journal of Press/Politics, 19401612221117104. https://journals.sagepub.com/doi/abs/10.1177/19401612221117106.
  34. Chang C-C, Lin T-H. Autocracy login: Internet censorship and civil society in the digital age. Democratization. 2020;27(5):874–895. doi: 10.1080/13510347.2020.1747051.
  35. Cinelli, Matteo, Pelicon, Andraž, Mozetič, Igor, Quattrociocchi, Walter, Novak, Petra Kralj, & Zollo, Fabiana. (2021a). Online Hate: Behavioural Dynamics and Relationship with Misinformation (arXiv:2105.14005). arXiv.
  36. Cinelli, Matteo, Pelicon, Andraž, Mozetič, Igor, Quattrociocchi, Walter, Novak, Petra Kralj, & Zollo, Fabiana. (2021b). Dynamics of online hate and misinformation. Scientific Reports, 11(1), 1.
  37. Claassen C. Does Public Support Help Democracy Survive? American Journal of Political Science. 2020;64(1):118–134. doi: 10.1111/ajps.12452.
  38. Cobbe J. Algorithmic Censorship by Social Platforms: Power and Resistance. Philosophy & Technology. 2021;34(4):739–766. doi: 10.1007/s13347-020-00429-0.
  39. Cohen, Raphael S., Beauchamp-Mustafaga, Nathan, Cheravitch, Joe, Demus, Alyssa, Harold, Scott W., Hornung, Jeffrey W., Jun, Jenny, Schwille, Michael, Treyger, Elina, & Vest, Nathan. (2021). Combating Foreign Disinformation on Social Media: Study Overview and Conclusions. RAND Corporation. https://www.rand.org/pubs/research_reports/RR4373z1.html.
  40. Cohen-Almagor R. Countering Hate on the Internet. Jahrbuch Fur Recht Und Ethik. 2014;22:431.
  41. Coppedge, M., Gerring, J., Knutsen, C. H., Lindberg, S. I., Teorell, J., Alizada, N., Altman, D., Bernhard, M., Cornell, A., Fish, M. S., Gastaldi, L., Gjerløw, H., Glynn, A., Grahn, S., Hicken, A., Hindle, G., Ilchenko, N., Kinzelbach, K., Krusell, J., … Ziblatt, D. (2022). V-Dem Dataset v12. Varieties of Democracy (V-Dem) Project. 10.23696/vdemds22.
  42. Coppedge, Michael, Gerring, John, Knutsen, Carl Henrik, Lindberg, Staffan I., Teorell, Jan, Alizada, Nazifa, Altman, David, Bernhard, Michael, Cornell, Agnes, Fish, M. Steven, Gastaldi, Lisa, Gjerløw, Haakon, Glynn, Adam, Hicken, Allen, Hindle, Garry, Ilchenko, Nina, Krusell, Joshua, Lührmann, Anna, Maerz, Seraphine F., … Ziblatt, Daniel. (2021). V-Dem Dataset v11.1. Varieties of Democracy (V-Dem) Project. 10.23696/vdemds21.
  43. Coppedge, Michael, Gerring, John, Knutsen, Carl Henrik, Lindberg, Staffan I., Teorell, Jan, Marquardt, Kyle L., Medzihorsky, Juraj, Pemstein, Daniel, Alizada, Nazifa, Gastaldi, Lisa, Hindle, Garry, Pernes, Josefine, Römer, Johannes von, Tzelgov, Eitan, Wang, Yi-ting, & Wilson, Steven. (2022). V-Dem Methodology v12. Varieties of Democracy (V-Dem) Project.
  44. Dahlgren P. In Search of the Talkative Public: Media, Deliberative Democracy and Civic Culture. Javnost - the Public. 2002;9(3):5–25. doi: 10.1080/13183222.2002.11008804.
  45. Dahlgren P. The Internet, Public Spheres, and Political Communication: Dispersion and Deliberation. Political Communication. 2005;22(2):147–162. doi: 10.1080/10584600590933160.
  46. Das A, Liu H, Kovatchev V, Lease M. The state of human-centered NLP technology for fact-checking. Information Processing & Management. 2023;60(2):103219. doi: 10.1016/j.ipm.2022.103219.
  47. David, Javier E. (2022, November 12). Political volatility is becoming an economic risk. Axios. https://www.axios.com/2022/11/12/politics-economy-midterms-risk-investors. Accessed 14 Feb 2023.
  48. Davidson, Brenna M., & Kobayashi, Tetsuro. (2022). The effect of message modality on memory for political disinformation: Lessons from the 2021 U.S capitol riots. Computers in Human Behavior, 132, 107241.
  49. de Bruin, Wändi Bruine, Saw, Htay-Wah, & Goldman, Dana P. (2020). Political polarization in US residents’ COVID-19 risk perceptions, policy preferences, and protective behaviors. Journal of Risk and Uncertainty, 61(2), 177–194.
  50. de Groot, Kristen. (2022, February 10). Russia’s disinformation campaign against the U.S. Penn Today. https://penntoday.upenn.edu/news/russias-disinformation-campaign-against-us. Accessed 9 Sep 2022.
  51. De Salle, Adam. (2020, June 16). ‘The Social Contract.’ Voice Magazine. https://www.voicemag.uk/blog/7302/the-social-contract. Accessed 18 Feb 2023.
  52. Delgosha MS, Saheb T, Hajiheydari N. Modelling the Asymmetrical Relationships between Digitalisation and Sustainable Competitiveness: A Cross-Country Configurational Analysis. Information Systems Frontiers. 2021;23(5):1317–1337. doi: 10.1007/s10796-020-10029-0.
  53. Directorate-General for Communications Networks, Content and Technology (European Commission). (2018). A multi-dimensional approach to disinformation: Report of the independent High level Group on fake news and online disinformation. Publications Office of the European Union. https://data.europa.eu/doi/10.2759/739290. Accessed 9 Sep 2022.
  54. Doctorow, Cory. (2012, March 2). Censorship is inseparable from surveillance. The Guardian. https://www.theguardian.com/technology/2012/mar/02/censorship-inseperable-from-surveillance. Accessed 14 Feb 2023.
  55. Edgell AB, Boese VA, Maerz SF, Lindenfors P, Lindberg SI. The Institutional Order of Liberalization. British Journal of Political Science. 2022;52(3):1465–1471. doi: 10.1017/S000712342100020X.
  56. EFJ. (2022, January 3). Fighting disinformation with censorship is a mistake. European Federation of Journalists. https://europeanjournalists.org/blog/2022/03/01/fighting-disinformation-with-censorship-is-a-mistake/. Accessed 26 Jan 2023.
  57. Enders AM, Armaly MT. The Differential Effects of Actual and Perceived Polarization. Political Behavior. 2019;41(3):815–839. doi: 10.1007/s11109-018-9476-2.
  58. Ertan G, Comfort L, Martin Ö. Political Polarization during Extreme Events. Natural Hazards Review. 2023;24(1):06022001. doi: 10.1061/(ASCE)NH.1527-6996.0000603.
  59. Faigle, Kevin. (2023, January 18). New study shows link between social media posts and civil disobedience, violence. Jagwire. https://jagwire.augusta.edu/new-study-shows-link-between-social-media-posts-and-civil-disobedience-violence/. Accessed 14 Feb 2023.
  60. Fainshmidt S, Witt MA, Aguilera RV, Verbeke A. The contributions of qualitative comparative analysis (QCA) to international business research. Journal of International Business Studies. 2020;51(4):455–466. doi: 10.1057/s41267-020-00313-1.
  61. Fiss PC. Building Better Causal Theories: A Fuzzy Set Approach to Typologies in Organization Research. Academy of Management Journal. 2011;54(2):393–420. doi: 10.5465/amj.2011.60263120.
  62. Frankfort-Nachmias, C., & Nachmias, D. (2007). Study guide for research methods in the social sciences (7th ed.). Worth Publications.
  63. Frasz, Sofia. (2022). Impacts of Censorship On Political Polarization. Honors Program Theses. https://scholarship.rollins.edu/honors/175.
  64. Freelon D, Wells C. Disinformation as Political Communication. Political Communication. 2020;37(2):145–156. doi: 10.1080/10584609.2020.1723755.
  65. Frenken K. Technological innovation and complexity theory. Economics of Innovation and New Technology. 2006;15(2):137–155. doi: 10.1080/10438590500141453.
  66. Furnari S, Crilly D, Misangyi VF, Greckhamer T, Fiss PC, Aguilera RV. Capturing Causal Complexity: Heuristics for Configurational Theorizing. Academy of Management Review. 2021;46(4):778–799. doi: 10.5465/amr.2019.0298.
  67. Gallacher, John D., Heerdink, Marc W., & Hewstone, Miles. (2021). Online Engagement Between Opposing Political Protest Groups via Social Media is Linked to Physical Violence of Offline Encounters. Social Media + Society, 7(1), 2056305120984445.
  68. García-Díaz, J. A., Jiménez-Zafra, S. M., García-Cumbreras, M. A., & Valencia-García, R. (2022). Evaluating feature combination strategies for hate-speech detection in Spanish using linguistic features and transformers. Complex & Intelligent Systems, 1-22. https://link.springer.com/article/10.1007/s40747-022-00693-x.
  69. George J, Gerhart N, Torres R. Uncovering the Truth about Fake News: A Research Model Grounded in Multi-Disciplinary Literature. Journal of Management Information Systems. 2021;38(4):1067–1094. doi: 10.1080/07421222.2021.1990608. [DOI] [Google Scholar]
  70. Golovchenko Y. Fighting Propaganda with Censorship: A Study of the Ukrainian Ban on Russian Social Media. The Journal of Politics. 2022;84(2):639–654. doi: 10.1086/716949. [DOI] [Google Scholar]
  71. González-Quiñones F, Machin-Mastromatteo JD. On media censorship, freedom of expression and the risks of journalism in Mexico. Information Development. 2019;35(4):666–670. doi: 10.1177/0266666919866266. [DOI] [Google Scholar]
  72. Greckhamer T, Furnari S, Fiss PC, Aguilera RV. Studying configurations with qualitative comparative analysis: Best practices in strategy and organization research. Strategic Organization. 2018;16(4):482–495. doi: 10.1177/1476127018786487. [DOI] [Google Scholar]
  73. Grossman, Shelby, & Goldstein, Josh A. (2021, January 4). How disinformation evolved in 2020. Brookings. https://www.brookings.edu/techstream/how-disinformation-evolved-in-2020/. Accessed 23 Sep 2022.
  74. Gruzd A, Roy J. Investigating Political Polarization on Twitter: A Canadian Perspective. Policy & Internet. 2014;6(1):28–45. doi: 10.1002/1944-2866.POI354. [DOI] [Google Scholar]
  75. Hai, Zuhad, & Perlman, Rebecca L. (2022). Extreme weather events and the politics of climate change attribution. Science Advances, 8(36), eabo2190. [DOI] [PMC free article] [PubMed]
  76. Hatmaker, Taylor. (2022, August 26). Meta, Twitter purge accounts spreading pro-US propaganda. TechCrunch. https://social.techcrunch.com/2022/08/25/meta-and-twitter-graphika-us-influence-campaign/. Accessed 6 Sep 2022.
  77. Hawdon, James, Ranganathan, Shyam, Leman, Scotland, Bookhultz, Shane, & Mitra, Tanushree. (2020). Social Media Use, Political Polarization, and Social Capital: Is Social Media Tearing the U.S. Apart? In Gabriele Meiselwitz (Ed.), Social Computing and Social Media. Design, Ethics, User Behavior, and Social Network Analysis (pp. 243–260). Springer International Publishing.
  78. Fischer, Sara, & Hawkins, Eleanor. (2023, January 16). Polarization eats the developed world. Axios. https://www.axios.com/2023/01/16/political-polarization-developed-world. Accessed 14 Feb 2023.
  79. Heaney MT. Protest at the center of American Politics. Journal of International Affairs. 2020;73(2):195–208. [Google Scholar]
  80. Ho J, Plewa C, Lu VN. Examining strategic orientation complementarity using multiple regression analysis and fuzzy set QCA. Journal of Business Research. 2016;69(6):2199–2205. doi: 10.1016/j.jbusres.2015.12.030. [DOI] [Google Scholar]
  81. Howard JW. Free Speech and Hate Speech. Annual Review of Political Science. 2019;22(1):93–109. doi: 10.1146/annurev-polisci-051517-012343. [DOI] [Google Scholar]
  82. Humprecht E, Esser F, Van Aelst Peter. Resilience to Online Disinformation: A Framework for Cross-National Comparative Research. The International Journal of Press/politics. 2020;25(3):493–516. doi: 10.1177/1940161219900126. [DOI] [Google Scholar]
  83. Hunter, Lance Y., Biglaiser, Glen, McGauvran, Ronald J., & Collins, Leann. (2023). The effects of social media on domestic terrorism. Behavioral Sciences of Terrorism and Political Aggression, 0(0), 1–25.
  84. Hurlburt GF. Complexity Theory: A New Paradigm for Software Integration. IT Professional. 2013;15(3):26–31. doi: 10.1109/MITP.2012.87. [DOI] [Google Scholar]
  85. Gagliardone, Iginio, Gal, Danit, Alves, Thiago, & Martinez, Gabriela. (2015). Countering online hate speech. UNESCO Publishing.
  86. Iyengar S, Sood G, Lelkes Y. Affect, Not Ideology: A Social Identity Perspective on Polarization. Public Opinion Quarterly. 2012;76(3):405–431. doi: 10.1093/poq/nfs038. [DOI] [Google Scholar]
  87. Jarvenpaa, S. L. (1991). Panning for Gold in Information Systems Research: Second-Hand Data. In Information Systems Research: Contemporary Approaches and Emergent Traditions (pp. 63–80). Elsevier Science. https://www.academia.edu/10111140/Jarvenpaa_1991_Panning_for_Gold_in_Information_Systems_Research_Second_Hand_Data. Accessed 21 Aug 2022.
  88. McCall, Jeffrey M. (2022, January 11). Ending nation’s polarization begins with understanding First Amendment. The Hill. https://thehill.com/opinion/civil-rights/589143-ending-nations-polarization-begins-with-understanding-first-amendment/. Accessed 10 Sep 2022.
  89. Kearney MW. Analyzing change in network polarization. New Media & Society. 2019;21(6):1380–1402. doi: 10.1177/1461444818822813. [DOI] [Google Scholar]
  90. Kelly, Morgan. (2021, December 9). Political polarization and its echo chambers: Surprising new, cross-disciplinary perspectives from Princeton. Princeton University. https://www.princeton.edu/news/2021/12/09/political-polarization-and-its-echo-chambers-surprising-new-cross-disciplinary. Accessed 6 Sep 2022.
  91. Kenny PD. “The Enemy of the People”: Populists and Press Freedom. Political Research Quarterly. 2020;73(2):261–275. doi: 10.1177/1065912918824038. [DOI] [Google Scholar]
  92. Knight Foundation. (2022, January 6). Free Expression in America Post-2020. Knight Foundation. https://knightfoundation.org/reports/free-expression-in-america-post-2020/. Accessed 10 Sep 2022.
  93. Kojan, Lilian, Osmanbeyoglu, Hava Melike, Burbach, Laura, Ziefle, Martina, & Calero Valdez, André. (2020). Defend Your Enemy. A Qualitative Study on Defending Political Opponents Against Hate Speech Online. In Max van Duijn, Mike Preuss, Viktoria Spaiser, Frank Takes, & Suzan Verberne (Eds.), Disinformation in Open Online Media (pp. 80–94). Springer International Publishing.
  94. Kruse LM, Norris DR, Flinchum JR. Social Media as a Public Sphere? Politics on Social Media. The Sociological Quarterly. 2018;59(1):62–84. doi: 10.1080/00380253.2017.1383143. [DOI] [Google Scholar]
  95. Lane, Justin E., McCaffree, Kevin, & Shults, F. LeRon. (2021). Is radicalization reinforced by social media censorship? (arXiv:2103.12842). arXiv.
  96. Lazer DMJ, Baum MA, Benkler Y, Berinsky AJ, Greenhill KM, Menczer F, Metzger MJ, Nyhan B, Pennycook G, Rothschild D, Schudson M, Sloman SA, Sunstein CR, Thorson EA, Watts DJ, Zittrain JL. The science of fake news. Science. 2018;359(6380):1094–1096. doi: 10.1126/science.aao2998. [DOI] [PubMed] [Google Scholar]
  97. Leonard NE, Lipsitz K, Bizyaeva A, Franci A, Lelkes Y. The nonlinear feedback dynamics of asymmetric political polarization. Proceedings of the National Academy of Sciences. 2021;118(50):e2102149118. doi: 10.1073/pnas.2102149118. [DOI] [PMC free article] [PubMed] [Google Scholar]
  98. Levendusky M. Partisan Media Exposure and Attitudes Toward the Opposition. Political Communication. 2013;30(4):565–581. doi: 10.1080/10584609.2012.737435. [DOI] [Google Scholar]
  99. Levin SA, Milner HV, Perrings C. The dynamics of political polarization. Proceedings of the National Academy of Sciences. 2021;118(50):e2116950118. doi: 10.1073/pnas.2116950118. [DOI] [PMC free article] [PubMed] [Google Scholar]
  100. Lim, Soo Ling, & Bentley, Peter J. (2022). Opinion amplification causes extreme polarization in social networks. Scientific Reports, 12(1), 1. [DOI] [PMC free article] [PubMed]
  101. Lin, Fangshi, Teo, Hock-Hai, & Zhai, Yingda. (2022). Breaking Online Tribalism: Motivated Reasoning, Empathy and Polarization. ICIS 2022 Proceedings. https://aisel.aisnet.org/icis2022/soc_impact_is/soc_impact_is/5.
  102. Liu, S., & Wang, D. (2021). Censorship: State control of expression. In The Routledge Handbook of Law and Society (pp. 86–89). Routledge. https://www.taylorfrancis.com/chapters/edit/10.4324/9780429293306-15/censorship-sida-liu-di-wang.
  103. Liu Y, Mezei J, Kostakos V, Li H. Applying configurational analysis to IS behavioural research: A methodological alternative for modelling combinatorial complexities. Information Systems Journal. 2017;27(1):59–89. doi: 10.1111/isj.12094. [DOI] [Google Scholar]
  104. Lyon, David. (2007). Surveillance Studies: An Overview. Polity.
  105. MacCabe JH, Brébion G, Reichenberg A, Ganguly T, McKenna PJ, Murray RM, David AS. Superior intellectual ability in schizophrenia: Neuropsychological characteristics. Neuropsychology. 2012;26(2):181–190. doi: 10.1037/a0026376. [DOI] [PubMed] [Google Scholar]
  106. Margetts, Helen. (2013). The Internet and Democracy. In William H. Dutton (Ed.), The Oxford Handbook of Internet Studies. Oxford University Press.
  107. Marwick A. The Public Domain: Surveillance in Everyday Life. Surveillance & Society. 2012;9(4):378–393. doi: 10.24908/ss.v9i4.4342. [DOI] [Google Scholar]
  108. Matamoros-Fernández A, Farkas J. Racism, Hate Speech, and Social Media: A Systematic Review and Critique. Television & New Media. 2021;22(2):205–224. doi: 10.1177/1527476420982230. [DOI] [Google Scholar]
  109. McCoy J, Rahman T, Somer M. Polarization and the Global Crisis of Democracy: Common Patterns, Dynamics, and Pernicious Consequences for Democratic Polities. American Behavioral Scientist. 2018;62(1):16–42. doi: 10.1177/0002764218759576. [DOI] [Google Scholar]
  110. Mechkova, V., Pemstein, D., Seim, B., & Wilson, S. (2019). Digital Society Project Working Paper No. 1.
  111. Meneguelli, G., & Ferré-Pavia, C. (2021). Hate speech and social polarization in Brazil. In Hate Speech and Polarization in Participatory Society (p. 163). Routledge. https://www.taylorfrancis.com/chapters/oa-edit/10.4324/9781003109891-13/hate-speech-socialpolarization-brazil-gisella-meneguelli-carme-ferr%C3%A9-pavia.
  112. Merali Y. Complexity and Information Systems: The Emergent Domain. Journal of Information Technology. 2006;21(4):216–228. doi: 10.1057/palgrave.jit.2000081. [DOI] [Google Scholar]
  113. Meske, C., & Bunde, E. (2022). Design principles for user interfaces in AI-Based decision support systems: The case of explainable hate speech detection. Information Systems Frontiers, 1–31. https://link.springer.com/article/10.1007/s10796-021-10234-5.
  114. Muñoz P, Kibler E, Mandakovic V, Amorós JE. Local entrepreneurial ecosystems as configural narratives: A new way of seeing and evaluating antecedents and outcomes. Research Policy. 2022;51(9):104065. doi: 10.1016/j.respol.2020.104065. [DOI] [Google Scholar]
  115. Neyazi TA. Digital propaganda, political bots and polarized politics in India. Asian Journal of Communication. 2020;30(1):39–57. doi: 10.1080/01292986.2019.1699938. [DOI] [Google Scholar]
  116. Noelle-Neumann, Elisabeth. (1993). The Spiral of Silence: Public Opinion--Our Social Skin (2nd ed.). University of Chicago Press. https://press.uchicago.edu/ucp/books/book/chicago/S/bo3684069.html. Accessed 10 Sep 2022.
  117. Oh, Wonseok, Acquisti, Alessandro, & Sia, Choon Ling. (2018). ICT Challenges and Opportunities in Building a “Bright Society.” Journal of the Association for Information Systems, 19(2). https://aisel.aisnet.org/jais/vol19/iss2/4.
  118. Olan, F., Jayawickrama, U., Arakpogun, E. O., Suklan, J., & Liu, S. (2022). Fake news on social media: The Impact on Society. Information Systems Frontiers, 1–16. https://link.springer.com/article/10.1007/s10796-022-10242-z. [DOI] [PMC free article] [PubMed]
  119. Olya HGT, Mehran J. Modelling tourism expenditure using complexity theory. Journal of Business Research. 2017;75:147–158. doi: 10.1016/j.jbusres.2017.02.015. [DOI] [Google Scholar]
  120. O’Regan, Catherine, & Theil, Stefan. (2020, February 26). Hate speech regulation on social media: An intractable contemporary challenge. Research Outreach. https://researchoutreach.org/articles/hate-speech-regulation-social-media-intractable-contemporary-challenge/. Accessed 13 Feb 2023.
  121. Ovide, Shira. (2022, August 4). Classic Internet Censorship. The New York Times. https://www.nytimes.com/2022/08/04/technology/internet-censorship.html. Accessed 6 Sep 2022.
  122. Oxford Internet Institute. (2021, January 13). Social media manipulation by political actors now an industrial scale problem prevalent in over 80 countries – annual Oxford report. https://www.oii.ox.ac.uk/news-events/news/social-media-manipulation-by-political-actors-now-an-industrial-scale-problem-prevalent-in-over-80-countries-annual-oxford-report. Accessed 6 Sep 2022.
  123. Papacharissi Z. Democracy online: Civility, politeness, and the democratic potential of online political discussion groups. New Media & Society. 2004;6(2):259–283. doi: 10.1177/1461444804041444. [DOI] [Google Scholar]
  124. Pappas, I., Giannakos, M., & Sampson, D. (2016a). Making Sense of Learning Analytics with a Configurational Approach (pp. 42–52). https://ceur-ws.org/Vol-1579/paper7.pdf.
  125. Pappas IO, Giannakos MN, Sampson DG. Fuzzy set analysis as a means to understand users of 21st-century learning systems: The case of mobile learning and reflections on learning analytics research. Computers in Human Behavior. 2019;92:646–659. doi: 10.1016/j.chb.2017.10.010. [DOI] [Google Scholar]
  126. Pappas IO, Kourouthanassis PE, Giannakos MN, Chrissikopoulos V. Explaining online shopping behavior with fsQCA: The role of cognitive and affective perceptions. Journal of Business Research. 2016;69(2):794–803. doi: 10.1016/j.jbusres.2015.07.010. [DOI] [Google Scholar]
  127. Pappas IO, Papavlasopoulou S, Mikalef P, Giannakos MN. Identifying the combinations of motivations and emotions for creating satisfied users in SNSs: An fsQCA approach. International Journal of Information Management. 2020;53:102128. doi: 10.1016/j.ijinfomgt.2020.102128. [DOI] [Google Scholar]
  128. Pappas IO, Woodside AG. Fuzzy-set Qualitative Comparative Analysis (fsQCA): Guidelines for research practice in Information Systems and marketing. International Journal of Information Management. 2021;58:102310. doi: 10.1016/j.ijinfomgt.2021.102310. [DOI] [Google Scholar]
  129. Parthasarathy B, Ramamritham K. Guest editorial: Information and communications technology for development. Information Systems Frontiers. 2009;11(2):103–104. doi: 10.1007/s10796-009-9163-2. [DOI] [Google Scholar]
  130. Patkós V. Measuring partisan polarization with partisan differences in satisfaction with the government: The introduction of a new comparative approach. Quality & Quantity. 2023;57(1):39–57. doi: 10.1007/s11135-022-01350-8. [DOI] [Google Scholar]
  131. Paz MA, Montero-Díaz J, Moreno-Delgado A. Hate Speech: A Systematized Review. SAGE Open. 2020;10(4):2158244020973022. doi: 10.1177/2158244020973022. [DOI] [Google Scholar]
  132. Pemstein, D., Marquardt, K. L., Tzelgov, E., Wang, Y., Medzihorsky, J., Krusell, J., Miri, F., & von Römer, J. (2021). “The V-Dem Measurement Model: Latent Variable Analysis for Cross-National and Cross-Temporal Expert-Coded Data”. V-Dem Working Paper No. 21 (6th ed.). University of Gothenburg: Varieties of Democracy Institute.
  133. Pemstein, Daniel, Marquardt, Kyle L., Tzelgov, Eitan, Wang, Yi-ting, Medzihorsky, Juraj, Krusell, Joshua, Miri, Farhad, & von Römer, Johannes. (2022). “The V-Dem Measurement Model: Latent Variable Analysis for Cross-National and Cross-Temporal Expert-Coded Data”. V-Dem Working Paper No. 21 (7th ed.). University of Gothenburg: Varieties of Democracy Institute.
  134. Pentney, Katie. (2022). Tinker, Tailor, Twitter, Lie: Government Disinformation and Freedom of Expression in a Post-Truth Era. Human Rights Law Review, 22(2), ngac009.
  135. Piazza JA. Politician hate speech and domestic terrorism. International Interactions. 2020;46(3):431–453. doi: 10.1080/03050629.2020.1739033. [DOI] [Google Scholar]
  136. Piazza JA. Fake news: The effects of social media disinformation on domestic terrorism. Dynamics of Asymmetric Conflict. 2022;15(1):55–77. doi: 10.1080/17467586.2021.1895263. [DOI] [Google Scholar]
  137. Polyakova, Alina, & Fried, Daniel. (2020, December 2). Democratic Offense Against Disinformation. CEPA. https://cepa.org/comprehensive-reports/democratic-offense-against-disinformation/. Accessed 21 Dec 2022.
  138. Qureshi, I., Bhatt, B., Gupta, S., & Tiwari, A. A. (2020). Causes, symptoms and consequences of social media induced polarization (SMIP). Information Systems Journal, 11. https://onlinelibrary.wiley.com/pb-assets/assets/13652575/ISJ_SMIP_CFP.pdf.
  139. Qureshi, Israr, Bhatt, Babita, Gupta, Samrat, & Tiwari, Amit Anand. (2022). Introduction to the Role of Information and Communication Technologies in Polarization. In Israr Qureshi, Babita Bhatt, Samrat Gupta, & Amit Anand Tiwari (Eds.), Causes and Symptoms of Socio-Cultural Polarization: Role of Information and Communication Technologies (pp. 1–23). Springer.
  140. Ragin, C., & Davey, S. (2016). Fuzzy-Set/Qualitative Comparative Analysis (3.0).
  141. Ragin CC. Fuzzy-Set Social Science. University of Chicago Press; 2000. [Google Scholar]
  142. Ragin, Charles C. (2008). Redesigning Social Inquiry: Fuzzy Sets and Beyond. University of Chicago Press. https://press.uchicago.edu/ucp/books/book/chicago/R/bo5973952.html.
  143. Rasoolimanesh SM, Ringle CM, Sarstedt M, Olya H. The combined use of symmetric and asymmetric approaches: Partial least squares-structural equation modeling and fuzzy-set qualitative comparative analysis. International Journal of Contemporary Hospitality Management. 2021;33(5):1571–1592. doi: 10.1108/IJCHM-10-2020-1164. [DOI] [Google Scholar]
  144. Richards, Diana (Ed.). (2000). Political Complexity: Nonlinear Models of Politics. University of Michigan Press.
  145. Richards NM. The dangers of surveillance. Harvard Law Review. 2013;126(7):1934–1965. [Google Scholar]
  146. Riemer K, Peter S. Algorithmic audiencing: Why we need to rethink free speech on social media. Journal of Information Technology. 2021;36(4):409–426. doi: 10.1177/02683962211013358. [DOI] [Google Scholar]
  147. Rihoux, B., & Ragin, C. (2008). Configurational comparative methods: Qualitative comparative analysis (QCA) and related techniques. Sage Publications. https://methods.sagepub.com/book/configurationalcomparative-methods.
  148. Rosas, O. V., & Serrano-Puche, J. (2018). News media and the emotional public sphere—Introduction. International Journal of Communication, 12(9). https://ijoc.org/index.php/ijoc/article/view/6785.
  149. RTÉ. (2022). Biden says hate speech given “too much oxygen” online. RTÉ. https://www.rte.ie/news/us/2022/0916/1323692-biden-hate-speech/. Accessed 18 Sep 2022.
  150. Ruohonen, Jukka. (2021). A Comparative Study of Online Disinformation and Offline Protests (arXiv:2106.11000). arXiv.
  151. Saha, Manika, Varghese, Delvin, Bartindale, Tom, Thilsted, Shakuntala Haraksingh, Ahmed, Syed Ishtiaque, & Olivier, Patrick. (2022). Towards Sustainable ICTD in Bangladesh: Understanding the Program and Policy Landscape and Its Implications for CSCW and HCI. Proceedings of the ACM on Human-Computer Interaction, 6(CSCW1), 126:1–126:31.
  152. El Sawy OA, Malhotra A, Park Y, Pavlou PA. Research Commentary—Seeking the Configurations of Digital Ecodynamics: It Takes Three to Tango. Information Systems Research. 2010;21(4):835–848. doi: 10.1287/isre.1100.0326. [DOI] [Google Scholar]
  153. Schimpfössl E, Yablokov I, Zeveleva O, Fedirko T, Bajomi-Lazar P. Self-censorship narrated: Journalism in Central and Eastern Europe. European Journal of Communication. 2020;35(1):3–11. doi: 10.1177/0267323119897801. [DOI] [Google Scholar]
  154. Schneider, Carsten Q., & Wagemann, Claudius. (2012). Set-Theoretic Methods for the Social Sciences: A Guide to Qualitative Comparative Analysis. Cambridge University Press.
  155. Serrano-Puche J. Digital disinformation and emotions: Exploring the social risks of affective polarization. International Review of Sociology. 2021;31(2):231–245. doi: 10.1080/03906701.2021.1947953. [DOI] [Google Scholar]
  156. Shahbaz, Adrian, & Funk, Allie. (2019). Social Media Surveillance. Freedom House. https://freedomhouse.org/report/freedom-on-the-net/2019/the-crisis-of-social-media/social-media-surveillance. Accessed 13 Feb 2023.
  157. Shahbaz, Adrian, & Funk, Allie. (2021). Freedom on the Net 2021: The Global Drive to Control Big Tech. Freedom House. https://freedomhouse.org/report/freedom-net/2021/global-drive-control-big-tech. Accessed 18 Sep 2022.
  158. Shane, Scott. (2017, September 7). The Fake Americans Russia Created to Influence the Election. The New York Times. https://www.nytimes.com/2017/09/07/us/politics/russia-facebook-twitter-election.html. Accessed 18 Sep 2022.
  159. Shu K. Combating disinformation on social media: A computational perspective. BenchCouncil Transactions on Benchmarks, Standards and Evaluations. 2022;2(1):100035. doi: 10.1016/j.tbench.2022.100035. [DOI] [Google Scholar]
  160. Soares, Felipe Bonow, & Recuero, Raquel. (2021). Hashtag Wars: Political Disinformation and Discursive Struggles on Twitter Conversations During the 2018 Brazilian Presidential Campaign. Social Media + Society, 7(2), 20563051211009070.
  161. Sparkman, Gregg, Geiger, Nathan, & Weber, Elke U. (2022). Americans experience a false social reality by underestimating popular climate policy support by nearly half. Nature Communications, 13(1), 1. [DOI] [PMC free article] [PubMed]
  162. Staples, William G. (2014). Everyday Surveillance: Vigilance and Visibility in Postmodern Life. Rowman & Littlefield.
  163. Stewart, Alexander J., McCarty, Nolan, & Bryson, Joanna J. (2020). Polarization under rising inequality and economic decline. Science Advances, 6(50), eabd4201. [DOI] [PMC free article] [PubMed]
  164. Stokel-Walker, Chris. (2019, October 1). The EU doesn’t really have a plan to stop its elections being hacked. Wired UK. https://www.wired.co.uk/article/eu-parliament-elections-hacking. Accessed 22 Dec 2022.
  165. Stuenkel, Oliver. (2021, February 17). Brazil’s Polarization and Democratic Risks—Divisive Politics and Democratic Dangers in Latin America. Carnegie Endowment for International Peace. https://carnegieendowment.org/2021/02/17/brazil-s-polarization-and-democratic-risks-pub-83783. Accessed 6 Sep 2022.
  166. Sunstein, Cass R. (2018). #Republic: Divided Democracy in the Age of Social Media. Princeton University Press. https://press.princeton.edu/books/paperback/9780691180908/republic. Accessed 10 Sep 2022.
  167. Tandoc EC, Lim ZW, Ling R. Defining “Fake News”: A typology of scholarly definitions. Digital Journalism. 2018;6(2):137–153. doi: 10.1080/21670811.2017.1360143. [DOI] [Google Scholar]
  168. Tennant A, Penta M, Tesio L, Grimby G, Thonnard J-L, Slade A, Lawton G, Simone A, Carter J, Lundgren-Nilsson Å, Tripolski M, Ring H, Biering-Sørensen F, Marincek Č, Burger H, Phillips S. Assessing and Adjusting for Cross-Cultural Validity of Impairment and Activity Limitation Scales through Differential Item Functioning within the Framework of the Rasch Model: The PRO-ESOR Project. Medical Care. 2004;42(1):I37–I48. doi: 10.1097/01.mlr.0000103529.63132.77. [DOI] [PubMed] [Google Scholar]
  169. The Express Tribune. (2022, September 4). Nobel laureates call for action on online disinformation. The Express Tribune. http://tribune.com.pk/story/2374780/nobel-laureates-call-for-action-on-online-disinformation. Accessed 6 Sep 2022.
  170. Timofeev, I. N. (2014). World Order Or World Anarchy? A Look at the Modern System of International Relations (p. 44). https://russiancouncil.ru/upload/RIAC_WP_18_EN.pdf.
  171. Törnberg P. How digital media drive affective polarization through partisan sorting. Proceedings of the National Academy of Sciences. 2022;119(42):e2207159119. doi: 10.1073/pnas.2207159119. [DOI] [PMC free article] [PubMed] [Google Scholar]
  172. Trottier, Daniel. (2016). Social Media as Surveillance: Rethinking Visibility in a Converging World. Routledge.
  173. United Nations. (2023, January 28). Hate speech: A growing, international threat | UN News. https://news.un.org/en/story/2023/01/1132597. Accessed 13 Feb 2023.
  174. Vese D. Governing Fake News: The Regulation of Social Media and the Right to Freedom of Expression in the Era of Emergency. European Journal of Risk Regulation. 2022;13(3):477–513. doi: 10.1017/err.2021.48. [DOI] [Google Scholar]
  175. Vicario, Michela Del, Quattrociocchi, Walter, Scala, Antonio, & Zollo, Fabiana. (2019). Polarization and Fake News: Early Warning of Potential Misinformation Targets. ACM Transactions on the Web, 13(2), 10:1–10:22.
  176. Vraga EK, Bode L. Correction as a Solution for Health Misinformation on Social Media. American Journal of Public Health. 2020;110(S3):S278–S280. doi: 10.2105/AJPH.2020.305916. [DOI] [PMC free article] [PubMed] [Google Scholar]
  177. Waldron J. The Harm in Hate Speech. Harvard University Press; 2012. [Google Scholar]
  178. WEF. (2023, January 11). Global Risks Report 2023. World Economic Forum. https://www.weforum.org/reports/global-risks-report-2023/digest/. Accessed 14 Feb 2023.
  179. Westcott H, Owen S. Friendship and trust in the social surveillance network. Surveillance & Society. 2013;11(3):311–323. doi: 10.24908/ss.v11i3.4502. [DOI] [Google Scholar]
  180. Woodside AG. Embrace•perform•model: Complexity theory, contrarian case analysis, and multiple realities. Journal of Business Research. 2014;67(12):2495–2503. doi: 10.1016/j.jbusres.2014.07.006. [DOI] [Google Scholar]
  181. Woodside AG. The Complexity Turn: Cultural, Management, and Marketing Applications. Springer; 2017. [Google Scholar]
  182. Woszczynski, A. B., & Whitman, M. E. (2004). The problem of common method variance in IS research. In The Handbook of Information Systems Research (pp. 66–77). IGI Global. https://www.igiglobal.com/gateway/chapter/30343.
  183. Zahidi, Saadia. (2023, January 12). Solving Global Crises Requires a Return to Basics. Time. https://time.com/6246974/worlds-biggest-crises-protect-basic-needs/. Accessed 14 Feb 2023.
  184. Zappin A, Malik H, Shakshuki EM, Dampier DA. YouTube Monetization and Censorship by Proxy: A Machine Learning Prospective. Procedia Computer Science. 2022;198:23–32. doi: 10.1016/j.procs.2021.12.207. [DOI] [Google Scholar]
  185. Zhai Y, Liu Y, Yang S, Song C. A dual process model of popular support for Internet surveillance. Personality and Individual Differences. 2022;187:111409. doi: 10.1016/j.paid.2021.111409. [DOI] [Google Scholar]
  186. Zhu, Gaoming. (2019). Polarized China: The Effect of Media Censorship on People’s Ideology. Res Publica - Journal of Undergraduate Research, 24(1). https://digitalcommons.iwu.edu/respublica/vol24/iss1/12.
  187. Zimdars, Melissa, & Mcleod, Kembrew. (2020). Fake News: Understanding Media and Misinformation in the Digital Age. MIT Press.
