Abstract
Information and communication technologies hold immense potential to enhance our lives and societal well-being. However, digital spaces have also emerged as a fertile ground for fake news campaigns and hate speech, aggravating polarization and posing a threat to societal harmony. Although this dark side is acknowledged in the literature, the complexity of polarization as a phenomenon, coupled with the socio-technical nature of fake news, necessitates a novel approach to unravel its intricacies. In light of this complexity, the current study employs complexity theory and a configurational approach to investigate the impact of diverse disinformation campaigns and hate speech in polarizing societies across 177 countries through a cross-country investigation. The results demonstrate the definitive role of disinformation and hate speech in polarizing societies. The findings also offer a balanced perspective on internet censorship and social media monitoring as necessary evils to combat the disinformation menace and control polarization, but suggest that such efforts may lend support to a milieu of hate speech that fuels polarization. Implications for theory and practice are discussed.
1 Introduction
Information and communication technologies (ICT) hold immense potential for people and societal advancement in a multitude of avenues (Bentley et al., 2019; Parthasarathy & Ramamritham, 2009; Saha et al., 2022). However, on the flipside, ICTs also constitute a threat to society (Ahmed et al., 2022), and digital spaces have been accused of providing a fertile ground for fake news campaigns and online hate speech (Aanestad et al., 2021; Castaño-Pulgarín et al., 2021), while exacerbating societal polarization (Törnberg, 2022). Polarization is a social phenomenon characterized by the fragmentation of society into antagonistic factions with vehemently opposed values and identities that impede cooperation and the pursuit of a common good (Stewart et al., 2020). The threat is significant, and democracies globally are under siege and reeling from the impact of polarized societies. According to the latest World Economic Forum study on global risks, societal polarization is not just one of the world’s top 10 present concerns, but also a long-term threat over the next decade (Zahidi, 2023). The phenomenon has impeded pandemic response in countries (Stuenkel, 2021), slowed consensus on critical global issues such as climate change (Sparkman et al., 2022), and continues to challenge the resilience of societies, resulting in a catastrophic loss of diversity in the social fabric (Kelly, 2021). Not only does polarization increase the likelihood of violence in societies (Piazza, 2022), but violent demonstrations may exacerbate polarization and divide individuals along partisan lines (De Salle, 2020). These devastating effects of polarization have catapulted it to a top priority for scholars across disciplines seeking an understanding of the causes of this social evil (Stewart et al., 2020).
Information technology and social media, in particular, play a crucial role in fostering polarization (Arora et al., 2022; Qureshi et al., 2022), with a spillover effect on offline violence and protests (Gallacher et al., 2021). These platforms are not merely spaces of public discourse, but also raise concerns over free speech, particularly when they contribute to the online amplification of the most radical and polarizing content (Riemer & Peter, 2021). Debates on free speech on platforms predominantly revolve around disinformation and hate speech (Riemer & Peter, 2021), both of which are considered to exacerbate polarization (Meneguelli & Ferré-Pavia, 2021; WEF, 2023). While governments and platforms pursue content moderation and censorship to combat this menace of fake news and hate speech (Riemer & Peter, 2021; Vese, 2022), such practices are regarded as a threat to free speech (Vese, 2022) and contribute to polarization (Frasz, 2022).
The role of fake news and hate speech in polarizing societies is widely acknowledged in prior literature (Au et al., 2021; Piazza, 2020). These crippling effects are accompanied by rising calls for action on online disinformation, which threatens to polarize society and destroy democratic life (The Express Tribune, 2022). Fake news as a term gained prominence during the U.S. presidential elections of 2016 (Allcott & Gentzkow, 2017). Ever since, the issue has persisted and poses a grave threat to the societal fabric (Olan et al., 2022). The term overlaps with other forms of information disorder such as misinformation and disinformation (Lazer et al., 2018). While the former refers to false or misleading information (Lazer et al., 2018), the latter is the deliberate fabrication and dissemination of falsehood with an intent to deceive (Zimdars & Mcleod, 2020). In contrast to falsehood, hate speech refers to the use of offensive language directed against a specific group of individuals who share a common characteristic (Castaño-Pulgarín et al., 2021).
Politics remains a favored breeding ground for disinformation and hate speech (Freelon & Wells, 2020; Piazza, 2020). While social media and online platforms have played a pivotal role in amplifying the spread of disinformation and hate speech (Meske & Bunde, 2022; Olan et al., 2022), recent reports reveal the rampant misuse of these platforms by governments and political parties in over eighty countries engaged in political disinformation campaigns (Alba & Satariano, 2019; Oxford Internet Institute, 2021). Likewise, there are global concerns over hate speech "super-charged by the internet" in allowing falsehood and conspiracy theories to proliferate and provoke offline violence (United Nations, 2023).
While social networking sites (SNSes) have started to ramp up efforts to detect and curb this menace (Hatmaker, 2022; O’Regan & Theil, 2020), governments have also stepped up measures to actively monitor social media using sophisticated technology that enables them to monitor citizens’ behaviors on these platforms (Shahbaz & Funk, 2019). Some governments have pushed the envelope further by imposing censorship on the internet (Ovide, 2022). Such measures challenge the freedom of expression online (González-Quiñones & Machin-Mastromatteo, 2019) and contribute to polarization and political animosity among citizens (Frasz, 2022).
In light of the fact that the discourse on free speech online is embroiled in a conflict with surveillance and censorship efforts, even as disinformation and hate speech proliferate on these platforms, we assert that these core factors in the free speech debate merit exploration regarding their combinatorial effect on polarization as a phenomenon. This significance stems from the fact that extant literature has predominantly engaged in linear investigations of polarization as a phenomenon. In this context, several studies have analyzed the relationship between polarization and factors such as internet and social media usage (Boxell et al., 2017), democratic erosion and media censorship (Arbatli & Rosenberg, 2021), media exposure (Hawdon et al., 2020), fake news (Piazza, 2022), and hate speech (Piazza, 2020). However, the vast majority of studies have examined factors in isolation and assumed linear relationships between phenomena (Footnote 1). Political processes are predominantly non-linear in nature (Richards & Doyle, 2000; Timofeev, 2014), and recent literature emphasizes the importance of non-linearity in appreciating aspects that might explain and alleviate polarization (Kelly, 2021; Leonard et al., 2021).
Polarization is characterized by asymmetry and constitutes a collection of complex, ever-evolving systems, the complexity of which necessitates a novel approach to unravel its intricacies (Kelly, 2021; Leonard et al., 2021; Levin et al., 2021). In addition, the socio-technical character of fake news makes standard methods of investigation intrinsically complex (Au et al., 2021). In light of this complexity, this paper acknowledges the phenomenon's sophistication and leans on complexity theory and configurational analysis in an effort to decipher the causal pathways that influence societal polarization. In particular, this study seeks to illustrate how various sources of political disinformation, hate speech, social media monitoring efforts, and online censorship influence the degree of polarization in societies. Instead of focusing on the main effects between polarization and its causes, the current study seeks to identify the configurations that explain societal polarization. Consequently, this study addresses the following research question:
RQ: What configurations of political disinformation, hate speech, social media monitoring, and internet censorship lead to polarization in societies?
This configurational narrative towards the conceptualization of polarization lends credence to recent assertions regarding the impact of fake news in fracturing societies and intensifying polarization. Specifically, the study's findings point to a balanced view of surveillance and censorship by governments, which may be useful in combating disinformation campaigns on social media and controlling polarization, although such efforts may lend support to a milieu of hate speech that fuels polarization. Additionally, the identification of configurations as causal pathways to polarization should offer crucial insights to social media platform players in recognizing the varying manifestations of disinformation, hate speech, and monitoring efforts to deal with societal polarization, while the findings of this study should also assist political scientists and complex systems theorists with a more nuanced understanding of the dynamics of polarization in society.
The paper is structured as follows. In the next section, we review the literature on polarization, political disinformation, hate speech, internet censorship, and social media monitoring and develop the conceptual model based on complexity theory, which serves as the theoretical foundation for the study. Section 3 discusses the research methodology and measures utilized for analysis. Section 4 describes the process of data analysis using the configurational approach. Section 5 analyzes the empirical results, and Section 6 discusses the implications for theory and practice. Section 7 highlights limitations alongside discussing future research directions before concluding in Section 8.
2 Theoretical Background and Conceptual Framework
This section reviews literature related to the key constructs (see Table 11 in the Appendix for details) and discusses the theoretical foundations for the study.
2.1 Polarization of Society
Polarization refers to a widening gulf concerning perspectives on various political issues between societal groups (Enders & Armaly, 2019) and represents the distance between opposing political orientations (Kearney, 2019). While some degree of polarization is observed in the majority of pluralistic democracies and can encourage political engagement, a high degree of polarization can lead to estrangement between social groupings and impede democratic dialogue (McCoy et al., 2018). In case of extreme polarization, a society’s typical diversity of distinctions rapidly aligns along a single axis, and individuals increasingly view and describe politics and society in terms of split factions (McCoy et al., 2018). Extreme polarization tends to have a large affective component (Iyengar et al., 2012), and poses a grievous threat to democracy by eroding social cohesiveness and disrupting political stability (McCoy et al., 2018).
Polarization poses challenges that extend well beyond its impacts on political stability and democratic discourse. In recent years, polarization is believed to have hindered the public health response in countries (de Bruin et al., 2020), abetted by disinformation surrounding the pandemic (Casola et al., 2022). In a similar vein, polarization has resulted in divergent perspectives on major issues such as climate change (Hai & Perlman, 2022), domestic terrorism (Piazza, 2020), and an increased ubiquity of protests (Heaney, 2020). In an era of deep polarization, a central question concerns the contribution of fake news to polarization in society (Qureshi et al., 2020), while hate speech with partisan leanings is also believed to aggravate polarization (Piazza, 2020). In a similar vein, recent studies also examine instances of social media censorship in countries and highlight that countries that engage in high levels of censorship tend to experience a high degree of polarization among their citizens (Frasz, 2022). Nevertheless, despite the fact that censorship and surveillance are closely related, varying levels of these phenomena can either promote or inhibit political participation (Chan et al., 2022), a phenomenon that is closely linked to polarization in the political realm (Argyle & Pope, 2022).
Extant literature has focused primarily on linear investigations into the relationship between polarization and factors such as media censorship (Arbatli & Rosenberg, 2021), fake news (Piazza, 2022), and hate speech (Piazza, 2020). However, these inquiries have predominantly dealt with components in isolation and assumed linear relationships. Political processes are characterized by non-linearity (Richards & Doyle, 2000; Timofeev, 2014), and recent literature underscores the importance of non-linearity in comprehending factors that can explain and combat polarization (Kelly, 2021; Leonard et al., 2021). In view of the complexity of polarization as a phenomenon, understanding its intricacies necessitates an appreciation of its asymmetric nature and a perspective of it as a collection of complex systems comprising interactions between various components (Kelly, 2021; Leonard et al., 2021; Levin et al., 2021). Towards addressing this complexity, we first delve deeper into each of the constructs influencing polarization in the following sections.
2.2 Political Disinformation
Disinformation is regarded as false, inaccurate, or misleading information intentionally presented, disseminated, and exploited with a malicious purpose (Directorate-General for Communications Networks, 2018). The role of social media in propagating disinformation is widely acknowledged in the literature (Grossman & Goldstein, 2021; Shu, 2022). Research suggests that online falsehood has resulted in ideologically polarized societies (Au et al., 2021). While producers of disinformation seek to alter readers' perceptions of issues and influence their opinions and actions (Allcott & Gentzkow, 2017), the production of polarizing news also serves commercial interests and helps generate revenue (Tandoc et al., 2018). Although polarized audiences may be motivated to watch news of a partisan nature (Arceneaux & Johnson, 2013), exposure to such partisan news exacerbates the polarization that already prevails (Levendusky, 2013). Disinformation may exploit affective triggers to attract the reader's attention, using emotionally provocative content to elicit strong feelings of outrage in users and subsequent virality on digital platforms (Serrano-Puche, 2021). Such mobilization of strong emotions is also associated with the rise of ideological polarization in the public sphere (Rosas & Serrano-Puche, 2018).
A wide range of actors may engage in disinformation warfare online. While the modal actors engaged in disinformation campaigns are generally the government ministries in authoritarian regimes, these actors are predominantly the political parties in democratic states (Bradshaw & Howard, 2018). While the former's disinformation campaigns function alongside efforts to secure cyberinfrastructure and content as well as the exertion of pressure on the regulation of cyberspace, the latter focus primarily on targeting the domestic audience during elections or major political events (Bradshaw & Howard, 2018). In this context, governments may employ disinformation as a tool for withholding information, which helps them maintain a monopoly over information and diminishes society's power to hold them accountable for inaction when the state controls access to information (Pentney, 2022).
Although the diverse interests outlined here are limited to the government and political parties, foreign adversaries may also have vested interests in intervening in the affairs of another nation. Foreign governments may employ clandestine psychological warfare on social media for a range of persuasion purposes, ranging from influencing users’ attitudes about certain politicians or policies to undermining trust in the government (Arnold et al., 2021). For instance, Russia has been accused of engaging in propaganda in the run-up to the 2016 U.S. presidential election (de Groot, 2022), and Iranian state actors are said to have expanded their disinformation campaign on social media in order to sow dissension inside the U.S. (Bennett, 2021).
The discussion thus far reveals a complex web of disinformation operations by a range of players, including the state, political parties, and foreign governments. However, few studies to date have examined all of these campaigns in tandem (Ruohonen, 2021). In particular, the extent of influence that each type of disinformation wields on societal polarization remains largely underexplored in the extant literature.
2.3 Hate Speech
Hate speech refers to abusive or hostile remarks intended to demean a segment of the population on the basis of their actual or perceived innate characteristics (Cohen-Almagor, 2014). Continuous exposure to hate speech can have crippling consequences on an individual's behavioral and emotional response and breeds intergroup contempt as a result of the pejorative rhetoric that hate speech carries (Bilewicz & Soral, 2020). Hate speech can result in a deterioration of empathy and social norms, causing society to become more receptive to offensive language and less tolerant of targeted communities.
The repercussions of hate speech include fostering an environment of prejudice and intolerance, promoting acts of violence (Iginio et al., 2015), incivility, denying individuals their personal liberties, and stereotyping certain social groups (Papacharissi, 2004). Politicians who incite hate speech against specific ethnic or religious groups in society make those groups’ members more allied with other members of their own group and less tolerant of individuals from rival factions (Piazza, 2020). In this regard, hate speech poses a threat to the social fabric by increasing divisiveness (Piazza, 2020). While research highlights that both hate speech and disinformation result in polarization (Kojan et al., 2020), it can be challenging to distinguish between the two when political discourse becomes increasingly infused with hate (Stokel-Walker, 2019). In light of this blurring distinction between the two phenomena, we contend that the interactions between hate speech and disinformation toward influencing polarization merit scrutiny.
2.4 Internet Censorship and Social Media Monitoring
Censorship involves governments’ efforts to suppress the free flow of ideas and information that deviates from the status quo and delineate which communications are acceptable and unacceptable in society (Liu & Wang, 2021). Unlike censorship, which blocks online content, social media monitoring employs surveillance technologies. Online surveillance refers to governments’ ongoing automated efforts to gather, store, and analyze digital traces of particular individuals or the entire population (Büchi et al., 2022; Chan et al., 2022). Censorship and surveillance are inextricably linked in the present day, considering that online surveillance activities may involve intercepting all outgoing internet requests and assessing whether they are for banned websites (Doctorow, 2012).
In light of the internet’s potential to enhance democracy by expanding avenues for political participation (Margetts, 2013), states’ fears and skepticism over this emancipatory potential have resulted in censorship efforts to bar users from gaining access to material from sources the authorities object to (Busch et al., 2018). Under the guise of national security, governments have been accused of censoring the internet, and such actions are viewed as a danger to civil society in the digital age (Chang & Lin, 2020) through interference in citizens’ online freedom and access to information (Busch et al., 2018). Along similar lines, online surveillance is viewed as an advanced form of coercion and social control that hinders the exercise of basic civil liberties (Richards, 2013) by limiting participation and impeding the flow of knowledge in society (Marwick, 2012; Staples, 2014; Trottier, 2016; Westcott & Owen, 2013). The looming threat of social media surveillance in societies may deter citizens from engaging in political debates and diminish the mobilizing power of online news content (Chan et al., 2022).
The spiral of silence theory offers a useful lens to unravel the influence of censorship and surveillance on societal polarization. According to the spiral of silence theory of media effects on public opinion, people opt to speak out against issues based on the climate of opinion (Noelle-Neumann, 1993). However, censorship measures by the state could also be accompanied by self-censorship. For instance, while censorship could be imposed on the journalistic press (Arsan, 2013), journalists also could self-censor to avoid political pressures (Schimpfössl et al., 2020). When individuals self-censor, the opinion climate becomes increasingly homogeneous, and a dominating viewpoint in society is reinforced (Noelle-Neumann, 1993). Similarly, when individuals are aware that their social media activity is being monitored, they tend to recalibrate their behavior (Lyon, 2007). Monitoring content on SNSes thereby leads to constrained behavior on these sites (Trottier, 2016). The monitoring inhibits the freedom of expression on these sites, as users are cautious about expressing their political leanings honestly for fear of grave consequences such as online harassment and its reverberating effects on their work and personal lives (Kruse et al., 2018).
In light of this restrictive nature of publicly articulated viewpoints, public discourse tends to be choked by a spiral of silence that threatens democratic systems (Dahlgren, 2002, 2005) and leads to polarization (Sunstein, 2018). Moreover, given the inseparability of surveillance and censorship in current times (Doctorow, 2012) and discussions of the two concepts in conjunction with one another (Zhai et al., 2022), we contend that a combined perspective could shed light on how their interaction contributes to the polarization of societies.
2.5 Complexity Theory and Configurational Analysis
Our study is conceptually grounded in the core tenets of complexity theory and configurational analysis (Rihoux & Ragin, 2008; Woodside, 2017), which aid in building a thorough comprehension of interlinkages between decisions and individuals’ actions in a networked world (Merali, 2006). Complexity theory “suggests that the whole is greater than the sum of its parts in complex adaptive systems” (Hurlburt, 2013, p. 28) and provides an opportunity to simplify ostensibly complex systems. The theory draws predominantly on the work of Byrne (1998) and Byrne and Callaghan (2013) and comprises two basic tenets: (a) the complex context comprises an open system with transformative ability, and (b) it has a pathway to accomplish a particular outcome and can be configured towards that purpose (Byrne, 2005). Complex systems are thus understood as constituting several complex interacting elements (Frenken, 2006). The explicit modeling of elements such as “conjunctural causation, equifinality, causal asymmetry, and sufficiency” underpins configurational analysis as an approach (Fainshmidt et al., 2020, p. 456). Under conjunctural causation, the outcome depends on a combination of causes, and configurational analysis helps researchers theorize and empirically investigate causal complexity (Fainshmidt et al., 2020).
Problems in the real world are incredibly challenging to explicate through precisely symmetrical relations between the antecedents and consequences of interest, as all real-world problems have asymmetric linkages (Ragin, 2008). This asymmetry is used to demonstrate how simple linear models cannot account for complicated interactions (Anderson, 1999). Byrne (2005) expands on this by emphasizing the relationship between the theory of complexity and configurations, which are combinations of characteristics that illuminate the pathways to the outcome of interest. In this aspect, causal complexity has a strong relationship with complexity theory and refers to circumstances in which numerous explanatory factors interact in sophisticated and, at times, divergent patterns, and manifests in equifinality (i.e., an outcome can be explained through several pathways) (Furnari et al., 2021).
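The notions of sufficiency and equifinality invoked here have precise counterparts in fuzzy-set analysis. As a minimal illustration (the membership scores below are hypothetical, not data from this study), a configuration's membership is the case-wise minimum across its conditions, and Ragin's (2008) consistency and coverage measures quantify how well that configuration is sufficient for the outcome:

```python
# Toy illustration of fsQCA sufficiency metrics (Ragin, 2008).
# All membership scores below are hypothetical, not drawn from V-Dem.

def conjunction(*condition_scores):
    """Fuzzy AND: case-wise minimum across condition memberships."""
    return [min(scores) for scores in zip(*condition_scores)]

def negation(scores):
    """Fuzzy NOT: membership in the absence of a condition."""
    return [1 - s for s in scores]

def consistency(x, y):
    """Degree to which configuration X is a subset of outcome Y."""
    return sum(min(a, b) for a, b in zip(x, y)) / sum(x)

def coverage(x, y):
    """Share of outcome Y accounted for by configuration X."""
    return sum(min(a, b) for a, b in zip(x, y)) / sum(y)

# Four hypothetical countries: memberships in "high disinformation",
# "high hate speech", and the outcome "polarized society".
disinfo = [0.9, 0.8, 0.3, 0.8]
hate    = [1.0, 0.7, 0.6, 0.9]
polar   = [0.95, 0.8, 0.6, 0.7]

config = conjunction(disinfo, hate)   # disinformation AND hate speech
print(round(consistency(config, polar), 3))
print(round(coverage(config, polar), 3))
```

In a full fsQCA, these measures are computed for every configuration in the truth table, and only configurations whose consistency exceeds a threshold (commonly around 0.8) are retained for logical minimization; equifinality then appears as multiple surviving configurations that all cover the same outcome.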
In the wake of recent arguments in the IS discipline regarding the appropriateness of conventional statistical methods for modeling complex phenomena, questions have been raised around the assumptions surrounding simple causality (Footnote 2) (Delgosha et al., 2021; El Sawy et al., 2010; Liu et al., 2017). In a similar vein, concerns have been raised around the non-normality of datasets, insufficient attention toward contrarian cases, and multicollinearity (Olya & Mehran, 2017). The configurational perspective is a welcome departure from these conventional approaches, since it offers scholars in the IS discipline a new set of tools for data analysis, a unique theoretical base, and a holistic approach to extend and enhance our understanding of constructs (Delgosha et al., 2021).
In the context of the current study, we contend that polarization is best analyzed through the lens of complexity theory (Kelly, 2021; Levin et al., 2021). We support our assertion with two reasons. First, incorporating the principle of equifinality, the outcome of interest can be equally characterized by various sets of condition variables that combine in sufficient configurations for the outcome to occur (Fiss, 2011; Woodside, 2014). In this regard, the significance of propaganda operations in generating societal polarization is obvious from our discussion thus far (Neyazi, 2020). Despite the fact that disinformation campaigns may be executed by a range of entities, including the state, political parties, and foreign governments, few studies have analyzed all of these efforts concurrently to date (Ruohonen, 2021). In particular, the extent of influence each type of disinformation wields on polarizing societies remains largely underexplored in the extant literature. Hate speech contributes to polarization as well (Piazza, 2020), while state-controlled media with censorship efforts could also polarize societies (Zhu, 2019). Hence, configurations resulting in polarization may include combinations of disinformation emerging from the state, political parties, and foreign adversaries coupled with hate speech, censorship, and surveillance efforts by the governments.
Second, the principle of causal asymmetry contends that the outcome’s occurrence depends on how the condition variables interact with one another and not just on their presence or absence (Woodside, 2014). In this regard, while hate speech poses a threat to the social fabric through a deepening of polarization (Piazza, 2020), government efforts to censor online content and monitor social media may limit publicly expressed opinions through a spiral of silence, which could threaten democratic systems (Dahlgren, 2002, 2005) and lead to polarization (Sunstein, 2018). While nationalists may adhere to state-controlled media, liberals may seek less regulated media outlets (Zhu, 2019). In this respect, censorship creates political divisiveness and breeds extremism (Lane et al., 2021). However, citizens may fail to comprehend the significance of free speech, and certain segments may become accustomed to the stifling of other viewpoints and approve of government crackdowns on nonconformist beliefs (McCall, 2022). In addition, recent research also highlights citizens’ awareness of risks posed by free speech, such as disinformation (Knight Foundation, 2022). Also, while research indicates that surveillance and censorship are closely connected, they can result in differing degrees of political engagement or suppression (Chan et al., 2022). This complex concoction of phenomena opens up the possibility of their interacting in myriad ways to result in polarization.
The discussion thus far demonstrates the complexity of polarization and its causative mechanisms, while also emphasizing the novelty of configurational analysis, in which the mutual influence of the variables enables researchers to study asymmetrical condition variable-outcome relationships. In this regard, Fig. 1 exemplifies our conceptual framework by displaying the condition variables in various types of political disinformation, hate speech, censorship, and social media monitoring that predict societal polarization, our outcome of interest.
3 Methodology
3.1 Data
We rely on archival data collected for each of the condition variables and the outcome of interest from the Varieties of Democracy (V-Dem) database. The V-Dem project is headquartered in the department of political science at the University of Gothenburg, Sweden (Coppedge et al., 2022; Pemstein et al., 2022). Our reliance on this archival dataset is driven by two key reasons: (a) the cross-country nature of the current study constrains the collection of primary data on account of the enormity of the effort such a study would require (Frankfort-Nachmias & Nachmias, 2007); and (b) publicly available archival data, as noted by researchers (e.g., Arayankalam & Krishnan, 2021; Jarvenpa, 1991), comes with several benefits, including easy replication (Calantone & Vickery, 2010) and robustness to the threat of common method bias (Woszczynski & Whitman, 2004).
Notably, the V-Dem project team comprises over 3,500 country experts who support the data collection process by providing expert information through online surveys. The project applies a rigorous process for recruiting country experts. The research team identifies between one hundred and two hundred experts in each country. To preserve objectivity, the study team also avoids prominent individuals, such as political party members. In addition, individuals from varied disciplines, including academia, the media, and the judicial system, are included to ensure a pool with diverse expertise. The project team also ensures the construct validity of instruments by conducting tests based on differential item functioning, a method that evaluates the cross-cultural validity of measures and permits the pooling of data for macro-level investigations (Arayankalam & Krishnan, 2021; Tennant et al., 2004).
Following the initial identification of a pool of potential experts for each country, basic information on each expert (e.g., educational qualifications, current position, area of documented expertise, etc.) is compiled (Coppedge et al., 2022). Prior to the completion of coding, country coordinators engage in multiple interactions with all the experts to ensure internal data consistency. The research team also ensures that at least three out of five experts for each country are natives or permanent residents (Coppedge et al., 2022). However, in the case of countries where it is difficult to identify in-country experts who are qualified and independent of the prevailing regime, allowances are made to steer clear of potential western or northern biases in coding (Coppedge et al., 2022). Despite this step to minimize biases, the possibility of experts exhibiting varying levels of reliability and biases cannot be ruled out (Coppedge et al., 2022). In light of this fact, the V-Dem team employs rigorous statistical techniques that leverage patterns of cross-rater disagreement to estimate variations in reliability and biases and adjust the estimates of the latent, indirectly observed concepts being measured (Coppedge et al., 2022). In doing so, the process enhances confidence in the reliability of the estimates provided by the team and corrects for measurement errors, if any (Coppedge et al., 2022). As a final step, to account for any remaining errors, the research team administers a post-survey questionnaire to each expert that captures their understanding of democracy in their own country and in other countries with diverse democratic or autocratic characteristics; some of these output values are incorporated into the measurement model, while others are used ex post to examine the validity of the model’s output (Coppedge et al., 2022).
In summary, the process ensures that the final values offered by the V-Dem database are adjusted for potential biases and fit for empirical evaluation, with high levels of validity and reliability ensured through adherence to strict protocols (Coppedge et al., 2022; Pemstein et al., 2022). The database has been utilized in studies published in reputed journals such as the British Journal of Political Science (e.g., Edgell et al., 2022) and the American Journal of Political Science (e.g., Claassen, 2020), and it has also been leveraged for disinformation studies in the information systems (IS) domain (Arayankalam & Krishnan, 2021). Our dataset for performing the fuzzy-set qualitative comparative analysis (fsQCA) consists of data from 177 countries for 2021, with two countries excluded for missing data.Footnote 3 The complete list of countries included in the study is provided in Appendix A.
3.2 Condition Variables and Outcome
The outcome of interest in this study is polarization in society, and the condition variables are (a) the government's dissemination of disinformation on social media, (b) political parties' dissemination of disinformation on social media, (c) disinformation by foreign governments, (d) hate speech by political parties, (e) censorship of the internet, and (f) monitoring of social media by the government. The condition variables related to disinformation and hate speech are derived from the measures provided by the Digital Society Project, which focuses on the political environment of the internet and social media (Mechkova et al., 2019). Throughout this article, we refer to the dissemination of disinformation to influence the population within the country as domestic disinformation. In this context, the three disinformation-related condition variables seek to understand separately the extent to which governments, political parties, and foreign adversaries and their agents leverage social media to influence the populace and the political climate in the country. The condition related to hate speech seeks to measure how often major political parties indulge in hate speech in their rhetoric, whereas the conditions pertaining to censorship and social media monitoring quantify the extent to which governments censor information on digital media and monitor political content on SNSs. Each of these variables and the outcome was operationalized using measures from the V-Dem database (Coppedge et al., 2022; Pemstein et al., 2022). Table 1 provides a summary of the measures, their description, and the questions used to operationalize them.
To make the standardized values of the condition variables more intuitive, the notation for the country score on each variable was reversed, following prior literature that has leveraged a similar approach (Arayankalam & Krishnan, 2021; MacCabe et al., 2012). A summary of the variables and descriptive statistics is provided below (see Table 2).
4 Data Analyses and Results
4.1 Contrarian Case Analysis
When two variables are examined and an influence of one over the other is established as a main effect, the relationship holds true for the majority of instances in the sample (Pappas et al., 2016a). However, an association in the opposite direction may also manifest in some instances, necessitating an investigation of such occurrences as well (Woodside, 2014). Contrarian case analysis facilitates the detection of such relationships and hence justifies the rationale for configurational analysis (Woodside, 2014), although it is rarely carried out in research articles (Pappas & Woodside, 2021). Seeking guidance from Pappas et al. (2020), quintiles were created on each variable, and cross-tabulations were performed across these quintiles. The results are presented in Table 3 below, with contrarian cases denoted in bold and the main effect shown in italics. The presence of a significant number of contrarian cases signifies the need for QCA.
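To make the procedure concrete, the quintile split and cross-tabulation described above can be sketched in Python with pandas. The data here are simulated for illustration only and do not reproduce the study's V-Dem scores; the variable names are hypothetical.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)

# Illustrative data only: standardized country scores for one condition
# (e.g., hate speech by political parties) and the outcome (polarization).
n = 177
hate_speech = rng.normal(size=n)
polarization = 0.6 * hate_speech + rng.normal(scale=0.8, size=n)
df = pd.DataFrame({"hate_speech": hate_speech, "polarization": polarization})

# Split each variable into quintiles and cross-tabulate, following
# Pappas et al. (2020): cases on the main diagonal reflect the main
# effect; cases in the off-diagonal corners are contrarian cases.
df["hs_q"] = pd.qcut(df["hate_speech"], 5, labels=range(1, 6))
df["pol_q"] = pd.qcut(df["polarization"], 5, labels=range(1, 6))
table = pd.crosstab(df["hs_q"], df["pol_q"])
print(table)

# Contrarian cases: high on one variable but low on the other.
contrarian = df[((df["hs_q"] == 5) & (df["pol_q"] == 1)) |
                ((df["hs_q"] == 1) & (df["pol_q"] == 5))]
print(f"contrarian cases: {len(contrarian)}")
```

Even with a strong simulated main effect, some cases typically land in the off-diagonal cells, which is exactly the pattern that motivates a configurational analysis.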
4.2 fsQCA Method
fsQCA is a set-theoretic method that integrates fuzzy logic and fuzzy sets with QCA (Ragin, 2000). Its advantages stem from avoiding the restrictions imposed by conventional regression-based techniques (Woodside, 2017). In regression-based methods, omitting a relevant variable reduces explanatory power and, if the omitted variable is correlated with the included ones, biases the estimates. QCA, in contrast, relies on Boolean algebra rather than correlations, thereby eliminating the issue of omitted variable bias (Fainshmidt et al., 2020). An implication of this is the “absence of a methodological requirement for control variables” when evaluating QCA-based results (Fainshmidt et al., 2020, p. 458).
The method empirically investigates the relationship between the outcome and the multiple combinations of condition variables that can explain it as necessary and sufficient conditions. In this process, we first calibrate the raw data and then analyze it for the presence of necessary conditions, which form a superset of the outcome (Ragin, 2008). This implies that, for each case in the sample, the outcome's fuzzy-set membership score is lower than that of the causal condition (Pappas et al., 2020). As a next step, we analyze the data and cases for sets of sufficient conditions that combine to produce the outcome of interest. We then interpret the results and finally test the predictive validity of the model.
4.2.1 Data Calibration
The next step in configurational analysis is data calibration, which entails converting variable values to fuzzy-set membership scores (Ragin, 2008). Calibration transforms ordinal or interval-scale data into set membership scores through one of two methods, namely the direct and the indirect method of calibration. In the direct method, the researchers select three qualitative breakpoints or anchors for full membership, full non-membership, and a crossover point. In the indirect method, by contrast, researchers rescale the data through qualitative assessment. The choice of method depends on the data and the researchers' expertise (Ragin, 2008). However, the direct method is advised and more common; because its choice of thresholds is clearer, it bolsters the replicability and validity of the findings and leads to more rigorous studies (Pappas & Woodside, 2021).
In line with this guidance, we employed the direct method of calibration in the current study, choosing 0.95, 0.50, and 0.05 as the three membership anchors. The transformation yields a log-odds metric with values in the range of 0 to 1 (see Table 4). Seeking guidance from Pappas and Woodside (2021), we used percentiles to compute the threshold (or breakpoint) values and used these to calibrate the data in fsQCA (see Table 4 for the percentiles used as threshold values). In line with guidance from Fiss (2011), a constant of 0.001 was added to condition values below 1, as fsQCA cannot evaluate cases with membership scores of exactly 0.5 (Delgosha et al., 2021; Pappas et al., 2020; Ragin, 2008). This correction ensures that no case is left out of the analysis while having no effect on the results (Pappas et al., 2020).
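A minimal sketch of the direct calibration step, assuming the standard logistic (log-odds) transformation behind fsQCA's calibrate routine. The raw scores and percentile anchors below are simulated for illustration and do not correspond to the values in Table 4.

```python
import numpy as np

def calibrate(x, full_non, crossover, full_in):
    """Direct method of calibration (Ragin, 2008): map raw values onto
    fuzzy-set membership scores via a log-odds transformation anchored at
    full non-membership (0.05), the crossover (0.50), and full membership (0.95)."""
    x = np.asarray(x, dtype=float)
    dev = x - crossover
    # Scaling factors chosen so the anchors map exactly onto 0.95 and 0.05.
    s_above = np.log(0.95 / 0.05) / (full_in - crossover)
    s_below = np.log(0.05 / 0.95) / (full_non - crossover)
    scale = np.where(dev >= 0, s_above, s_below)
    return 1.0 / (1.0 + np.exp(-dev * scale))

# Illustrative raw scores; anchors taken at the 95th, 50th, and 5th
# percentiles, following Pappas and Woodside (2021).
raw = np.random.default_rng(1).normal(size=177)
p95, p50, p05 = np.percentile(raw, [95, 50, 5])
fs = calibrate(raw, p05, p50, p95)

# Fiss (2011) correction: add 0.001 to scores below 1 so that no case
# sits at exactly 0.5 and gets dropped from the analysis.
fs = np.where(fs < 1, fs + 0.001, fs)
```

By construction, a raw value at the 5th percentile calibrates to roughly 0.05, the median to 0.50, and the 95th percentile to roughly 0.95.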
4.2.2 Analyzing Necessary Conditions
The necessary condition analysis evaluates whether an individual condition must be present for the outcome to occur. It examines the extent to which instances of the outcome concur in displaying the condition variable believed to be required for the outcome, as well as that variable's empirical relevance in terms of consistency and coverage (Muñoz et al., 2022). In other words, when an outcome is present, a condition deemed necessary is also present, such that the outcome's occurrence is contingent on the condition variable (Schneider et al., 2012). The results of the necessary condition analysis for high and low degrees of polarization are presented in this section (Ragin & Davey, 2016). Conditions with a consistency value above the threshold of 0.9 are regarded as necessary (Ragin, 2008). In the context of the current study, however, the analyses demonstrate that no single condition is necessary for either a high or a low level of polarization. The results are shown in Table 5.
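The consistency and coverage measures used in this step follow Ragin's (2008) set-theoretic formulas. A minimal sketch, with made-up membership scores for three hypothetical cases:

```python
import numpy as np

def necessity_consistency(condition, outcome):
    """Consistency of X as necessary for Y: sum(min(x_i, y_i)) / sum(y_i).
    A value of 1 means the outcome never exceeds the condition."""
    x, y = np.asarray(condition, float), np.asarray(outcome, float)
    return np.minimum(x, y).sum() / y.sum()

def necessity_coverage(condition, outcome):
    """Empirical relevance of a necessary condition:
    sum(min(x_i, y_i)) / sum(x_i)."""
    x, y = np.asarray(condition, float), np.asarray(outcome, float)
    return np.minimum(x, y).sum() / x.sum()

# Hypothetical scores: the condition is always at least as large as the
# outcome, so necessity consistency is perfect (1.0).
cond = [0.9, 0.8, 0.7]
out = [0.5, 0.6, 0.7]
print(necessity_consistency(cond, out))  # 1.0
print(necessity_coverage(cond, out))     # 0.75
```

A condition would be flagged as necessary only when its consistency exceeds the 0.9 threshold applied in the study; none of the conditions reached this level.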
4.2.3 Analyzing Sufficient Conditions
As opposed to the necessary condition analysis, which operates at the level of each individual condition, sufficient condition analysis examines whether a particular condition or set of conditions is capable of producing the outcome. After calibrating the data, a truth table comprising 2^k rows was constructed, with each row representing a possible configuration of the k condition variables. The truth table was then refined using the criteria of frequency and consistency (Ragin, 2008). The frequency cut-off point is chosen to secure a minimum number of empirical cases for configurational evaluation.
Setting frequency thresholds for the truth table involves a trade-off: higher thresholds secure more cases in the sample for each configuration but reduce the sample percentage (or coverage) explained by the configurations that are retained, whereas lower thresholds boost coverage but leave fewer cases represented by each configuration. Based on guidance from prior literature (Fiss, 2011; Pappas & Woodside, 2021; Ragin, 2008), samples with more than 150 cases may use a frequency threshold of 3. Our sample comprises 177 cases, and hence we opted for 3 as the frequency threshold for reducing the truth table.
Second, in accordance with recommendations from past research, a minimum consistency threshold of 0.75 must be specified after excluding configurations with low frequency (Rihoux & Ragin, 2008). At this stage, the researcher is required to identify natural breaking points and opt for a suitable threshold with appropriate justification (Pappas & Woodside, 2021). To aid in determining the threshold, fsQCA provides another measure, proportional reduction in inconsistency (PRI), which helps prevent simultaneous subset relations of configurations in both the outcome and its negation. It is recommended to set the PRI consistency threshold above 0.5, as values below this level are regarded as significantly inconsistent (Greckhamer et al., 2018). On this basis, we set the cut-off for PRI consistency at 0.75 and the raw consistency threshold at 0.95 for a high degree of polarization as the outcome. For a low degree of polarization as the outcome, given the lower values of raw and PRI consistency (with only one configuration having a PRI consistency above 0.75), we chose a PRI consistency threshold of 0.5 and a raw consistency threshold of 0.9.
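The truth-table construction and refinement described above can be sketched as follows. This is a simplified illustration on simulated membership scores with k = 3 conditions, not the fsQCA software routine (Ragin & Davey, 2016) used in the study, and it applies only the raw consistency filter.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

# Illustrative fuzzy-set membership scores for k = 3 conditions and an outcome.
k = 3
fs = pd.DataFrame(rng.random((177, k)), columns=[f"c{i}" for i in range(k)])
outcome = pd.Series(rng.random(177), name="outcome")

# Each case belongs most strongly to one of the 2^k corners of the property
# space: the configuration in which its membership (min over each condition
# or its negation) exceeds 0.5.
rows = []
for combo in range(2 ** k):
    bits = [(combo >> i) & 1 for i in range(k)]
    m = np.ones(len(fs))
    for i, b in enumerate(bits):
        m = np.minimum(m, fs.iloc[:, i] if b else 1 - fs.iloc[:, i])
    freq = int((m > 0.5).sum())
    consistency = np.minimum(m, outcome).sum() / m.sum()
    rows.append({"config": "".join(map(str, bits)),
                 "n_cases": freq, "consistency": consistency})

truth = pd.DataFrame(rows)
# Refine the truth table: keep configurations meeting the frequency
# threshold (3, for samples above 150 cases) and a raw consistency cut-off.
retained = truth[(truth["n_cases"] >= 3) & (truth["consistency"] >= 0.75)]
print(retained)
```

With random data, few rows survive the consistency filter; in the study, the retained rows feed the Quine-McCluskey minimization that yields the solution configurations.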
FsQCA computes three solutions for the outcome of interest, namely the complex, parsimonious and intermediate solutions. The parsimonious solution is a simplified form of the complex solution, presenting the most important conditions that cannot be excluded from any solution and are also known as core conditions, whereas the complex solution presents all possible combinations of conditions when traditional logical operations are used (Fiss, 2011). In contrast, the intermediate solution includes the parsimonious solution and is a component of the complex solution. Additionally, whereas core conditions are present in both the intermediate and parsimonious solutions, peripheral conditions are those that are excluded in the parsimonious solution and only occur in the intermediate solution (Fiss, 2011). In this regard, core conditions exhibit a strong causal relationship with the outcome, given their presence in both the intermediate and parsimonious solutions (Ho et al., 2016).
The results are depicted in Table 6. For a configuration to be deemed sufficient, the consistency value must be greater than 0.75 (Pappas & Woodside, 2021) and the coverage value must exceed 0.2 (Rasoolimanesh et al., 2021). In the current context, all configurations (or models) conform to this requirement and may be regarded as the set of sufficient configurations. The analysis reveals three solutions each for high and low degrees of polarization. The solutions in Table 6 explain the outcome for a high degree of polarization with raw coverage ranging from 29 to 33 percent of the cases associated with the outcome, while the outcome for a low degree of polarization is explained through a range of 32 to 57 percent of the associated cases.
4.2.4 Assessing Predictive Validity
A key test of the robustness of the fsQCA analysis is to examine whether the model performs equally well on a different sample by assessing its predictive validity (Pappas et al., 2016b; Woodside, 2014). Although not frequently evaluated, this test assumes great significance given that a model with good fit may not always explain the outcome as expected (Pappas et al., 2020). Following the guidance of Pappas et al. (2019), the test involves a two-step process: first, the sample is randomly partitioned into a subsample and a holdout sample, and the fsQCA analysis is conducted on the subsample; second, the results of this subsample analysis are compared against the holdout sample. The two samples must explain the outcome to a similar extent.
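A minimal sketch of this two-step split, again on simulated calibrated scores. The configuration evaluated on the holdout sample (foreign disinformation AND hate speech) is purely illustrative and is not one of the study's solutions.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Illustrative calibrated membership scores for 177 cases.
df = pd.DataFrame(rng.random((177, 3)),
                  columns=["foreign_disinfo", "hate_speech", "outcome"])

# Step 1: randomly partition the sample into a subsample and a holdout sample.
idx = rng.permutation(len(df))
sub, holdout = df.iloc[idx[:89]], df.iloc[idx[89:]]

# Step 2: a configuration derived from the subsample analysis is evaluated
# on the holdout sample. Configuration membership is the fuzzy AND (minimum),
# with consistency = sum(min(m, y)) / sum(m) and coverage = sum(min(m, y)) / sum(y).
m = np.minimum(holdout["foreign_disinfo"], holdout["hate_speech"])
y = holdout["outcome"]
consistency = np.minimum(m, y).sum() / m.sum()
coverage = np.minimum(m, y).sum() / y.sum()
print(f"holdout consistency={consistency:.3f}, coverage={coverage:.3f}")
```

Predictive validity is supported when the holdout consistency and coverage are comparable to those obtained on the subsample, as reported in Table 7 and Fig. 2.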
The configurations comprising complex antecedent conditions are consistent indicators of the outcome of interest, i.e., a high degree of polarization, with an overall solution consistency and coverage of 0.948 and 0.439, respectively (see Table 7 for details). Each solution in the table represents a model that is plotted against the outcome of interest using the data from the holdout sample. The resulting plots indicate highly consistent models, with consistencies ranging from 0.935 to 0.995 and coverage from 0.244 to 0.337 (see Fig. 2).
4.2.5 Supplemental Analysis
Prior political research suggests using lagged variables as a robustness check due to the dynamic nature of data (Kenny, 2020). Hence, to confirm the validity of results in the event of a lag,Footnote 4 we incorporated the V-Dem database’s 2020 data for the condition variables (Coppedge et al., 2021; Pemstein et al., 2021) while we retained the values for polarization as outcome from V-Dem’s 2021 data (Coppedge et al., 2022; Pemstein et al., 2022).
We used the direct method of data calibration. The thresholds based on this method are listed in Table 8.
The necessary condition analysis (see Table 9) reveals that none of the conditions is a necessity in itself for the realization of the outcome.
Similar to the previous instance, in the context of sufficiency analysis (see Table 10), we set the cut-off for PRI consistency at 0.75 and set the consistency threshold at 0.95 for high degree of polarization as the outcome. In the context of low degree of polarization as the outcome, we chose a PRI consistency threshold of 0.5 and the raw consistency threshold of 0.9.
4.2.6 Interpreting the Results
The results from the primary analysis reveal six different combinations of disinformation, hate speech, internet censorship, and social media monitoring that are associated with high and low degrees of polarization. For a high degree of polarization, foreign disinformation and hate speech as core conditions are reinforced by complementary ingredients in the form of disinformation by the state and political parties in two configurations, combined with the absence of one or both of surveillance and censorship across the three configurations. Solutions S1H and S2H present combinations in which all forms of disinformation, namely disinformation by the state, political parties, and foreign governments, are present along with hate speech by political parties. In addition, censorship of the internet is absent in solution S1H, while monitoring of social media by the government is absent in solution S2H. Solution S3H presents a combination in which disinformation peddled by foreign governments is present along with hate speech by political parties, whereas all other forms of disinformation are absent, as are both internet censorship and monitoring of social media by the government.
For a low degree of polarization, the absence of hate speech as a core condition common across the three configurations is reinforced by other peripheral conditions which vary across the three configurations and are discussed below. Solution S1L presents a combination where all forms of disinformation and hate speech are absent and a low degree of polarization is present regardless of internet censorship and monitoring of social media by the government. Solution S2L represents a combination wherein most conditions are absent, and the solution is indifferent to disinformation by foreign governments. Solution S3L represents a combination wherein hate speech is absent, disinformation by the state and foreign governments is present, both social media monitoring and internet censorship are present, and a low degree of polarization is present regardless of disinformation by political parties.
In summary, the results of the main analysis suggest that there exist multiple, equally effective configurations of disinformation variants that cause polarization in societies. The configurations also reveal that individual conditions may be present or absent depending on how they combine with other conditions, which is indicative of causal asymmetry. Lastly, at least one variant of disinformation is present as a condition across all configurations that represent a high degree of polarization.
Moving on to an analysis of data with a one-year lag between the condition variables and the outcome, the analysis reveals two configurations associated with a high level of polarization (S1SH and S2SH) that are comparable to those from the primary analysis (S1H and S2H), although they are distinct in terms of core and peripheral conditions. Configurations S3SH and S5SH reveal that disinformation by the government and political parties and the absence of foreign disinformation operations result in a high degree of polarization, while the government's role in censorship and monitoring of social media, along with hate speech in society, may either be present (S5SH) or absent (S3SH). However, when all forms of disinformation are absent, censorship and monitoring of social media combined with hate speech can result in a high level of polarization (S4SH). In contrast, three configurations associated with a low degree of polarization reveal the absence or minor role of most conditions (S1SL to S3SL). However, even if governments engage in disinformation operations, monitoring of social media and censorship combined with an absence of both hate speech and disinformation by political parties ensures low levels of polarization (S4SL).
5 Discussion
The current study leverages complexity theory and argues that various forms of disinformation, hate speech, government monitoring of social media, and internet censorship form configurations that predict the degree of societal fragmentation. To this end, a conceptual model that serves as the basis for identifying configurations was constructed.
The primary analysis underscores the core function of disinformation by foreign adversaries and hate rhetoric in the setting of highly polarized societies. However, these conditions may or may not be complemented by disinformation by the state and political parties, while censorship or surveillance is absent across the configurations (S1H-S3H). In contrast, the supplementary analysis related to high polarization reveals disinformation by the state and political parties emerging as more prominent through their presence across four configurations (S1SH, S2SH, S3SH, S5SH) as opposed to foreign disinformation present only across two of the configurations (S1SH, S2SH). This supports recent claims around domestic disinformation emerging as a greater threat than disinformation from foreign adversaries (Bentzen, 2021) while the possibility of blurring lines between the two disinformation variants with foreign agents engaging local actors as proxies to carry out disinformation operations (Polyakova & Fried, 2020) cannot be ruled out. In addition, the supplemental analysis reveals that hate speech remains a core condition across four configurations (S1SH, S2SH, S4SH, S5SH). A key difference is the presence of censorship and surveillance along with hate speech across two configurations (S4SH and S5SH) which highlights that hate speech’s role in fragmenting societies may be bolstered by the surveillance and censorship efforts. This aligns with recent literature highlighting that censorship may be used to suppress dissent and that hate speech laws could be misused to silence political viewpoints that are not viewed as favorable by the state (Brown & Sinclair, 2019). In this regard, we contend that while opposing viewpoints are suppressed, a certain political faction may continue to engage in hate rhetoric, and the amplification of this narrative in light of other views suppressed by censorship efforts may fuel polarization.
In the context of low polarization, hate rhetoric is absent across all configurations in the primary analysis. In addition, while disinformation dissemination by government and political parties is absent in two of the three configurations (S1L, S2L), polarization is contained even if the state and foreign adversaries engage in disinformation operations in the country (S3L). It is likely that polarization control is made possible by the prevalence of social media monitoring and internet censorship. This is further substantiated by the results of the supplementary analysis which highlights the role of surveillance and censorship in containing polarization despite states’ indulgence in disinformation operations (S4SL).
In conclusion, although the significance of propaganda in promoting societal polarization is well established in the literature (Neyazi, 2020), the varying influence of different forms of disinformation is rarely addressed (Ruohonen, 2021). In this context, the current study emphasizes the central role of political disinformation on social media and the importance of hate speech in polarizing societies. Moreover, in contrast to the commonly held belief that internet censorship and social media monitoring stifle fundamental liberties (Richards, 2013; Zappin et al., 2022), this study emphasizes their role in keeping polarization in check despite the spread of disinformation by a variety of actors, while also highlighting that these surveillance and censorship measures, in combination with hate speech, may fuel polarization in societies.
6 Implications
6.1 Implications for Research
The findings of the study have four implications for research. First, our study offers a novel configural conceptualization of polarization by leveraging fsQCA as a method. Recent research emphasizes the intricacy surrounding polarization as a phenomenon and the fact that it consists of a collection of complex systems with interactions between its numerous components (Kelly, 2021; Levin et al., 2021). Recognizing this sophistication, the current study, leaning on complexity theory as the theoretical foundation, uses fsQCA as a set-theoretical method to unravel the interactions among causal mechanisms which combine in several ways to result in polarized societies. In doing so, we engage in an exploratory analysis of societal polarization and offer a configurational perspective of the phenomenon which suggests that combinations of conditions comprising disinformation, hate speech, censorship and surveillance explain polarization better than an individual condition in isolation. Through this configural analysis, we not only highlight the asymmetric and equifinal nature of conditions coming together to result in polarization, but we also pave the way for future studies to analyze the specifics using symmetric methods, including experiments and observational research, to enhance our understanding of polarization, while asymmetric methods may be further leveraged to expand on the conditions contributing to polarization.
Second, by demonstrating the significance of disinformation in polarizing societies with partisan factions, the study lends credence to recent claims that polarization and societal divisions are outcomes of fake news (George et al., 2021). Comparing various types of disinformation and their role in polarizing societies, the study highlights the crippling impact of the issue in such countries. Third, the study underscores the unabated proliferation of disinformation and its far-reaching consequences. Although attention to fake news soared during the 2016 US presidential elections, during which the scourge was partially attributed to foreign players (Shane, 2017), the analysis reveals that the issue endures unabated. In this regard, the primary analysis highlights the continued dominance of foreign disinformation on social media. Foreign disinformation campaigns on social media are more nuanced than they are portrayed to be (Cohen et al., 2021), and the findings serve as decisive insights for political scientists to prioritize attention on foreign disinformation due to its near-certain menacing ramifications. Nonetheless, the results of the supplemental analysis indicate that domestic disinformation cannot be ignored either. Fourth, the study highlights the overpowering nature of social media monitoring and internet censorship by governments in combating disinformation campaigns on social media. While political scientists have been vocal in accusing censorship and monitoring of social media of suppressing freedom of speech online (Shahbaz & Funk, 2021), such bans may serve as "necessary evils" in conditions where accessing social media can do more harm than good amidst a saturated environment of political disinformation on social media websites.
Censorship may help combat such campaigns (Golovchenko, 2022), and the insights from this study serve as a call for political scientists to broaden their horizons and take a balanced perspective on the harms and benefits of online censorship and monitoring efforts. The supplemental analysis indicates, however, that such perspectives must proceed with caution, since censorship and monitoring measures in a setting of hate speech may reinforce the hate rhetoric and aggravate polarization.
6.2 Implications for IS Research
There are three specific implications of this work for IS research. First, the findings provide IS scholars insights into the causal mechanisms leading to polarization. Recent research analyzing countermeasures to cope with increased online fragmentation, such as flagging false content and increasing exposure to opposing perspectives have predominantly yielded conflicting results, rendering these methods ineffectual (Lin et al., 2022). In this context, our research endeavors to elucidate the intricacies surrounding polarization as a concept and emphasizes the necessity to evaluate countermeasures not in isolation but in conjunction with one another, taking into account the interconnections that contribute to the polarization of societies.
Second, the findings offer crucial insights to IS researchers on how information on SNSes affects users’ perceptions and engagement with fake news. The results indicate that the perceptions have far-reaching consequences by translating to offline behaviors, creating divisive factions split on partisan views. In particular, recent research highlights the role of opinion amplification in causing extreme polarization on social networks (Lim & Bentley, 2022), which indicates a looming possibility of falsehood and polarization amplifying each other (Cinelli et al., 2021a, b; Das et al., 2023; Vicario et al., 2019) with the need to tackle falsehood becoming crucial towards breaking this vicious cycle (Das et al., 2023). In this regard, the current research prompts IS researchers to view disinformation as a nuanced phenomenon constituting several variants and sheds light on the differing effects of variants on polarization.
Lastly, the insights offer necessary direction to IS researchers collaborating in the design of pedagogical interventions to raise awareness of political disinformation and improve digital literacy among citizens. The study reveals that such interventions must instill an understanding of the disinformation and propaganda strategies employed by diverse entities, such as foreign governments, and their far-reaching effects on polarizing societies. In addition, these campaigns must raise awareness about hate speech and provide a balanced perspective on social media monitoring and internet censorship by highlighting both the problematic and beneficial features of such containment measures. Furthermore, falsehood and polarization extend beyond the political sphere, with far-reaching impact in other domains such as healthcare (e.g., Vraga & Bode, 2020). In this regard, the findings equip health IS researchers with important perspectives for countering the effect of falsehood on polarized views in the healthcare domain and aid the development of corrective mechanisms to reduce belief in health-related false information.
6.3 Implications for Practice
The findings of the study have four implications for practice. First, the results serve as crucial insights to business executives seeking to grow their businesses in an increasingly polarized environment. In this regard, polarization poses an economic risk with its destabilizing effects creating uncertainty for firms (David, 2022; Hawkins, 2023). While speaking out on societal issues in polarized nations carries a significant risk for businesses, they are expected to provide trustworthy information backed by evidence and to take a stance on topics such as climate change and discrimination (Hawkins, 2023). In this sense, our study provides crucial insights for executives to comprehend a climate of polarization based on the myriad of causal factors outlined above, and to recalibrate their messaging accordingly to avoid becoming mired in politics.
Second, our study provides empirical evidence of the complexities underpinning polarization as a phenomenon. The World Economic Forum's 2023 global risks report identifies polarization as one of the most imminent and long-term threats and emphasizes the role of misinformation and disinformation in accelerating the erosion of societal cohesion (WEF, 2023; Zahidi, 2023). In light of this crisis, the findings of our study prompt policymakers to adopt a multi-faceted approach to comprehend how polarization takes shape and to formulate appropriate responses to preserve societal cohesion, not only by controlling the flow of disinformation but also by calibrating surveillance and censorship efforts to restore citizens' trust in institutions. In addition, the findings highlight the need for policymakers to draft legislation aimed at curbing hate speech. Governments are starting to recognize the momentum hate speech gathers online and the role of social media platforms in this context (RTÉ, 2022). Given the definitive role of hate speech in polarizing societies and the role of online media in this context, the findings offer the necessary impetus for a long-standing collaboration between policymakers and social media players in curbing this menace, wherein drawing up standards must be accompanied by stricter enforcement to check the spread of hate speech.
Third, the study’s findings offer crucial insights for SNS platform providers, given the immense pressure on platforms stemming from their significance as a breeding ground for disinformation, hate speech, and the exacerbation of societal divisions. As social media prevalence and everyday use increase globally (Chaffey, 2023), recent research indicates that greater social media penetration exacerbates division, leading to civil unrest and violence in societies (Faigle, 2023; Hunter et al., 2023). Amidst rising calls for SNS platforms to check the menace of polarization, the insights from the study clarify the causal mechanisms underpinning the phenomenon and aid platform providers in designing standards and procedures for monitoring, flagging, and removing content that threatens to advance propaganda efforts. The findings also serve as a direction for technologists to step up algorithmic efforts to curb online disinformation.
Fourth, the study offers valuable insights to political analysts seeking a delicate balance between censorship and surveillance in democratic societies. For instance, recent research indicates that censorship efforts may benefit societies affected by foreign propaganda (Golovchenko, 2022), although some bans, such as the European Union’s ban on the Kremlin-backed RT to combat disinformation efforts, were met with resistance from the European Federation of Journalists over concerns that such bans could restrain freedom of expression (EFJ, 2022). Given the need to strike a fine balance that suppresses disinformation without hampering free speech, the findings drive political analysts towards a more balanced perspective on the benefits and harms of government censorship and monitoring efforts, suggesting that such efforts must be viewed in the context of the online information environment and related disorders, and that any inferences must be recalibrated to incorporate this broader perspective.
7 Limitations and Future Research Directions
The study’s findings must be interpreted in light of three particular limitations. First, given the inherent challenge of obtaining primary data for a cross-country comparison, the study draws on secondary data from reputable sources. These data enable us to cast a wide net and analyze outcomes across 177 countries, exceeding 90 percent of the world’s nations. Moreover, considering the credible nature of the data sources and their stringent data collection standards, the data are deemed suitable for performing such a study. However, future research may explore the link between polarization and propaganda operations in distinct sociocultural contexts that may display specific patterns as a result of unique externalities (e.g., the impact of the 2020 US presidential elections on disinformation operations in the country). Second, fsQCA is utilized to identify configurations associated with high and low levels of polarization in societies. In this context, it is important to note that fsQCA does not identify the unique contribution of each variable to the outcome, but rather identifies how these variables combine to produce the outcome (Pappas et al., 2019). Future research may combine other statistical methods with fsQCA to acquire a deeper understanding of the data, and this integration of knowledge derived from various empirical methods may lead to the development of new conceptual models relating to polarization and disinformation efforts. In addition, researchers could extend the current analysis by incorporating more pertinent conditions into the fsQCA model in an effort to increase its explanatory power (Fainshmidt et al., 2020), while regression-based techniques could allow researchers to control for the effects of specific variables.
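As noted above, fsQCA scores how conditions combine rather than estimating each variable's net effect. A minimal sketch of the underlying fuzzy-set measures (Ragin's consistency and coverage), using hypothetical calibrated membership scores rather than the study's data, illustrates this logic:

```python
# Illustrative sketch of the fuzzy-set measures fsQCA relies on
# (consistency and coverage). The membership scores below are
# hypothetical, NOT the study's dataset.

def configuration_membership(cases, conditions):
    """Fuzzy AND (minimum) across conditions for each case.
    Prefix a condition name with '~' to use its negation (1 - membership)."""
    memberships = []
    for case in cases:
        vals = [1.0 - case[c[1:]] if c.startswith("~") else case[c]
                for c in conditions]
        memberships.append(min(vals))
    return memberships

def consistency(x, y):
    """Degree to which membership in configuration X is a subset of outcome Y."""
    return sum(min(a, b) for a, b in zip(x, y)) / sum(x)

def coverage(x, y):
    """Share of the outcome Y that configuration X accounts for."""
    return sum(min(a, b) for a, b in zip(x, y)) / sum(y)

# Hypothetical calibrated memberships for three countries.
cases = [
    {"disinfo": 0.9, "hate": 0.8, "polarization": 0.85},
    {"disinfo": 0.7, "hate": 0.9, "polarization": 0.80},
    {"disinfo": 0.2, "hate": 0.3, "polarization": 0.15},
]
outcome = [c["polarization"] for c in cases]

# Membership in the configuration "high disinformation AND high hate speech".
config = configuration_membership(cases, ["disinfo", "hate"])
print(round(consistency(config, outcome), 3))  # 0.971
print(round(coverage(config, outcome), 3))     # 0.917
```

In this toy example, the configuration is the fuzzy minimum across its conditions: a consistency near 1 indicates the combination is almost always accompanied by high polarization, while coverage indicates how much of the outcome the combination accounts for. Neither measure attributes a separate net effect to disinformation or hate speech individually, which is precisely the property discussed above.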
Lastly, while QCA helps model conjunctural causation and sheds light on causal complexity, conjunctural causation as a concept is distinct from causality, and QCA cannot demonstrate causality on its own (Fainshmidt et al., 2020). While we attempted to partially alleviate these concerns by incorporating a one-year lag between the conditions and the outcome, future research must investigate causal relations, including the possibility of reverse causality, using experimental methods, and draw causal inferences from time series analysis using econometric methods.
8 Conclusion
ICT has been extensively recognized for its role in enhancing professional and personal life, and its impact on a range of societal dimensions, such as economic development, healthcare, education, and networking, is widely acknowledged (Oh et al., 2018). However, amidst this bright side of research focusing on ICT for development, the dark side merits scrutiny. In this context, research on polarization in society and its causal patterns is gaining pace as the phenomenon spreads like wildfire across nations. While past research acknowledges the role of propaganda tactics and hate speech in fueling polarization, countermeasures in the form of online social media monitoring and censorship efforts by governments have drawn severe criticism from various quarters. With numerous political actors indulging in propaganda, the present study is among the first to offer a nuanced perspective on how these disinformation campaigns combined with hate speech, censorship, and surveillance efforts culminate in polarized societies. The findings from two sets of analyses highlight the crucial role of disinformation and hate speech in dividing societies along partisan lines. The study also offers a balanced perspective on online surveillance and censorship, which may help keep polarization in check in a climate of disinformation, although such efforts may also bolster a milieu of hate rhetoric and fuel polarization. In summary, we contend that the study unravels interesting insights into the phenomena of disinformation, hate speech, and online censoring and monitoring efforts, enriching the IS literature on how these phenomena thrive online and contribute to societal fragmentation.
Notes
We thank Referee #3 for this thought.
Simple causality refers to causality that is additive, symmetric, and unifinal in nature.
Liberia and Papua New Guinea did not have any data for societal polarization and hence were excluded from the analysis.
We thank Referee #2 for this thought.
References
Aanestad, M., Kankanhalli, A., Maruping, L., Pang, M.-S., & Ram, S. (2021). Digital Technologies and Social Justice. MIS Quarterly. https://misq.umn.edu/skin/frontend/default/misq/pdf/CurrentCalls/SI_DigitalTechnologies.pdf.
Ahmed, Syed Ishtiaque, Vannini, Sara, O’Neill, Jacki, & Toyama, Kentaro. (2022). Special Issue on: “Information & Communication Technologies and Development—Advances towards a better world with better technologies.” Information Systems Frontiers. https://resource-cms.springernature.com/springer-cms/rest/v1/content/20051926/data/v1. Accessed 23 Sep 2022.
Alba, Davey, & Satariano, Adam. (2019, September 26). At Least 70 Countries Have Had Disinformation Campaigns, Study Finds. The New York Times. https://www.nytimes.com/2019/09/26/technology/government-disinformation-cyber-troops.html. Accessed 6 Sep 2022.
Ali, R., Farooq, U., Arshad, U., Shahzad, W., & Beg, M. O. (2022). Hate speech detection on Twitter using transfer learning. Computer Speech & Language, 74, 101365.
Allcott, H., & Gentzkow, M. (2017). Social Media and Fake News in the 2016 Election. Journal of Economic Perspectives, 31(2), 211–236.
Anderson, P. (1999). Perspective: Complexity Theory and Organization Science. Organization Science, 10(3), 216–232.
Arayankalam, J., & Krishnan, S. (2021). Relating foreign disinformation through social media, domestic online media fractionalization, government’s control over cyberspace, and social media-induced offline violence: Insights from the agenda-building theoretical perspective. Technological Forecasting and Social Change, 166, 120661.
Arbatli, E., & Rosenberg, D. (2021). United we stand, divided we rule: How political polarization erodes democracy. Democratization, 28(2), 285–307.
Arceneaux, Kevin, & Johnson, Martin. (2013). Changing Minds or Changing Channels?: Partisan News in an Age of Choice. University of Chicago Press.
Argyle, L. P., & Pope, J. C. (2022). Does Political Participation Contribute to Polarization in the United States? Public Opinion Quarterly, 86(3), 697–707.
Arnold, J. R., Reckendorf, A., & Wintersieck, A. L. (2021). Source alerts can reduce the harms of foreign disinformation. Harvard Kennedy School Misinformation Review. https://misinforeview.hks.harvard.edu/article/source-alerts-can-reduce-the-harms-of-foreign-disinformation/.
Arora, S. D., Singh, G. P., Chakraborty, A., & Maity, M. (2022). Polarization and social media: A systematic review and research agenda. Technological Forecasting and Social Change, 183, 121942.
Arsan, E. (2013). Killing Me Softly with His Words: Censorship and Self-Censorship from the Perspective of Turkish Journalists. Turkish Studies, 14(3), 447–462.
Au, C. H., Ho, K. K. W., & Chiu, D. K. W. (2021). The role of online misinformation and fake news in ideological polarization: Barriers, catalysts, and implications. Information Systems Frontiers, 1-24. https://link.springer.com/article/10.1007/s10796-021-10133-9.
Baldassarri, D., & Bearman, P. (2007). Dynamics of Political Polarization. American Sociological Review, 72(5), 784–811.
Bennett, Brian. (2021, June 9). Exclusive: Iran Steps up Efforts to Sow Discord Inside U.S. | Time. https://time.com/6071615/iran-disinformation-united-states/. Accessed 9 Sep 2022.
Bentley, C. M., Nemer, D., & Vannini, S. (2019). “When words become unclear”: Unmasking ICT through visual methodologies in participatory ICT4D. AI & Society, 34(3), 477–493.
Bentzen, Naja. (2021). Trump’s disinformation “magaphone”: Consequences, first lessons and outlook. https://policycommons.net/artifacts/1426551/trumps-disinformation-magaphone/2040994/. Accessed 21 Dec 2022.
Bilewicz, Michał, & Soral, Wiktor. (2020). Hate Speech Epidemic. The Dynamic Effects of Derogatory Language on Intergroup Relations and Political Radicalization. Political Psychology, 41(S1), 3–33.
Boxell, L., Gentzkow, M., & Shapiro, J. M. (2017). Greater Internet use is not associated with faster growth in political polarization among US demographic groups. Proceedings of the National Academy of Sciences, 114(40), 10612–10617.
Bradshaw, Samantha, & Howard, Philip N. (2018). The global organization of social media disinformation campaigns. Journal of International Affairs, 71(1.5), 23–32.
Bringula, Rex P., Catacutan-Bangit, Annaliza E., Garcia, Manuel B., Gonzales, John Paul S., & Valderama, Arlene Mae C. (2022). “Who is gullible to political disinformation?”: Predicting susceptibility of university students to fake news. Journal of Information Technology & Politics, 19(2), 165–179.
Brown, A., & Sinclair, A. (2019). The Politics of Hate Speech Laws. Routledge.
Büchi, M., Festic, N., & Latzer, M. (2022). The Chilling Effects of Digital Dataveillance: A Theoretical Model and an Empirical Research Agenda. Big Data & Society, 9(1), 20539517211065370.
Busch, A., Theiner, P., & Breindl, Y. (2018). Internet Censorship in Liberal Democracies: Learning from Autocracies? In J. Schwanholz, T. Graham, & P.-T. Stoll (Eds.), Managing Democracy in the Digital Age: Internet Regulation, Social Media Use, and Online Civic Engagement (pp. 11–28). Springer International Publishing.
Byrne, David. (1998). Complexity Theory and the Social Sciences: An Introduction. Routledge.
Byrne, D. (2005). Complexity, Configurations and Cases. Theory, Culture & Society, 22(5), 95–111.
Byrne, David, & Callaghan, Gillian. (2013). Complexity Theory and the Social Sciences: The state of the art. Routledge.
Calantone, R. J., & Vickery, S. K. (2010). Introduction to the Special Topic Forum: Using Archival and Secondary Data Sources in Supply Chain Management Research. Journal of Supply Chain Management, 46(4), 3–11.
Casola, W. R., Beall, J. M., Nils Peterson, M., Larson, L. R., Brent Jackson, S., & Stevenson, K. T. (2022). Political polarization of conservation issues in the era of COVID-19: An examination of partisan perspectives and priorities in the United States. Journal for Nature Conservation, 67, 126176.
Castaño-Pulgarín, Sergio Andrés, Suárez-Betancur, Natalia, Vega, Luz Magnolia Tilano, & López, Harvey Mauricio Herrera. (2021). Internet, social media and online hate speech. Systematic review. Aggression and Violent Behavior, 58, 101608.
Chaffey, Dave. (2023, January 30). Global social media statistics research summary 2022 [June 2022]. Smart Insights. https://www.smartinsights.com/social-media-marketing/social-media-strategy/new-global-social-media-research/. Accessed 14 Feb 2023.
Chan, M., Yi, J., & Kuznetsov, D. (2022). Government digital repression and political engagement: A cross-national multilevel analysis examining the roles of online surveillance and censorship. The International Journal of Press/Politics, 19401612221117104. https://journals.sagepub.com/doi/abs/10.1177/19401612221117106.
Chang, C.-C., & Lin, T.-H. (2020). Autocracy login: Internet censorship and civil society in the digital age. Democratization, 27(5), 874–895.
Cinelli, Matteo, Pelicon, Andraž, Mozetič, Igor, Quattrociocchi, Walter, Novak, Petra Kralj, & Zollo, Fabiana. (2021a). Online Hate: Behavioural Dynamics and Relationship with Misinformation (arXiv:2105.14005). arXiv.
Cinelli, Matteo, Pelicon, Andraž, Mozetič, Igor, Quattrociocchi, Walter, Novak, Petra Kralj, & Zollo, Fabiana. (2021b). Dynamics of online hate and misinformation. Scientific Reports, 11(1), 1.
Claassen, C. (2020). Does Public Support Help Democracy Survive? American Journal of Political Science, 64(1), 118–134.
Cobbe, J. (2021). Algorithmic Censorship by Social Platforms: Power and Resistance. Philosophy & Technology, 34(4), 739–766.
Cohen, Raphael S., Beauchamp-Mustafaga, Nathan, Cheravitch, Joe, Demus, Alyssa, Harold, Scott W., Hornung, Jeffrey W., Jun, Jenny, Schwille, Michael, Treyger, Elina, & Vest, Nathan. (2021). Combating Foreign Disinformation on Social Media: Study Overview and Conclusions. RAND Corporation. https://www.rand.org/pubs/research_reports/RR4373z1.html.
Cohen-Almagor, R. (2014). Countering Hate on the Internet. Jahrbuch Fur Recht Und Ethik, 22, 431.
Coppedge, M., Gerring, J., Knutsen, C. H., Lindberg, S. I., Teorell, J., Alizada, N., Altman, D., Bernhard, M., Cornell, A., Fish, M. S., Gastaldi, L., Gjerløw, H., Glynn, A., Grahn, S., Hicken, A., Hindle, G., Ilchenko, N., Kinzelbach, K., Krusell, J., … Ziblatt, D. (2022). V-Dem Dataset v12. Varieties of Democracy (V-Dem) Project. https://doi.org/10.23696/vdemds22.
Coppedge, Michael, Gerring, John, Knutsen, Carl Henrik, Lindberg, Staffan I., Teorell, Jan, Alizada, Nazifa, Altman, David, Bernhard, Michael, Cornell, Agnes, Fish, M. Steven, Gastaldi, Lisa, Gjerløw, Haakon, Glynn, Adam, Hicken, Allen, Hindle, Garry, Ilchenko, Nina, Krusell, Joshua, L¨uhrmann, Anna, Maerz, Seraphine F., … Ziblatt, Daniel. (2021). V-Dem Dataset v11.1. Varieties of Democracy (V-Dem) Project. https://doi.org/10.23696/vdemds21.
Coppedge, Michael, Gerring, John, Knutsen, Carl Henrik, Lindberg, Staffan I., Teorell, Jan, Marquardt, Kyle L., Medzihorsky, Juraj, Pemstein, Daniel, Alizada, Nazifa, Gastaldi, Lisa, Hindle, Garry, Pernes, Josefine, Römer, Johannes von, Tzelgov, Eitan, Wang, Yi-ting, & Wilson, Steven. (2022). V-Dem Methodology v12. Varieties of Democracy (V-Dem) Project.
Dahlgren, P. (2002). In Search of the Talkative Public: Media, Deliberative Democracy and Civic Culture. Javnost - the Public, 9(3), 5–25.
Dahlgren, P. (2005). The Internet, Public Spheres, and Political Communication: Dispersion and Deliberation. Political Communication, 22(2), 147–162.
Das, A., Liu, H., Kovatchev, V., & Lease, M. (2023). The state of human-centered NLP technology for fact-checking. Information Processing & Management, 60(2), 103219.
David, Javier E. (2022, November 12). Political volatility is becoming an economic risk. Axios. https://www.axios.com/2022/11/12/politics-economy-midterms-risk-investors. Accessed 14 Feb 2023.
Davidson, Brenna M., & Kobayashi, Tetsuro. (2022). The effect of message modality on memory for political disinformation: Lessons from the 2021 U.S capitol riots. Computers in Human Behavior, 132, 107241.
de Bruin, Wändi Bruine, Saw, Htay-Wah, & Goldman, Dana P. (2020). Political polarization in US residents’ COVID-19 risk perceptions, policy preferences, and protective behaviors. Journal of Risk and Uncertainty, 61(2), 177–194.
de Groot, Kristen. (2022, February 10). Russia’s disinformation campaign against the U.S. Penn Today. https://penntoday.upenn.edu/news/russias-disinformation-campaign-against-us. Accessed 9 Sep 2022.
De Salle, Adam. (2020, June 16). ‘The Social Contract.’ Voice Magazine. https://www.voicemag.uk/blog/7302/the-social-contract. Accessed 18 Feb 2023.
Delgosha, M. S., Saheb, T., & Hajiheydari, N. (2021). Modelling the Asymmetrical Relationships between Digitalisation and Sustainable Competitiveness: A Cross-Country Configurational Analysis. Information Systems Frontiers, 23(5), 1317–1337.
Directorate-General for Communications Networks, Content and Technology (European Commission). (2018). A multi-dimensional approach to disinformation: Report of the independent High level Group on fake news and online disinformation. Publications Office of the European Union. https://data.europa.eu/doi/10.2759/739290. Accessed 9 Sep 2022.
Doctorow, Cory. (2012, March 2). Censorship is inseparable from surveillance. The Guardian. https://www.theguardian.com/technology/2012/mar/02/censorship-inseperable-from-surveillance. Accessed 14 Feb 2023.
Edgell, A. B., Boese, V. A., Maerz, S. F., Lindenfors, P., & Lindberg, S. I. (2022). The Institutional Order of Liberalization. British Journal of Political Science, 52(3), 1465–1471.
EFJ. (2022, January 3). Fighting disinformation with censorship is a mistake. European Federation of Journalists; European Federation of Journalists. https://europeanjournalists.org/blog/2022/03/01/fighting-disinformation-with-censorship-is-a-mistake/. Accessed 26 Jan 2023.
Enders, A. M., & Armaly, M. T. (2019). The Differential Effects of Actual and Perceived Polarization. Political Behavior, 41(3), 815–839.
Ertan, G., Comfort, L., & Martin, Ö. (2023). Political Polarization during Extreme Events. Natural Hazards Review, 24(1), 06022001.
Faigle, Kevin. (2023, January 18). New study shows link between social media posts and civil disobedience, violence. Jagwire. https://jagwire.augusta.edu/new-study-shows-link-between-social-media-posts-and-civil-disobedience-violence/. Accessed 14 Feb 2023.
Fainshmidt, S., Witt, M. A., Aguilera, R. V., & Verbeke, A. (2020). The contributions of qualitative comparative analysis (QCA) to international business research. Journal of International Business Studies, 51(4), 455–466.
Fiss, P. C. (2011). Building Better Causal Theories: A Fuzzy Set Approach to Typologies in Organization Research. Academy of Management Journal, 54(2), 393–420.
Frankfort-Nachmias, C., & Nachmias, D. (2007). Study guide for research methods in the social sciences (7th ed.). Worth Publications.
Frasz, Sofia. (2022). Impacts of Censorship On Political Polarization. Honors Program Theses. https://scholarship.rollins.edu/honors/175.
Freelon, D., & Wells, C. (2020). Disinformation as Political Communication. Political Communication, 37(2), 145–156.
Frenken, K. (2006). Technological innovation and complexity theory. Economics of Innovation and New Technology, 15(2), 137–155.
Furnari, S., Crilly, D., Misangyi, V. F., Greckhamer, T., Fiss, P. C., & Aguilera, R. V. (2021). Capturing Causal Complexity: Heuristics for Configurational Theorizing. Academy of Management Review, 46(4), 778–799.
Gallacher, John D., Heerdink, Marc W., & Hewstone, Miles. (2021). Online Engagement Between Opposing Political Protest Groups via Social Media is Linked to Physical Violence of Offline Encounters. Social Media + Society, 7(1), 2056305120984445.
García-Díaz, J. A., Jiménez-Zafra, S. M., García-Cumbreras, M. A., & Valencia-García, R. (2022). Evaluating feature combination strategies for hate-speech detection in Spanish using linguistic features and transformers. Complex & Intelligent Systems, 1-22. https://link.springer.com/article/10.1007/s40747-022-00693-x.
George, J., Gerhart, N., & Torres, R. (2021). Uncovering the Truth about Fake News: A Research Model Grounded in Multi-Disciplinary Literature. Journal of Management Information Systems, 38(4), 1067–1094.
Golovchenko, Y. (2022). Fighting Propaganda with Censorship: A Study of the Ukrainian Ban on Russian Social Media. The Journal of Politics, 84(2), 639–654.
González-Quiñones, F., & Machin-Mastromatteo, J. D. (2019). On media censorship, freedom of expression and the risks of journalism in Mexico. Information Development, 35(4), 666–670.
Greckhamer, T., Furnari, S., Fiss, P. C., & Aguilera, R. V. (2018). Studying configurations with qualitative comparative analysis: Best practices in strategy and organization research. Strategic Organization, 16(4), 482–495.
Grossman, Shelby, & Goldstein, Josh A. (2021, January 4). How disinformation evolved in 2020. Brookings. https://www.brookings.edu/techstream/how-disinformation-evolved-in-2020/. Accessed 23 Sep 2022.
Gruzd, A., & Roy, J. (2014). Investigating Political Polarization on Twitter: A Canadian Perspective. Policy & Internet, 6(1), 28–45.
Hai, Zuhad, & Perlman, Rebecca L. (2022). Extreme weather events and the politics of climate change attribution. Science Advances, 8(36), eabo2190.
Hatmaker, Taylor. (2022, August 26). Meta, Twitter purge accounts spreading pro-US propaganda. TechCrunch. https://social.techcrunch.com/2022/08/25/meta-and-twitter-graphika-us-influence-campaign/. Accessed 6 Sep 2022.
Hawdon, James, Ranganathan, Shyam, Leman, Scotland, Bookhultz, Shane, & Mitra, Tanushree. (2020). Social Media Use, Political Polarization, and Social Capital: Is Social Media Tearing the U.S. Apart? In Gabriele Meiselwitz (Ed.), Social Computing and Social Media. Design, Ethics, User Behavior, and Social Network Analysis (pp. 243–260). Springer International Publishing.
Hawkins, Eleanor, & Fischer, Sara. (2023, January 16). Polarization eats the developed world. Axios. https://www.axios.com/2023/01/16/political-polarization-developed-world. Accessed 14 Feb 2023.
Heaney, M. T. (2020). Protest at the center of American Politics. Journal of International Affairs, 73(2), 195–208.
Ho, J., Plewa, C., & Lu, V. N. (2016). Examining strategic orientation complementarity using multiple regression analysis and fuzzy set QCA. Journal of Business Research, 69(6), 2199–2205.
Howard, J. W. (2019). Free Speech and Hate Speech. Annual Review of Political Science, 22(1), 93–109.
Humprecht, E., Esser, F., & Van Aelst, Peter. (2020). Resilience to Online Disinformation: A Framework for Cross-National Comparative Research. The International Journal of Press/politics, 25(3), 493–516.
Hunter, Lance Y., Biglaiser, Glen, McGauvran, Ronald J., & Collins, Leann. (2023). The effects of social media on domestic terrorism. Behavioral Sciences of Terrorism and Political Aggression, 0(0), 1–25.
Hurlburt, G. F. (2013). Complexity Theory: A New Paradigm for Software Integration. IT Professional, 15(3), 26–31.
Gagliardone, Iginio, Gal, Danit, Alves, Thiago, & Martinez, Gabriela. (2015). Countering online hate speech. UNESCO Publishing.
Iyengar, S., Sood, G., & Lelkes, Y. (2012). Affect, Not Ideology: A Social Identity Perspective on Polarization. Public Opinion Quarterly, 76(3), 405–431.
Jarvenpa, S. L. (1991). Panning for Gold in Information Systems Research Second-Hand Data. In Information Systems Research: Contemporary Approaches and Emergent Traditions (pp. 63–80). Elsevier Science. https://www.academia.edu/10111140/Jarvenpaa_1991_Panning_for_Gold_in_Information_Systems_Research_Second_Hand_Data. Accessed 21 Aug 2022.
McCall, Jeffrey M. (2022, January 11). Ending nation’s polarization begins with understanding First Amendment. The Hill. https://thehill.com/opinion/civil-rights/589143-ending-nations-polarization-begins-with-understanding-first-amendment/. Accessed 10 Sep 2022.
Kearney, M. W. (2019). Analyzing change in network polarization. New Media & Society, 21(6), 1380–1402.
Kelly, Morgan. (2021, December 9). Political polarization and its echo chambers: Surprising new, cross-disciplinary perspectives from Princeton. Princeton University. https://www.princeton.edu/news/2021/12/09/political-polarization-and-its-echo-chambers-surprising-new-cross-disciplinary. Accessed 6 Sep 2022.
Kenny, P. D. (2020). “The Enemy of the People”: Populists and Press Freedom. Political Research Quarterly, 73(2), 261–275.
Knight Foundation. (2022, January 6). Free Expression in America Post-2020. Knight Foundation. https://knightfoundation.org/reports/free-expression-in-america-post-2020/. Accessed 10 Sep 2022.
Kojan, Lilian, Osmanbeyoglu, Hava Melike, Burbach, Laura, Ziefle, Martina, & Calero Valdez, André. (2020). Defend Your Enemy. A Qualitative Study on Defending Political Opponents Against Hate Speech Online. In Max van Duijn, Mike Preuss, Viktoria Spaiser, Frank Takes, & Suzan Verberne (Eds.), Disinformation in Open Online Media (pp. 80–94). Springer International Publishing.
Kruse, L. M., Norris, D. R., & Flinchum, J. R. (2018). Social Media as a Public Sphere? Politics on Social Media. The Sociological Quarterly, 59(1), 62–84.
Lane, Justin E., McCaffree, Kevin, & Shults, F. LeRon. (2021). Is radicalization reinforced by social media censorship? (arXiv:2103.12842). arXiv.
Lazer, D. M. J., Baum, M. A., Benkler, Y., Berinsky, A. J., Greenhill, K. M., Menczer, F., Metzger, M. J., Nyhan, B., Pennycook, G., Rothschild, D., Schudson, M., Sloman, S. A., Sunstein, C. R., Thorson, E. A., Watts, D. J., & Zittrain, J. L. (2018). The science of fake news. Science, 359(6380), 1094–1096.
Leonard, N. E., Lipsitz, K., Bizyaeva, A., Franci, A., & Lelkes, Y. (2021). The nonlinear feedback dynamics of asymmetric political polarization. Proceedings of the National Academy of Sciences, 118(50), e2102149118.
Levendusky, M. (2013). Partisan Media Exposure and Attitudes Toward the Opposition. Political Communication, 30(4), 565–581.
Levin, S. A., Milner, H. V., & Perrings, C. (2021). The dynamics of political polarization. Proceedings of the National Academy of Sciences, 118(50), e2116950118.
Lim, Soo Ling, & Bentley, Peter J. (2022). Opinion amplification causes extreme polarization in social networks. Scientific Reports, 12(1), 1.
Lin, Fangshi, Teo, Hock-Hai, & Zhai, Yingda. (2022). Breaking Online Tribalism: Motivated Reasoning, Empathy and Polarization. ICIS 2022 Proceedings. https://aisel.aisnet.org/icis2022/soc_impact_is/soc_impact_is/5.
Liu, S., & Wang, D. (2021). Censorship: State control of expression. In The Routledge Handbook of Law and Society, 86-89. Routledge. https://www.taylorfrancis.com/chapters/edit/10.4324/9780429293306-15/censorship-sida-liu-di-wang.
Liu, Y., Mezei, J., Kostakos, V., & Li, H. (2017). Applying configurational analysis to IS behavioural research: A methodological alternative for modelling combinatorial complexities. Information Systems Journal, 27(1), 59–89.
Lyon, David. (2007). Surveillance Studies: An Overview. Polity.
MacCabe, J. H., Brébion, G., Reichenberg, A., Ganguly, T., McKenna, P. J., Murray, R. M., & David, A. S. (2012). Superior intellectual ability in schizophrenia: Neuropsychological characteristics. Neuropsychology, 26(2), 181–190.
Margetts, Helen. (2013). The Internet and Democracy. In William H. Dutton (Ed.), The Oxford Handbook of Internet Studies (p. 0). Oxford University Press.
Marwick, A. (2012). The Public Domain: Surveillance in Everyday Life. Surveillance & Society, 9(4), 378–393.
Matamoros-Fernández, A., & Farkas, J. (2021). Racism, Hate Speech, and Social Media: A Systematic Review and Critique. Television & New Media, 22(2), 205–224.
McCoy, J., Rahman, T., & Somer, M. (2018). Polarization and the Global Crisis of Democracy: Common Patterns, Dynamics, and Pernicious Consequences for Democratic Polities. American Behavioral Scientist, 62(1), 16–42.
Mechkova, V., Pemstein, D., Seim, B., & Wilson, S. (2019). Digital Society Project Working Paper No. 1.
Meneguelli, G., & Ferré-Pavia, C. (2021). Hate speech and social polarization in Brazil. In J. M. Aguado (Ed.), Hate Speech and Polarization in Participatory Society (p. 163). Routledge. https://www.taylorfrancis.com/chapters/oa-edit/10.4324/9781003109891-13/hate-speech-socialpolarization-brazil-gisella-meneguelli-carme-ferr%C3%A9-pavia.
Merali, Y. (2006). Complexity and Information Systems: The Emergent Domain. Journal of Information Technology, 21(4), 216–228.
Meske, C., & Bunde, E. (2022). Design principles for user interfaces in AI-Based decision support systems: The case of explainable hate speech detection. Information Systems Frontiers, 1–31. https://link.springer.com/article/10.1007/s10796-021-10234-5.
Muñoz, P., Kibler, E., Mandakovic, V., & Amorós, J. E. (2022). Local entrepreneurial ecosystems as configural narratives: A new way of seeing and evaluating antecedents and outcomes. Research Policy, 51(9), 104065.
Neyazi, T. A. (2020). Digital propaganda, political bots and polarized politics in India. Asian Journal of Communication, 30(1), 39–57.
Noelle-Neumann, Elisabeth. (1993). The Spiral of Silence: Public Opinion–Our Social Skin (2nd ed.). University of Chicago Press. https://press.uchicago.edu/ucp/books/book/chicago/S/bo3684069.html. Accessed 10 Sep 2022.
Oh, Wonseok, Acquisti, Alessandro, & Sia, Choon Ling. (2018). ICT Challenges and Opportunities in Building a “Bright Society.” Journal of the Association for Information Systems, 19(2). https://aisel.aisnet.org/jais/vol19/iss2/4.
Olan, F., Jayawickrama, U., Arakpogun, E. O., Suklan, J., & Liu, S. (2022). Fake news on social media: the Impact on Society. Information Systems Frontiers, 1-16. https://link.springer.com/article/10.1007/s10796-022-10242-z.
Olya, H. G. T., & Mehran, J. (2017). Modelling tourism expenditure using complexity theory. Journal of Business Research, 75, 147–158.
O’Regan, Catherine, & Theil, Stefan. (2020, February 26). Hate speech regulation on social media: An intractable contemporary challenge. Research Outreach. https://researchoutreach.org/articles/hate-speech-regulation-social-media-intractable-contemporary-challenge/. Accessed 13 Feb 2023.
Ovide, Shira. (2022, August 4). Classic Internet Censorship. The New York Times. https://www.nytimes.com/2022/08/04/technology/internet-censorship.html. Accessed 6 Sep 2022.
Oxford Internet Institute. (2021, January 13). Social media manipulation by political actors now an industrial scale problem prevalent in over 80 countries – annual Oxford report. https://www.oii.ox.ac.uk/news-events/news/social-media-manipulation-by-political-actors-now-an-industrial-scale-problem-prevalent-in-over-80-countries-annual-oxford-report. Accessed 6 Sep 2022.
Papacharissi, Z. (2004). Democracy online: Civility, politeness, and the democratic potential of online political discussion groups. New Media & Society, 6(2), 259–283.
Pappas, I., Giannakos, M., & Sampson, D. (2016a). Making Sense of Learning Analytics with a Configurational Approach (pp. 42–52). https://ceur-ws.org/Vol-1579/paper7.pdf
Pappas, I. O., Giannakos, M. N., & Sampson, D. G. (2019). Fuzzy set analysis as a means to understand users of 21st-century learning systems: The case of mobile learning and reflections on learning analytics research. Computers in Human Behavior, 92, 646–659.
Pappas, I. O., Kourouthanassis, P. E., Giannakos, M. N., & Chrissikopoulos, V. (2016b). Explaining online shopping behavior with fsQCA: The role of cognitive and affective perceptions. Journal of Business Research, 69(2), 794–803.
Pappas, I. O., Papavlasopoulou, S., Mikalef, P., & Giannakos, M. N. (2020). Identifying the combinations of motivations and emotions for creating satisfied users in SNSs: An fsQCA approach. International Journal of Information Management, 53, 102128.
Pappas, I. O., & Woodside, A. G. (2021). Fuzzy-set Qualitative Comparative Analysis (fsQCA): Guidelines for research practice in Information Systems and marketing. International Journal of Information Management, 58, 102310.
Parthasarathy, B., & Ramamritham, K. (2009). Guest editorial: Information and communications technology for development. Information Systems Frontiers, 11(2), 103–104.
Patkós, V. (2023). Measuring partisan polarization with partisan differences in satisfaction with the government: The introduction of a new comparative approach. Quality & Quantity, 57(1), 39–57.
Paz, M. A., Montero-Díaz, J., & Moreno-Delgado, A. (2020). Hate Speech: A Systematized Review. SAGE Open, 10(4), 2158244020973022.
Pemstein, D., Marquardt, K. L., Tzelgov, E., Wang, Y., Medzihorsky, J., Krusell, J., Miri, F., & von Römer, J (2021). “The V-Dem Measurement Model: Latent Variable Analysis for Cross-National and Cross-Temporal Expert-Coded Data”. V-Dem Working Paper No. 21 (6th ed.). University of Gothenburg: Varieties of Democracy Institute.
Pemstein, D., Marquardt, K. L., Tzelgov, E., Wang, Y., Medzihorsky, J., Krusell, J., Miri, F., & von Römer, J. (2022). “The V-Dem Measurement Model: Latent Variable Analysis for Cross-National and Cross-Temporal Expert-Coded Data”. V-Dem Working Paper No. 21 (7th ed.). University of Gothenburg: Varieties of Democracy Institute.
Pentney, K. (2022). Tinker, Tailor, Twitter, Lie: Government Disinformation and Freedom of Expression in a Post-Truth Era. Human Rights Law Review, 22(2), ngac009.
Piazza, J. A. (2020). Politician hate speech and domestic terrorism. International Interactions, 46(3), 431–453.
Piazza, J. A. (2022). Fake news: The effects of social media disinformation on domestic terrorism. Dynamics of Asymmetric Conflict, 15(1), 55–77.
Polyakova, A., & Fried, D. (2020, December 2). Democratic Offense Against Disinformation. CEPA. https://cepa.org/comprehensive-reports/democratic-offense-against-disinformation/. Accessed 21 Dec 2022.
Qureshi, I., Bhatt, B., Gupta, S., & Tiwari, A. A. (2020). Causes, symptoms and consequences of social media induced polarization (SMIP). Information Systems Journal, 11. https://onlinelibrary.wiley.com/pb-assets/assets/13652575/ISJ_SMIP_CFP.pdf.
Qureshi, I., Bhatt, B., Gupta, S., & Tiwari, A. A. (2022). Introduction to the Role of Information and Communication Technologies in Polarization. In I. Qureshi, B. Bhatt, S. Gupta, & A. A. Tiwari (Eds.), Causes and Symptoms of Socio-Cultural Polarization: Role of Information and Communication Technologies (pp. 1–23). Springer.
Ragin, C., & Davey, S. (2016). Fuzzy-Set/Qualitative Comparative Analysis (3.0).
Ragin, C. C. (2000). Fuzzy-Set Social Science. University of Chicago Press.
Ragin, C. C. (2008). Redesigning Social Inquiry: Fuzzy Sets and Beyond. University of Chicago Press. https://press.uchicago.edu/ucp/books/book/chicago/R/bo5973952.html.
Rasoolimanesh, S. M., Ringle, C. M., Sarstedt, M., & Olya, H. (2021). The combined use of symmetric and asymmetric approaches: Partial least squares-structural equation modeling and fuzzy-set qualitative comparative analysis. International Journal of Contemporary Hospitality Management, 33(5), 1571–1592.
Richards, D. (Ed.). (2000). Political Complexity: Nonlinear Models of Politics. University of Michigan Press.
Richards, N. M. (2013). The dangers of surveillance. Harvard Law Review, 126(7), 1934–1965.
Riemer, K., & Peter, S. (2021). Algorithmic audiencing: Why we need to rethink free speech on social media. Journal of Information Technology, 36(4), 409–426.
Rihoux, B., & Ragin, C. C. (2008). Configurational comparative methods: Qualitative comparative analysis (QCA) and related techniques. Sage Publications. https://methods.sagepub.com/book/configurationalcomparative-methods.
Rosas, O. V., & Serrano-Puche, J. (2018). News media and the emotional public sphere—Introduction. International Journal of Communication, 12(9). https://ijoc.org/index.php/ijoc/article/view/6785.
RTÉ. (2022). Biden says hate speech given “too much oxygen” online. RTÉ. https://www.rte.ie/news/us/2022/0916/1323692-biden-hate-speech/. Accessed 18 Sep 2022.
Ruohonen, J. (2021). A Comparative Study of Online Disinformation and Offline Protests (arXiv:2106.11000). arXiv.
Saha, M., Varghese, D., Bartindale, T., Thilsted, S. H., Ahmed, S. I., & Olivier, P. (2022). Towards Sustainable ICTD in Bangladesh: Understanding the Program and Policy Landscape and Its Implications for CSCW and HCI. Proceedings of the ACM on Human-Computer Interaction, 6(CSCW1), 126:1–126:31.
El Sawy, O. A., Malhotra, A., Park, Y., & Pavlou, P. A. (2010). Research Commentary—Seeking the Configurations of Digital Ecodynamics: It Takes Three to Tango. Information Systems Research, 21(4), 835–848.
Schimpfössl, E., Yablokov, I., Zeveleva, O., Fedirko, T., & Bajomi-Lazar, P. (2020). Self-censorship narrated: Journalism in Central and Eastern Europe. European Journal of Communication, 35(1), 3–11.
Schneider, C. Q., & Wagemann, C. (2012). Set-Theoretic Methods for the Social Sciences: A Guide to Qualitative Comparative Analysis. Cambridge University Press.
Serrano-Puche, J. (2021). Digital disinformation and emotions: Exploring the social risks of affective polarization. International Review of Sociology, 31(2), 231–245.
Shahbaz, A., & Funk, A. (2019). Social Media Surveillance. Freedom House. https://freedomhouse.org/report/freedom-on-the-net/2019/the-crisis-of-social-media/social-media-surveillance. Accessed 13 Feb 2023.
Shahbaz, A., & Funk, A. (2021). Freedom on the Net 2021: The Global Drive to Control Big Tech. Freedom House. https://freedomhouse.org/report/freedom-net/2021/global-drive-control-big-tech. Accessed 18 Sep 2022.
Shane, S. (2017, September 7). The Fake Americans Russia Created to Influence the Election. The New York Times. https://www.nytimes.com/2017/09/07/us/politics/russia-facebook-twitter-election.html. Accessed 18 Sep 2022.
Shu, K. (2022). Combating disinformation on social media: A computational perspective. BenchCouncil Transactions on Benchmarks, Standards and Evaluations, 2(1), 100035.
Soares, F. B., & Recuero, R. (2021). Hashtag Wars: Political Disinformation and Discursive Struggles on Twitter Conversations During the 2018 Brazilian Presidential Campaign. Social Media + Society, 7(2), 20563051211009070.
Sparkman, G., Geiger, N., & Weber, E. U. (2022). Americans experience a false social reality by underestimating popular climate policy support by nearly half. Nature Communications, 13(1), 1.
Staples, W. G. (2014). Everyday Surveillance: Vigilance and Visibility in Postmodern Life. Rowman & Littlefield.
Stewart, A. J., McCarty, N., & Bryson, J. J. (2020). Polarization under rising inequality and economic decline. Science Advances, 6(50), eabd4201.
Stokel-Walker, C. (2019, October 1). The EU doesn’t really have a plan to stop its elections being hacked. Wired UK. https://www.wired.co.uk/article/eu-parliament-elections-hacking. Accessed 22 Dec 2022.
Stuenkel, O. (2021, February 17). Brazil’s Polarization and Democratic Risks—Divisive Politics and Democratic Dangers in Latin America. Carnegie Endowment for International Peace. https://carnegieendowment.org/2021/02/17/brazil-s-polarization-and-democratic-risks-pub-83783. Accessed 6 Sep 2022.
Sunstein, C. R. (2018). #Republic: Divided Democracy in the Age of Social Media. Princeton University Press. https://press.princeton.edu/books/paperback/9780691180908/republic. Accessed 10 Sep 2022.
Tandoc, E. C., Lim, Z. W., & Ling, R. (2018). Defining “Fake News”: A typology of scholarly definitions. Digital Journalism, 6(2), 137–153.
Tennant, A., Penta, M., Tesio, L., Grimby, G., Thonnard, J.-L., Slade, A., Lawton, G., Simone, A., Carter, J., Lundgren-Nilsson, Å., Tripolski, M., Ring, H., Biering-Sørensen, F., Marincek, Č, Burger, H., & Phillips, S. (2004). Assessing and Adjusting for Cross-Cultural Validity of Impairment and Activity Limitation Scales through Differential Item Functioning within the Framework of the Rasch Model: The PRO-ESOR Project. Medical Care, 42(1), I37–I48.
The Express Tribune. (2022, September 4). Nobel laureates call for action on online disinformation. The Express Tribune. http://tribune.com.pk/story/2374780/nobel-laureates-call-for-action-on-online-disinformation. Accessed 6 Sep 2022.
Timofeev, I. N. (2014). World Order Or World Anarchy? A Look at the Modern System of International Relations (p. 44). https://russiancouncil.ru/upload/RIAC_WP_18_EN.pdf.
Törnberg, P. (2022). How digital media drive affective polarization through partisan sorting. Proceedings of the National Academy of Sciences, 119(42), e2207159119.
Trottier, D. (2016). Social Media as Surveillance: Rethinking Visibility in a Converging World. Routledge.
United Nations. (2023, January 28). Hate speech: A growing, international threat. UN News. https://news.un.org/en/story/2023/01/1132597. Accessed 13 Feb 2023.
Vese, D. (2022). Governing Fake News: The Regulation of Social Media and the Right to Freedom of Expression in the Era of Emergency. European Journal of Risk Regulation, 13(3), 477–513.
Del Vicario, M., Quattrociocchi, W., Scala, A., & Zollo, F. (2019). Polarization and Fake News: Early Warning of Potential Misinformation Targets. ACM Transactions on the Web, 13(2), 10:1–10:22.
Vraga, E. K., & Bode, L. (2020). Correction as a Solution for Health Misinformation on Social Media. American Journal of Public Health, 110(S3), S278–S280.
Waldron, J. (2012). The Harm in Hate Speech. Harvard University Press.
WEF. (2023, January 11). Global Risks Report 2023. World Economic Forum. https://www.weforum.org/reports/global-risks-report-2023/digest/. Accessed 14 Feb 2023.
Westcott, H., & Owen, S. (2013). Friendship and trust in the social surveillance network. Surveillance & Society, 11(3), 311–323.
Woodside, A. G. (2014). Embrace•perform•model: Complexity theory, contrarian case analysis, and multiple realities. Journal of Business Research, 67(12), 2495–2503.
Woodside, A. G. (2017). The Complexity Turn: Cultural, Management, and Marketing Applications. Springer.
Woszczynski, A. B., & Whitman, M. E. (2004). The problem of common method variance in IS research. In The Handbook of Information Systems Research (pp. 66–77). IGI Global. https://www.igiglobal.com/gateway/chapter/30343.
Zahidi, S. (2023, January 12). Solving Global Crises Requires A Return To Basics. Time. https://time.com/6246974/worlds-biggest-crises-protect-basic-needs/. Accessed 14 Feb 2023.
Zappin, A., Malik, H., Shakshuki, E. M., & Dampier, D. A. (2022). YouTube Monetization and Censorship by Proxy: A Machine Learning Prospective. Procedia Computer Science, 198, 23–32.
Zhai, Y., Liu, Y., Yang, S., & Song, C. (2022). A dual process model of popular support for Internet surveillance. Personality and Individual Differences, 187, 111409.
Zhu, Gaoming. (2019). Polarized China: The Effect of Media Censorship on People’s Ideology. Res Publica - Journal of Undergraduate Research, 24(1). https://digitalcommons.iwu.edu/respublica/vol24/iss1/12.
Zimdars, M., & McLeod, K. (2020). Fake News: Understanding Media and Misinformation in the Digital Age. MIT Press.
Acknowledgements
Satish Krishnan thanks the Indian Institute of Management Kozhikode’s Chair Associate Professorship for supporting this research.
Ethics declarations
Conflict of Interest
The authors declare that they have no conflict of interest.
Appendices
Appendix A List of countries for both main and supplemental analysis
Afghanistan, Albania, Algeria, Angola, Argentina, Armenia, Australia, Austria, Azerbaijan, Bahrain, Bangladesh, Barbados, Belarus, Belgium, Benin, Bhutan, Bolivia, Bosnia and Herzegovina, Botswana, Brazil, Bulgaria, Burkina Faso, Burma/Myanmar, Burundi, Cambodia, Cameroon, Canada, Cape Verde, Central African Republic, Chad, Chile, China, Colombia, Comoros, Costa Rica, Croatia, Cuba, Cyprus, Czech Republic, Democratic Republic of the Congo, Denmark, Djibouti, Dominican Republic, Ecuador, Egypt, El Salvador, Equatorial Guinea, Eritrea, Estonia, Eswatini, Ethiopia, Fiji, Finland, France, Gabon, Georgia, Germany, Ghana, Greece, Guatemala, Guinea, Guinea Bissau, Guyana, Haiti, Honduras, Hong Kong, Hungary, Iceland, India, Indonesia, Iran, Iraq, Ireland, Israel, Italy, Ivory Coast, Jamaica, Japan, Jordan, Kazakhstan, Kenya, Kosovo, Kuwait, Kyrgyzstan, Laos, Latvia, Lebanon, Lesotho, Libya, Lithuania, Luxembourg, Madagascar, Malawi, Malaysia, Maldives, Mali, Malta, Mauritania, Mauritius, Mexico, Moldova, Mongolia, Montenegro, Morocco, Mozambique, Namibia, Nepal, Netherlands, New Zealand, Nicaragua, Niger, Nigeria, North Korea, North Macedonia, Norway, Oman, Pakistan, Palestine/Gaza, Palestine/West Bank, Panama, Paraguay, Peru, Philippines, Poland, Portugal, Qatar, Republic of the Congo, Romania, Russia, Rwanda, Sao Tome and Principe, Saudi Arabia, Senegal, Serbia, Seychelles, Sierra Leone, Singapore, Slovakia, Slovenia, Solomon Islands, Somalia, Somaliland, South Africa, South Korea, South Sudan, Spain, Sri Lanka, Sudan, Suriname, Sweden, Switzerland, Syria, Taiwan, Tajikistan, Tanzania, Thailand, The Gambia, Timor Leste, Togo, Trinidad and Tobago, Tunisia, Turkey, Turkmenistan, Uganda, Ukraine, United Arab Emirates, United Kingdom, United States of America, Uruguay, Uzbekistan, Vanuatu, Venezuela, Vietnam, Yemen, Zambia, Zanzibar, Zimbabwe; N = 177.
Cite this article
Vasist, P.N., Chatterjee, D. & Krishnan, S. The Polarizing Impact of Political Disinformation and Hate Speech: A Cross-country Configural Narrative. Inf Syst Front 26, 663–688 (2024). https://doi.org/10.1007/s10796-023-10390-w