Abstract
Misinformation can be a direct cause of radicalization because of its tendency to trigger strong emotions. Aggressive messages that arouse anxiety can be highly persuasive: messages that point to a threat, particularly a sensitive and socially charged one, create a cognitive drive for more content about that threat and generate support for responsive action. This chapter critically examines the role that social media algorithms play in recommending extreme content. It traces how users become radicalized on TikTok and how the platform's recommendation algorithms drive this radicalization. It identifies the social, technological, and psychological factors that contribute to ideological radicalization on social media and proposes a conceptual lens through which to analyze and predict such radicalization. The results reveal that the pathways by which users reach far-right content are manifold and that much of this exposure can be attributed to platform recommendations operating through a positive feedback loop. The findings are consistent with the proposition that the generation and adoption of extreme content on TikTok largely reflect the user's input and interaction with the platform. It is argued that certain features of misinformation are likely to promote radicalization among users. The chapter concludes by showing how trends in artificial intelligence (AI)-based content systems are forged by an intricate combination of user interactions, platform intentions, and the dynamics of the broader AI ecosystem.
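The positive feedback loop described above can be illustrated with a deliberately simplified toy model. This is not TikTok's actual algorithm, whose internals are proprietary; the update rule, amplification factor, and parameter values below are all illustrative assumptions chosen only to show how mutual reinforcement between user engagement and recommendations can drift preferences toward one pole of a content spectrum.

```python
# Toy model of a recommender feedback loop (illustrative assumption,
# not a real platform's algorithm). Content is placed on a 0..1 axis,
# where higher values stand in for more extreme material.

def step(user_pref, recommended, learn_rate=0.2):
    """One round of the loop: the user's preference shifts toward what
    was recommended (engagement), and the recommender then slightly
    amplifies that signal in its next suggestion."""
    new_pref = user_pref + learn_rate * (recommended - user_pref)
    next_rec = min(1.0, new_pref * 1.1)  # recommender overshoots by 10%
    return new_pref, next_rec

def simulate(rounds=20, start_pref=0.1):
    """Run the loop and record the user's preference after each round."""
    pref, rec = start_pref, start_pref
    history = []
    for _ in range(rounds):
        pref, rec = step(pref, rec)
        history.append(pref)
    return history

history = simulate()
print(f"preference after 20 rounds: {history[-1]:.3f}")
```

Because each recommendation overshoots the user's current preference, the preference ratchets upward monotonically: neither side ever pulls the other back, which is the defining feature of a reinforcing spiral rather than a self-correcting one.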
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this chapter
Cite this chapter
Shin, D. (2024). Misinformation, Extremism, and Conspiracies: Amplification and Polarization by Algorithms. In: Artificial Misinformation. Palgrave Macmillan, Cham. https://doi.org/10.1007/978-3-031-52569-8_3
DOI: https://doi.org/10.1007/978-3-031-52569-8_3
Publisher Name: Palgrave Macmillan, Cham
Print ISBN: 978-3-031-52568-1
Online ISBN: 978-3-031-52569-8
eBook Packages: Literature, Cultural and Media Studies (R0)