Panel 8C: Preventing and Countering Violent Extremism Online
Chair: Dr Simon Copeland (RUSI)
Content Moderation Interventions in the Age of Borderline Social Media Content: A Bot-Powered Approach to Influence User Attitude and Engagement with Borderline Content
Dr Kevin M. Blasiak (Vienna University of Technology) [Co-authors: Dr Marten Risius (The University of Queensland) & Prof Sabine Matook (The University of Queensland)]
Abstract: The term “borderline content” has become prevalent in conversations about pathways to radicalization on social media. This elusive form of content presents a distinct challenge to platforms, governments, and society, as it evades conventional content moderation and regulatory measures. Borderline content, cloaked in the guise of free speech, carries the potential to subtly propagate extremist ideologies, enabling extremists to broaden their audience. Consequently, it tests the effectiveness of existing countermeasures and requires a delicate balance between addressing extremism and safeguarding core civil values, including freedom of speech, privacy, and an open, censorship-free internet. However, neglecting to act upon borderline content can lead to real-world violence. Our response to this issue is an exploratory sequential mixed-method study introducing bot-based interventions for countering borderline content. The research incorporates a comprehensive literature review, digital ethnography, and two empirical lab experiments (n=441 and n=473) to conceptualize and evaluate the efficacy of bot interventions against borderline content. Our findings highlight the potential of bots to positively influence user attitudes when users are exposed to borderline content, and their capacity to shape content engagement. This study thus underscores the necessity of tailored interventions addressing “borderline content” and carries implications for developing and implementing bot-based countermeasures.
Countering Conspiracy Theories and Disinformation by Enhancing Digital Media Literacy: Meta-Analytic Evidence of Efficacy
Jack Springett-Gilling (Swansea University)
Abstract: The spread of conspiracy theories and disinformation has been closely linked with extremism, violence, and attacks on infrastructure. A range of measures have been deployed to tackle this issue, including interventions designed to enhance digital media literacy. These aim to improve the public’s ability to accurately discern harmful falsehoods, thus reducing both the perceived credibility of problem content and the likelihood of sharing it. This paper uses meta-analytic methods and evidence from 113 independent studies comprising 113,865 participants to assess whether these approaches achieve these aims. A variety of digital media literacy treatments, such as taught courses, gamified approaches, educational communications, and warnings, are found to be effective. Overall, treatments improved accurate discernment (d = 0.46), reduced perceived credibility (d = -0.33), and somewhat reduced the likelihood of sharing falsehoods (d = -0.17). There were no significant unwanted effects on the credibility of factual content (d = 0.02) or the likelihood of sharing factual content (d = -0.01). Furthermore, meta-regression reveals fine-grained differences in which treatments work best, which content themes they work on, and which target audiences they suit. These results are expected to be of significance to policy makers, social media companies, and practitioners supporting or delivering such interventions.