Digital Library
Decoding Antisemitism An AI-driven Study on Hate Speech & Imagery Online Second Discourse Report
Topic:
Antisemitism & Antizionism, Israel & Regional Politics
Principal Investigators:
Dr. Matthias J. Becker (Principal), Dr. Daniel Allington (Co-Investigator)
Study Date:
2022
Source:
Alfred Landecker Foundation, Center for Research on Antisemitism, King's College London (KCL), Technische Universität Berlin, Decoding Antisemitism
Key Findings:
For the second discourse report of the pilot project “Decoding Antisemitism,” the research team analysed in detail more than 15,000 comments, drawn mainly from the Facebook profiles of leading mainstream media outlets in Great Britain, France, and Germany.
In online responses to the escalation phase of the Arab-Israeli conflict in May, the results confirm that the conflict is a central facilitator of antisemitic expression. Even in politically moderate discourses, antisemitic topoi appear in 12.6% of the French, 13.6% of the German, and, at more than twice that rate, 26.9% of the British dataset.
Analysis of web comments on the Israeli vaccination campaign (in connection with the accusation that Palestinians were excluded from the vaccine rollout) again suggests that even media stories about Israeli logistical successes entirely unrelated to the conflict quickly become occasions for the articulation of antisemitic ideas and stereotypes. As with the escalation event, the analysis shows that antisemitism appears far more frequently in British social media debates than in their French and German counterparts, but it also indicates a marked difference in the types of stereotypes regularly deployed in the respective countries.
Three further discourse events at the national level concerned accusations of antisemitism against three prominent individuals from different political milieus and professional backgrounds: David Miller, Dieudonné M’bala M’bala and Hans-Georg Maaßen. Scrutiny of web users’ reactions to these cases points to the remarkable adaptability of antisemitism. At the same time, antisemitism in this context functions as part of a broader construction of enemy images, targeting electoral rivals, political or corporate elites as well as minority groups.
The datasets coded for this report will serve as the first training material for classifiers as the machine learning phase of the project gets underway. The continued development of such categorised datasets will help increase the accuracy of the algorithms under testing.
Methodology:
The project employs a mixed methods approach involving qualitative analysis of conceptual units, language, and visual elements to detect antisemitism in web comments across different languages and contexts.
One outcome of the project is an algorithm capable of automatically identifying antisemitic statements in web comments across three languages. This algorithm aims to enhance the efficiency and accuracy of removing antisemitic posts from online spaces, including those of leading media outlets and social media platforms. The algorithm's development involves both deductive categories based on existing research and inductive categories derived from ongoing analysis of web debates in different contexts. This iterative process ensures that the algorithm remains adaptable to evolving forms of antisemitism.
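The iterative interplay of deductive and inductive categories described above can be illustrated with a minimal, purely hypothetical sketch: a rule-based matcher that labels comments against a predefined category list, which analysts then extend when a new pattern is coded in live debates. All category names and keywords below are invented placeholders for illustration, not the project's actual taxonomy or algorithm.

```python
# Hypothetical sketch of deductive categories (derived from existing
# research) feeding a simple keyword-based labelling function.
DEDUCTIVE_CATEGORIES: dict[str, set[str]] = {
    "CONSPIRACY": {"puppet", "secretly controls"},   # placeholder keywords
    "EVIL_INTENT": {"bloodthirsty"},                 # placeholder keywords
}

def label_comment(comment: str, categories: dict[str, set[str]]) -> list[str]:
    """Return all category labels whose keywords occur in the comment."""
    text = comment.lower()
    return sorted(
        name
        for name, keywords in categories.items()
        if any(kw in text for kw in keywords)
    )

# Inductive step: analysts observe a newly coded pattern in ongoing
# web debates and extend the category set, so that later training
# material reflects the evolving forms of antisemitism.
categories = dict(DEDUCTIVE_CATEGORIES)
categories["NEW_CODED_PATTERN"] = {"example phrase"}  # invented example
```

In a real pipeline, such labelled examples would serve as training data for statistical classifiers rather than being applied as rules directly; the sketch only shows how a label inventory can grow iteratively.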
To ensure the algorithm's ethical appropriateness and acceptance, the project adopts the IHRA working definition of antisemitism, which is widely recognized and recommended by numerous states, organizations, and media outlets.
The project refines and expands the IHRA definition to create a detailed list of antisemitic stereotypes and concepts. This list serves as the basis for categorizing antisemitic content in web comments, allowing for the analysis of even linguistically complex manifestations of antisemitism. Additionally, linguistic-semiotic categories derived from pragmalinguistic research are used to complement the conceptual categories, enabling the algorithm to identify implicit antisemitic meanings in comments.
Semiotic and visual elements, such as emojis and memes, are also taken into account in online discourse analysis, especially on social media platforms. These elements contribute to the specification or expansion of meanings in comments.
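One way to picture the layered coding scheme described above, combining conceptual stereotype categories, linguistic-semiotic devices, and visual elements such as emojis, is a per-comment annotation record. The field names and the decision rule below are illustrative assumptions, not the project's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class CommentAnnotation:
    """Hypothetical annotation record for one web comment."""
    comment_id: str
    language: str                                               # e.g. "en", "fr", "de"
    conceptual_labels: list[str] = field(default_factory=list)  # stereotype categories
    linguistic_devices: list[str] = field(default_factory=list) # e.g. irony, allusion
    semiotic_elements: list[str] = field(default_factory=list)  # emojis, memes

    def is_positive_example(self) -> bool:
        # Illustrative rule: a comment counts as a positive training
        # example if at least one conceptual stereotype was coded.
        return bool(self.conceptual_labels)
```

Keeping the conceptual, linguistic, and semiotic layers in separate fields lets implicit cases (e.g. a stereotype conveyed only through an emoji plus an allusion) remain visible to later classifier training rather than collapsing everything into one binary flag.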
