Very Fine People
Topic:
Antisemitism & Antizionism
Principal Investigators:
Libby Hemphill, Center for Technology and Society
Study Date:
2022
Source:
Anti-Defamation League (ADL)
Key Findings:
Social media platforms provide fertile ground for white supremacist networks, enabling far-right extremists to find one another, recruit and radicalize new members, and normalize their hate. Platforms such as Facebook and Twitter use content matching and machine learning to recognize and remove prohibited speech, but to do so, they must be able to recognize white supremacist speech and agree that it should be prohibited. Critics in the press and advocacy organizations still argue that social media companies haven’t been aggressive or broad enough in removing prohibited content. There is little public conversation, however, about what white supremacist speech looks like and whether white supremacists adapt or moderate their speech to avoid detection.
The researchers found that platforms often miss discussions of conspiracy theories about white genocide, Jewish power, and malicious grievances against Jews and people of color. Platforms also let decorous but defamatory speech persist. With all their resources, platforms could do better. With all their power and influence, platforms should do better.
Key ways that white supremacist speech is distinguishable from commonplace speech (a minimal detection sketch follows the list):
(1) White supremacists frequently referenced racial and ethnic groups using plural noun forms (e.g., Jews, whites). Pluralizing group nouns is not in itself offensive, but in conjunction with antisemitic content or conspiracy theories, this rhetoric dehumanizes targeted groups, creates artificial distinctions, and reinforces group thinking.
(2) They appended “white” to otherwise unmarked terms (e.g., power). In doing so, they racialized issues that are not explicitly about race and made whiteness seem at risk. By adding “white” to so many terms, they centered whiteness, and themselves as white people, in every conversation.
(3) They used less profanity than is common on social media. When white supremacists are criticized, they claim they are being civil and focus on others’ tone rather than their arguments. Avoiding profanity also allows them to avoid simplistic detection based on “offensive” language and to appear respectable.
(4) Their posts were consistent across extremist and mainstream platforms, indicating that they don’t modify their speech for general audiences. Their linguistic strategies (plural noun forms, appending “white,” and avoiding profanity) are similar in public conversations on Reddit and Twitter and in in-group conversations on extremist sites such as Stormfront. These consistent strategies should make white supremacist posts and language more readily identifiable.
(5) Their complaints and messages stayed consistent from year to year. Their particular grievances and bugaboos change, but their general refrains do not. For instance, they discuss white decline (lately in the form of “Great Replacement” theory, codified in 2011), conspiracy theories about Jews, and pro-Trump messaging. The consistency of these topics makes them readily identifiable.
(6) They racialized Jews; they described Jews in racial rather than religious terms. Their conversations about race and Jews overlap, but their conversations about church, religion, and Jews do not.
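The first two markers lend themselves to automated checks. Below is a minimal sketch, not the study’s actual pipeline, that counts plural group nouns and “white” used as a noun modifier, using spaCy’s part-of-speech tags and dependency labels; the GROUP_NOUNS watch list is a hypothetical placeholder for a curated lexicon.

```python
# Minimal marker-counting sketch (not the study's pipeline).
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

# Hypothetical watch list; a real system would use a curated lexicon.
GROUP_NOUNS = {"jew", "white", "immigrant", "globalist"}

def marker_counts(text: str) -> dict:
    """Count plural group nouns and 'white' used as a noun modifier."""
    doc = nlp(text)
    plural_group = 0
    white_modifier = 0
    for token in doc:
        # NNS/NNPS are plural noun tags; compare the lemma (plus a crude
        # de-pluralized fallback) against the watch list.
        if token.tag_ in ("NNS", "NNPS"):
            base = token.lemma_.lower()
            if base in GROUP_NOUNS or base.rstrip("s") in GROUP_NOUNS:
                plural_group += 1
        # "white" attached to a noun as a modifier, e.g., "white power".
        if token.lower_ == "white" and token.dep_ in ("amod", "compound"):
            white_modifier += 1
    return {"plural_group_nouns": plural_group, "white_modifiers": white_modifier}

print(marker_counts("They talk about white power and blame the Jews."))
```

As the findings stress, neither marker is offensive on its own; counts like these would feed a downstream classifier alongside content signals rather than trigger removal directly.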
ADL recommends that platforms use the subtle but detectable differences in white supremacist speech to improve their automated identification methods:
(1) Improve the enforcement of their own policies.
(2) Use data from extremist sites to create detection models. Automated approaches should also use computational models and workflows specific to extremist speech.
(3) Look for specific linguistic markers (plural noun forms, whiteness). Platforms need to take specific steps when preparing (that is, preprocessing) language data to capture these differences; the sketch after this list illustrates (3) and (4) together.
(4) De-emphasize profanity in toxicity detection. Platforms need to focus on the message rather than the words.
(5) Train platform moderators and algorithms to recognize that white supremacists’ conversations are dangerous and hateful. Remediations include removing violative content and referring incidents to relevant authorities when appropriate.
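Recommendations (3) and (4) can be read as concrete modeling choices. The sketch below is a hypothetical illustration, not any platform’s production system: a bag-of-words toxicity classifier that applies no stemming or lemmatization (so plural group nouns survive as distinct features) and drops a profanity lexicon from the feature space, forcing the model to attend to the message rather than the words. The training texts and the tiny profanity list are placeholders.

```python
# Hypothetical illustration of recommendations (3) and (4); not a production system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

PROFANITY = ["damn", "hell"]  # placeholder; real lexicons are much longer

# No stemming or lemmatization, so plural group nouns ("jews", "whites")
# remain distinct features (recommendation 3). Profanity is excluded from
# the vocabulary entirely, one way to de-emphasize it (recommendation 4).
vectorizer = TfidfVectorizer(lowercase=True, stop_words=PROFANITY)
model = make_pipeline(vectorizer, LogisticRegression(max_iter=1000))

# Placeholder labeled data: 1 = violative, 0 = benign.
texts = [
    "the usual conspiracy claim that jews control the banks",
    "whites must wake up before the great replacement finishes us",
    "damn that game last night was a hell of a ride",
    "picked up groceries and walked the dog",
]
labels = [1, 1, 0, 0]

model.fit(texts, labels)
print(model.predict(["they claim jews are replacing whites"]))  # expected: [1]
```

Dropping profanity from the vocabulary is the bluntest option; a softer variant keeps those features but down-weights them so they cannot dominate the toxicity score.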
Methodology:
ADL researchers set out to better understand what constitutes English-language white supremacist speech online and how it differs from general or non-extremist speech. They also sought to determine whether and how white supremacists adapt their speech to avoid detection.
The study relied on commonly available computing resources and existing techniques, including machine learning and dynamic topic modeling. The researchers analyzed existing datasets of known white supremacist speech (text only) and compared those speech patterns to general, non-extremist samples of online speech. Prior work confirms that extremists use social media to connect and radicalize and that they use specific linguistic markers to signal group membership. The researchers sampled data from users of the white nationalist website Stormfront and from a network of “alt-right” users on Twitter, then compared those posts to typical, non-extremist Reddit comments.
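The report names dynamic topic modeling but not its exact tooling. The sketch below is a minimal stand-in using gensim’s LdaSeqModel, showing how topics can be tracked across time slices to test the year-to-year consistency described in the findings; the six toy documents and the two time slices are placeholders.

```python
# Minimal dynamic topic modeling stand-in (tooling assumption: gensim's LdaSeqModel).
from gensim.corpora import Dictionary
from gensim.models import LdaSeqModel

# Placeholder tokenized posts, ordered by period: first three from one year,
# last three from the next.
docs = [
    ["white", "genocide", "replacement", "media"],
    ["jews", "media", "control", "banks"],
    ["white", "power", "decline", "replacement"],
    ["great", "replacement", "whites", "decline"],
    ["jews", "banks", "conspiracy", "control"],
    ["white", "identity", "decline", "power"],
]
dictionary = Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]

# time_slice lists the number of documents per period; the model lets each
# topic's word distribution drift between slices.
ldaseq = LdaSeqModel(corpus=corpus, id2word=dictionary,
                     time_slice=[3, 3], num_topics=2)

# Compare a topic's top terms across periods: stable terms reflect the
# year-to-year consistency the study reports.
for topic in range(2):
    print("topic", topic, "period 0:", ldaseq.print_topic(topic=topic, time=0))
    print("topic", topic, "period 1:", ldaseq.print_topic(topic=topic, time=1))
```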
