Digital Library
Block/Filter/Notify Support for Targets of Online Hate Report Card
Topic:
Antisemitism & Antizionism
Principal Investigators:
ADL Center for Technology and Society
Study Date:
2023
Source:
Center for Technology and Society, Anti-Defamation League (ADL)
Key Findings:
This report evaluates nine major social media platforms on their effectiveness in protecting users from online hate and harassment. It assesses whether these platforms provide essential tools for users facing abuse, measuring them across 11 key features within five categories: (1) Communication with Targets; (2) Support for Targets of Networked Harassment; (3) Blocking Features; (4) Muting Features; and (5) Filtering Features.
Each platform was graded based on how many of these protective features it implemented:
- Twitch: B (9/11 features implemented) – The highest-scoring platform, Twitch has robust moderation tools and effective user controls.
- Instagram, TikTok, and YouTube: C (7/11 features implemented) – These platforms provide some protective tools but still lack essential functionalities.
- Twitter (X) and Facebook: C- (6/11 features implemented) – Although widely used, these platforms fall short in adequately supporting targets of online hate.
- Discord and Reddit: D (5/11 features implemented) – These platforms struggle to provide sufficient blocking, muting, and notification features.
- Snapchat: F (2/11 features implemented) – The worst performer, Snapchat offers minimal tools for users to protect themselves from harassment.
Recommendations:
The report highlights a gap between social media platforms' stated commitments to user safety and their actual implementation of support tools. Many platforms lack transparency in notifying users about actions taken in response to harassment, and there is inadequate support for users facing coordinated, large-scale hate attacks.
To improve user safety, ADL recommends that platforms:
(1) Adopt the 11 essential features across all categories.
(2) Engage with civil society groups and marginalized communities to improve safety measures.
(3) Invest in safety tools to enhance blocking, muting, and filtering capabilities.
(4) Increase transparency in how they handle abuse reports and communicate with affected users.
Methodology:
To evaluate whether and how people are being protected from online hate and harassment, the ADL Center for Technology and Society (CTS) reviewed how nine tech companies currently support people targeted by these harmful and traumatizing experiences on their platforms.
CTS reviewed recommendations from the following resources:
- PEN America’s report on social media abuse reporting (released in 2023)
- ADL’s social pattern library (released in 2022)
- UNESCO and ICFJ’s study of online violence against women journalists (released in 2022)
- The World Wide Web Foundation’s report on Online Gender-Based Violence (released in 2021)
- PEN America’s report on platform mitigation features for targets of online harassment (released in 2021)
- ADL’s qualitative study of targets of online abuse (released in 2019)
- Thousands of responses from targets of online hate, as reflected in ADL’s Online Hate and Harassment surveys from 2019 through 2023
Based on these reports, CTS compiled a list of 11 platform features necessary to protect targets of online hate, each of which had been recommended in at least four of these resources. Every feature is rooted in the experiences of targets of online harassment and the ways they wished to be supported when facing hate and harassment on social media.
