Lost in moderation: How commercial content moderation APIs over- and under-moderate group-targeted hate speech and linguistic variations

Date
2025
Editors
Authors
Hartmann, David
Oueslati, Amin
Staufer, Dimitri
Pohlmann, Lena
Munzert, Simon
Heuer, Hendrik
Journal title
Journal ISSN
Volume title
Publisher
Association for Computing Machinery
Abstract

Commercial content moderation APIs are marketed as scalable solutions to combat online hate speech. However, the reliance on these APIs risks both silencing legitimate speech, called over-moderation, and failing to protect online platforms from harmful speech, known as under-moderation. To assess such risks, this paper introduces a framework for auditing black-box NLP systems. Using the framework, we systematically evaluate five widely used commercial content moderation APIs. Analyzing five million queries based on four datasets, we find that APIs frequently rely on group identity terms, such as “black”, to predict hate speech. While OpenAI’s and Amazon’s services perform slightly better, all providers under-moderate implicit hate speech, which uses codified messages, especially against LGBTQIA+ individuals. Simultaneously, they over-moderate counter-speech, reclaimed slurs, and content related to Black, LGBTQIA+, Jewish, and Muslim people. We recommend that API providers offer better guidance on API implementation and threshold setting and more transparency on their APIs’ limitations. Warning: This paper contains offensive and hateful terms and concepts. We have chosen to reproduce these terms for reasons of transparency.

Description
Keywords
content moderation APIs; audit; AI transparency and accountability; human-AI interaction in content moderation; algorithmic bias in hate speech detection
Related resource
Citation
Hartmann, D., Oueslati, A., Staufer, D., Pohlmann, L., Munzert, S., & Heuer, H. (2025). Lost in moderation: How commercial content moderation APIs over- and under-moderate group-targeted hate speech and linguistic variations. Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems. https://doi.org/10.1145/3706598.3713998