20 Jun 2023

[Publication] Combating Hate: How Multilingual Transformers Can Help Detect Topical Hate Speech

Paper by Trishanta Srikissoon and Vukosi Marivate

Members

Trishanta Srikissoon, Vukosi Marivate

Abstract

Automated hate speech detection is important for protecting people’s dignity, online experiences, and physical safety in Society 5.0. Transformers are sophisticated pre-trained language models that can be fine-tuned for multilingual hate speech detection. Many studies consider this application as a binary classification problem. Additionally, research on topical hate speech detection uses target-specific datasets containing assertions about a particular group. In this paper we investigate multi-class hate speech detection using target-generic datasets. We assess the performance of mBERT and XLM-RoBERTa on high- and low-resource languages, with limited sample sizes and class imbalance. We find that our fine-tuned mBERT models are performant in detecting gender-targeted hate speech. Our Urdu classifier produces a 31% lift on the baseline model. We also present a pipeline for processing multilingual datasets for multi-class hate speech detection. Our approach could be used in future work on topically focused hate speech detection for other low-resource languages, particularly African languages, which remain under-explored in this domain.
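
For readers who want to try a comparable setup, the sketch below shows one way to fine-tune mBERT for multi-class hate speech classification with the Hugging Face transformers library. It is a minimal illustration only: the label set, file names, and hyperparameters are assumptions for the example, not the configuration or pipeline reported in the paper.

```python
# Minimal sketch: fine-tuning mBERT for multi-class hate speech detection
# with Hugging Face transformers. Labels, files, and hyperparameters are
# illustrative placeholders, not the paper's settings.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments, Trainer)

MODEL_NAME = "bert-base-multilingual-cased"  # mBERT; "xlm-roberta-base" for XLM-RoBERTa
NUM_LABELS = 3  # hypothetical classes, e.g. non-hate / gender-targeted hate / other hate

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME,
                                                           num_labels=NUM_LABELS)

# Hypothetical CSV files with "text" and "label" columns.
dataset = load_dataset("csv", data_files={"train": "train.csv",
                                          "validation": "dev.csv"})

def tokenize(batch):
    # Truncate/pad tweets or comments to a fixed length for batching.
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=128)

dataset = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="mbert-hate-speech",
    num_train_epochs=3,
    per_device_train_batch_size=16,
    evaluation_strategy="epoch",
)

trainer = Trainer(model=model, args=args,
                  train_dataset=dataset["train"],
                  eval_dataset=dataset["validation"])
trainer.train()
```

With class imbalance, as the abstract notes, macro-averaged F1 or a class-weighted loss would typically be preferred over plain accuracy when evaluating such a classifier.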

Publications

Trishanta Srikissoon and Vukosi Marivate. Combating Hate: How Multilingual Transformers Can Help Detect Topical Hate Speech. Proceedings of Society 5.0 Conference 2023, 2023. [NLP] [Paper URL] DOI: 10.29007/1cm6