Online hatred is a growing concern on many social media platforms. To address this issue, these platforms have introduced moderation policies for such content and employ moderators who review posts that violate these policies and take appropriate action. Researchers in the abusive language detection domain also conduct studies to detect such content more effectively. Although there is extensive research on abusive language detection in English, there is a lacuna for low-resource languages such as Hindi and Urdu. In the FIRE 2021 shared task "HASOC - Abusive and Threatening language detection in Urdu", the organizers propose a dataset for abusive language detection in Urdu along with threatening language detection. In this paper, we explore several machine learning models, such as XGBoost, LGBM, and m-BERT-based models, for abusive and threatening content detection in Urdu based on the shared task. We observe that a Transformer model specifically trained on an abusive language dataset in Arabic yields the best performance. Our models secured the first position for both abusive and threatening content detection, with F1 scores of 0.88 and 0.54, respectively.
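As a minimal sketch of the transfer-learning setup described above, the snippet below fine-tunes a BERT-style checkpoint for binary abusive-content classification with the Hugging Face Trainer API. The checkpoint name and the dataset wrapper are placeholders for illustration, not the exact model or preprocessing used in the paper; an Arabic abusive-language checkpoint or m-BERT can be slotted in via `MODEL_NAME`.

```python
# Sketch: fine-tuning a transformer for binary abusive-language classification.
# MODEL_NAME is a placeholder, not the checkpoint used in the paper.
import torch
from torch.utils.data import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

MODEL_NAME = "bert-base-multilingual-cased"  # placeholder checkpoint

class UrduAbuseDataset(Dataset):
    """Wraps tokenized posts and 0/1 abuse labels for the Trainer API."""
    def __init__(self, texts, labels, tokenizer, max_len=128):
        self.enc = tokenizer(texts, truncation=True, padding="max_length",
                             max_length=max_len, return_tensors="pt")
        self.labels = torch.tensor(labels)

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        item = {k: v[idx] for k, v in self.enc.items()}
        item["labels"] = self.labels[idx]
        return item

def fine_tune(train_texts, train_labels, val_texts, val_labels):
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForSequenceClassification.from_pretrained(
        MODEL_NAME, num_labels=2)  # binary: abusive vs. not abusive
    args = TrainingArguments(output_dir="urdu-abuse-model",
                             num_train_epochs=3,
                             per_device_train_batch_size=16,
                             learning_rate=2e-5)
    trainer = Trainer(model=model, args=args,
                      train_dataset=UrduAbuseDataset(train_texts, train_labels, tokenizer),
                      eval_dataset=UrduAbuseDataset(val_texts, val_labels, tokenizer))
    trainer.train()
    return trainer
```

The same skeleton applies to the threatening-language subtask by swapping in the corresponding labels; hyperparameters shown here are common defaults, not the tuned values from the paper.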