Numerous machine learning (ML) and deep learning (DL) approaches have been proposed to utilize textual data from social media for the analysis of anti-social behavior such as cyberbullying, fake news, and hate speech, mainly for high-resource languages such as English. However, despite their rich diversity and millions of native speakers, some languages such as Bengali remain under-resourced due to a lack of computational resources for natural language processing (NLP). As in other languages, Bengali social media content often combines images with text (e.g., multimodal content is posted on Facebook by embedding short texts into images); the textual data alone is not enough to judge such content, since the image may provide additional context needed for a proper judgment. This paper addresses hate speech detection from multimodal Bengali memes and texts. We prepared a multimodal hate speech dataset, the first of its kind for Bengali, which we use to train state-of-the-art neural architectures (e.g., Bi-LSTM/Conv-LSTM with word embeddings, ConvNets + pre-trained language models (PLMs) such as monolingual Bangla BERT, multilingual BERT-cased/uncased, and XLM-RoBERTa) that jointly analyze textual and visual information for hate speech detection. Conv-LSTM and XLM-RoBERTa performed best on texts, yielding F1 scores of 0.78 and 0.82, respectively. For memes, ResNet-152 and DenseNet-161 yielded F1 scores of 0.78 and 0.79, respectively. For multimodal fusion, XLM-RoBERTa + DenseNet-161 performed best, yielding an F1 score of 0.83. Our study suggests that the text modality is the most useful for hate speech detection, while memes are moderately useful. Further, to foster reproducible research, we plan to make the datasets, source code, models, and notebooks publicly available.
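To illustrate the kind of multimodal fusion described above, the following is a minimal sketch of combining XLM-RoBERTa text features with DenseNet-161 image features for binary classification. It is not the authors' exact architecture; the choice of the base XLM-RoBERTa checkpoint, mean pooling, the 256-dimensional hidden layer, and the class name `FusionClassifier` are illustrative assumptions.

```python
# Sketch: late fusion of XLM-RoBERTa text features and DenseNet-161 image
# features for hate speech classification (assumed, simplified architecture).
import torch
import torch.nn as nn
from torchvision import models
from transformers import AutoModel, AutoTokenizer


class FusionClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Pre-trained multilingual text encoder (768-dim hidden states).
        self.text_encoder = AutoModel.from_pretrained("xlm-roberta-base")
        # Pre-trained image encoder; replacing the classifier head with an
        # identity exposes the 2208-dim DenseNet-161 feature vector.
        self.image_encoder = models.densenet161(weights="DEFAULT")
        self.image_encoder.classifier = nn.Identity()
        # Simple fusion head over the concatenated features (sizes assumed).
        self.head = nn.Sequential(
            nn.Linear(768 + 2208, 256),
            nn.ReLU(),
            nn.Dropout(0.2),
            nn.Linear(256, num_classes),
        )

    def forward(self, input_ids, attention_mask, images):
        # Mean-pool token embeddings as a simple sentence representation.
        hidden = self.text_encoder(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state
        mask = attention_mask.unsqueeze(-1).float()
        text_feat = (hidden * mask).sum(1) / mask.sum(1).clamp(min=1e-9)
        img_feat = self.image_encoder(images)            # (batch, 2208)
        fused = torch.cat([text_feat, img_feat], dim=1)  # feature concatenation
        return self.head(fused)


# Usage sketch with a dummy batch.
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
batch = tokenizer(["example meme caption"], return_tensors="pt",
                  padding=True, truncation=True)
images = torch.randn(1, 3, 224, 224)  # ImageNet-normalized meme image
model = FusionClassifier()
logits = model(batch["input_ids"], batch["attention_mask"], images)
```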