Due to the exponentially increasing reach of social media, it is essential to focus on its negative aspects, as it can potentially divide society and incite people to violence. In this paper, we present our system description for the shared task ComMA@ICON, in which each sentence must be classified by its level of aggression and by whether it is gender-biased or communally biased. These three phenomena are among the primary causes of significant problems in society. As team Hypers, we propose an approach that combines different pretrained models with attention and mean pooling methods. We achieved Rank 3 with an Instance F1 score of 0.223 on Bengali, Rank 2 with 0.322 on the multilingual set, Rank 4 with 0.129 on Meitei, and Rank 5 with 0.336 on Hindi. The source code and the pretrained models of this work can be found here.
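To make the pooling step concrete, the following is a minimal sketch of mean pooling over a pretrained model's token embeddings, one plausible reading of the approach summarized above; the checkpoint name and the way sentence vectors are consumed are illustrative assumptions, not the exact configuration used in our system.

```python
# Minimal sketch: mean pooling of token embeddings from a pretrained encoder.
# "bert-base-multilingual-cased" is an assumed placeholder checkpoint, not
# necessarily the model used in the submitted system.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "bert-base-multilingual-cased"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
encoder = AutoModel.from_pretrained(MODEL_NAME)


def mean_pool(last_hidden_state: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    """Average token embeddings while ignoring padding positions."""
    mask = attention_mask.unsqueeze(-1).float()       # (batch, seq_len, 1)
    summed = (last_hidden_state * mask).sum(dim=1)    # (batch, hidden)
    counts = mask.sum(dim=1).clamp(min=1e-9)          # avoid division by zero
    return summed / counts


texts = ["example sentence one", "another example sentence"]
inputs = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
with torch.no_grad():
    outputs = encoder(**inputs)
sentence_vectors = mean_pool(outputs.last_hidden_state, inputs["attention_mask"])
# In a full system, sentence_vectors would feed task-specific classification
# heads (aggression level, gender bias, communal bias).
```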