With the rise of online hate speech, the automatic detection of hate speech and offensive text as a natural language processing task is gaining popularity. However, very little research has been done on detecting unintended social bias in these toxic language datasets. This paper introduces ToxicBias, a new dataset curated from the existing dataset of the Kaggle competition "Jigsaw Unintended Bias in Toxicity Classification". We aim to detect social biases, their categories, and the targeted groups. The dataset contains instances annotated for five bias categories, viz., gender, race/ethnicity, religion, political, and LGBTQ. We train transformer-based models on our curated dataset and report baseline performance for bias identification, target generation, and bias implications. Model biases and their mitigation are also discussed in detail. Our study motivates a systematic extraction of social bias data from toxic language datasets. All code and data used for the experiments in this work are publicly available.