Offensive language detection is increasingly crucial for maintaining a civilized social media platform and deploying pre-trained language models. However, this task in Chinese remains under-explored due to the scarcity of reliable datasets. To this end, we propose a benchmark for Chinese offensive language analysis, COLD, which includes a Chinese Offensive Language Dataset (COLDATASET) and a baseline detector (COLDETECTOR) trained on this dataset. We show that the COLD benchmark advances Chinese offensive language detection, a task that remains challenging for existing resources. We then deploy COLDETECTOR and conduct detailed analyses of popular Chinese pre-trained language models. We first analyze the offensiveness of existing generative models and show that they inevitably produce offensive content to varying degrees. We further investigate the factors that influence offensive generations and find that anti-bias content, as well as keywords referring to certain groups or expressing negative attitudes, triggers offensive outputs more easily.
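To make the baseline-detector idea concrete, the sketch below fine-tunes a Chinese BERT encoder as a binary offensive/non-offensive classifier. It is an illustrative assumption, not the authors' exact recipe: the model name (bert-base-chinese), hyperparameters, and data handling are placeholders, and the real COLDETECTOR training setup may differ.

```python
# Minimal sketch: fine-tune a Chinese BERT encoder for binary offensive
# language classification (COLDETECTOR-style baseline). Model choice and
# hyperparameters are assumptions for illustration only.
import torch
from torch.utils.data import Dataset, DataLoader
from transformers import BertTokenizerFast, BertForSequenceClassification


class OffensiveTextDataset(Dataset):
    """Wraps (text, label) pairs; label 1 = offensive, 0 = non-offensive."""

    def __init__(self, texts, labels, tokenizer, max_len=128):
        self.enc = tokenizer(texts, truncation=True, padding="max_length",
                             max_length=max_len, return_tensors="pt")
        self.labels = torch.tensor(labels)

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, i):
        item = {k: v[i] for k, v in self.enc.items()}
        item["labels"] = self.labels[i]
        return item


def train_detector(texts, labels, epochs=2, lr=2e-5, batch_size=16):
    # Hypothetical training loop; swap in the actual COLDATASET splits here.
    tokenizer = BertTokenizerFast.from_pretrained("bert-base-chinese")
    model = BertForSequenceClassification.from_pretrained(
        "bert-base-chinese", num_labels=2)
    loader = DataLoader(OffensiveTextDataset(texts, labels, tokenizer),
                        batch_size=batch_size, shuffle=True)
    optim = torch.optim.AdamW(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for batch in loader:
            optim.zero_grad()
            loss = model(**batch).loss  # cross-entropy over the two classes
            loss.backward()
            optim.step()
    return tokenizer, model
```

A detector trained this way can then be run over samples generated by Chinese pre-trained language models to estimate how often their outputs are flagged as offensive, which is the kind of analysis the abstract describes.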