As language models grow in popularity, their biases across all possible markers of demographic identity should be measured and addressed in order to avoid perpetuating existing societal harms. Many datasets for measuring bias currently exist, but they are restricted in their coverage of demographic axes and are commonly used with preset bias tests that presuppose which types of biases the models exhibit. In this work, we present a new, more inclusive dataset, HOLISTICBIAS, which consists of nearly 600 descriptor terms across 13 different demographic axes. HOLISTICBIAS was assembled through a participatory process, in conversation with experts and community members with lived experience. We use these descriptors combinatorially in a set of bias measurement templates to produce over 450,000 unique sentence prompts, and we use these prompts to explore, identify, and reduce novel forms of bias in several generative models. We demonstrate that our dataset is highly effective for measuring previously undetectable biases in token likelihoods and generations from language models, as well as in an offensiveness classifier. We will invite additions and amendments to the dataset, and we hope it will serve as a basis for easy-to-use and more standardized methods for evaluating bias in NLP models.
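The combinatorial construction of prompts can be illustrated with a minimal sketch: templates containing a noun-phrase slot are filled with every descriptor–noun pairing, and the Cartesian product of descriptors, nouns, and templates yields the full prompt set. The descriptor terms, nouns, and template strings below are hypothetical stand-ins for illustration, not the actual HOLISTICBIAS lists.

```python
from itertools import product

# Illustrative placeholders only; the real dataset uses ~600 descriptors,
# a larger noun list, and a curated set of templates.
descriptors = ["left-handed", "middle-aged", "Deaf"]
nouns = ["person", "woman", "man"]
templates = [
    "I am a {noun_phrase}.",
    "Hi! I'm a {noun_phrase}.",
    "I wanted to share with you that I'm a {noun_phrase}.",
]

def noun_phrase(descriptor: str, noun: str) -> str:
    """Combine a descriptor with a person noun, e.g. 'Deaf woman'."""
    return f"{descriptor} {noun}"

# Cartesian product of descriptors x nouns x templates produces the prompts;
# at the dataset's scale this yields hundreds of thousands of unique sentences.
prompts = [
    template.format(noun_phrase=noun_phrase(d, n))
    for d, n, template in product(descriptors, nouns, templates)
]

for p in prompts[:5]:
    print(p)
```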