Evaluation of biases in language models is often limited to synthetically generated datasets. This dependence traces back to the need for prompt-style datasets to trigger specific behaviors of language models. In this paper, we address this gap by curating a prompt dataset of occupations drawn from real-world natural sentences in Wikipedia. We aim to understand the differences between using template-based prompts and natural-sentence prompts when studying gender-occupation biases in language models. We find that bias evaluations are highly sensitive to the design choices of template prompts, and we propose using natural-sentence prompts for systematic evaluations to step away from design choices that could introduce bias into the observations.