It has been observed that large-scale language models capture undesirable societal biases, e.g. relating to race and gender; yet religious bias has been relatively unexplored. We demonstrate that GPT-3, a state-of-the-art contextual language model, captures persistent Muslim-violence bias. We probe GPT-3 in various ways, including prompt completion, analogical reasoning, and story generation, to understand this anti-Muslim bias, demonstrating that it appears consistently and creatively in different uses of the model and that it is severe even compared to biases about other religious groups. For instance, "Muslim" is analogized to "terrorist" in 23% of test cases, while "Jewish" is mapped to "money" in 5% of test cases. We quantify the positive distraction needed to overcome this bias with adversarial text prompts, and find that use of the most positive 6 adjectives reduces violent completions for "Muslims" from 66% to 20%, but which is still higher than for other religious groups.
翻译:人们注意到,大规模语言模式反映了不受欢迎的社会偏见,例如种族和性别方面的偏见;但宗教偏见相对而言没有被探索。我们证明GPT-3,一种最先进的背景语言模式,捕捉了持续的穆斯林暴力偏见。我们以各种方式探测GPT-3,包括迅速完成、模拟推理和故事生成,以理解这种反穆斯林偏见,表明这种偏见在不同用途上是一贯和创造性的,甚至与其他宗教团体的偏见相比,它也非常严重。例如,在23%的测试案例中,“穆斯林”被比作“恐怖主义 ”,而在5%的测试案例中,“犹太人”被比作“金钱 ” 。我们量化了克服这种偏见所需的积极分心,用对抗性文字提示,发现使用最正面的6个形容词将“穆斯林”的暴力完成率从66%降低到20%,但仍然高于其他宗教团体。