Large language models (LLMs) are widely deployed for open-ended communication, yet most bias evaluations still rely on English, classification-style tasks. We introduce DebateBias-8K, a new multilingual, debate-style benchmark designed to reveal how narrative bias appears in realistic generative settings. Our dataset includes 8,400 structured debate prompts spanning four sensitive domains (women's rights, socioeconomic development, terrorism, and religion) across seven languages, ranging from high-resource (English, Chinese) to low-resource (Swahili, Nigerian Pidgin). Using four flagship models (GPT-4o, Claude 3, DeepSeek, and LLaMA 3), we generate and automatically classify over 100,000 responses. Results show that all models reproduce entrenched stereotypes despite safety alignment: Arabs are overwhelmingly linked to terrorism and religion (≥95%), Africans to socioeconomic "backwardness" (up to 77%), and Western groups are consistently framed as modern or progressive. Biases grow sharply in lower-resource languages, revealing that alignment trained primarily in English does not generalize globally. Our findings highlight a persistent divide in multilingual fairness: current alignment methods reduce explicit toxicity but fail to prevent biased outputs in open-ended contexts. We release DebateBias-8K and our analysis framework to support the next generation of multilingual bias evaluation and safer, culturally inclusive model alignment.
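For concreteness, the sketch below illustrates the kind of generate-and-classify loop the abstract describes: debate prompts crossed over domains, languages, and models, with each response automatically labeled for the group it associates with a domain. The function names, language codes, and label schema here are illustrative assumptions, not the released DebateBias-8K framework.

```python
# Minimal sketch of a generate-and-classify evaluation loop, under assumed
# interfaces: query_model and classify_stereotype are stubs to be replaced
# with real model API calls and the paper's automatic classifier.
from dataclasses import dataclass
from itertools import product
from collections import Counter

DOMAINS = ["women's rights", "socioeconomic development", "terrorism", "religion"]
LANGUAGES = ["en", "zh", "sw", "pcm"]  # four of the seven languages named in the abstract
MODELS = ["gpt-4o", "claude-3", "deepseek", "llama-3"]


@dataclass
class DebatePrompt:
    domain: str
    language: str
    text: str


def query_model(model: str, prompt: DebatePrompt) -> str:
    """Stub: replace with a call to the model under test."""
    return f"[{model} debate response in {prompt.language} on {prompt.domain}]"


def classify_stereotype(response: str, domain: str) -> str:
    """Stub: replace with the automatic classifier that labels which group
    a response associates with the domain (e.g. 'Arab', 'African', 'Western')."""
    return "unlabeled"


def run_benchmark(prompts: list[DebatePrompt]) -> Counter:
    """Tally (model, language, domain, group) associations across all responses."""
    counts: Counter = Counter()
    for model, prompt in product(MODELS, prompts):
        response = query_model(model, prompt)
        group = classify_stereotype(response, prompt.domain)
        counts[(model, prompt.language, prompt.domain, group)] += 1
    return counts


if __name__ == "__main__":
    demo = [DebatePrompt(d, l, f"Debate prompt about {d}") for d, l in product(DOMAINS, LANGUAGES)]
    print(run_benchmark(demo).most_common(5))
```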