With language models becoming increasingly ubiquitous, it has become essential to address their inequitable treatment of diverse demographic groups and factors. Most research on evaluating and mitigating fairness harms has concentrated on English, while multilingual models and non-English languages have received comparatively little attention. In this paper, we survey fairness in multilingual and non-English contexts, highlighting the shortcomings of current research and the difficulties faced by methods designed for English. We contend that the multitude of diverse cultures and languages across the world makes it infeasible to achieve comprehensive coverage when constructing fairness datasets. Thus, the measurement and mitigation of biases must evolve beyond current dataset-driven practices, which focus narrowly on specific dimensions and types of bias and are therefore impossible to scale across languages and cultures.