In order for NLP technology to be widely applicable, fair, and useful, it needs to serve a diverse set of speakers across the world's languages, be equitable, i.e., not unduly biased towards any particular language, and be inclusive of all users, particularly in low-resource settings where compute constraints are common. In this paper, we propose an evaluation paradigm that assesses NLP technologies across all three dimensions. While diversity and inclusion have received attention in recent literature, equity is currently unexplored. We propose to address this gap using the Gini coefficient, a well-established metric used for estimating societal wealth inequality. Using our paradigm, we highlight the distressed state of current technologies for Indian (IN) languages (a linguistically large and diverse set, with a varied speaker population), across all three dimensions. To improve upon these metrics, we demonstrate the importance of region-specific choices in model building and dataset creation, and more importantly, propose a novel, generalisable approach to optimal resource allocation during fine-tuning. Finally, we discuss steps to mitigate these biases and encourage the community to employ multi-faceted evaluation when building linguistically diverse and equitable technologies.
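The Gini coefficient mentioned above can be computed directly from a set of per-language performance scores. The sketch below is a generic illustration of the standard formula (based on the mean-difference formulation over sorted values), not the paper's exact implementation; the function name and the example scores are illustrative assumptions.

```python
def gini(values):
    """Gini coefficient of non-negative values.

    0 means perfect equality (all languages served equally well);
    values approaching 1 indicate maximal inequality.
    Uses the sorted-values formulation:
        G = (2 * sum_i i*x_(i)) / (n * sum_i x_i) - (n + 1) / n
    where x_(i) are the values sorted ascending and i is 1-indexed.
    """
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0  # degenerate case: no data or all zeros
    weighted = sum((i + 1) * x for i, x in enumerate(xs))  # 1-indexed rank * value
    return (2 * weighted) / (n * total) - (n + 1) / n

# Hypothetical per-language accuracy scores for illustration only:
equal_scores = [0.8, 0.8, 0.8, 0.8]      # equitable across languages
skewed_scores = [0.9, 0.2, 0.1, 0.05]    # strongly favours one language
print(gini(equal_scores))   # 0.0 -- perfectly equitable
print(gini(skewed_scores))  # closer to 1 -- inequitable
```

Under this reading, a lower Gini over per-language scores indicates a more equitable technology, which is how the metric connects wealth-inequality measurement to cross-lingual evaluation.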