Text classifiers are applied at scale in the form of one-size-fits-all solutions. However, many studies show that these classifiers are biased with respect to different languages and dialects. When measuring and discovering such biases, several gaps remain and should be addressed. First, ``Do language, dialect, and topical content vary across geographical regions?'' and second, ``If there are differences across regions, do they impact model performance?''. To address these questions, we introduce a novel dataset called GeoOLID with more than 14 thousand examples spanning 15 geographically and demographically diverse cities. We perform a comprehensive analysis of geographically related content and its impact on performance disparities of offensive language detection models. Overall, we find that current models do not generalize across locations. Likewise, we show that while offensive language models produce false positives on African American English, model performance is not correlated with each city's minority population proportions. Warning: This paper contains offensive language.