We evaluate five English NLP benchmark datasets (available on the SuperGLUE leaderboard) for bias along multiple axes. The datasets are the following: Boolean Questions (BoolQ), CommitmentBank (CB), Winograd Schema Challenge (WSC), Winogender diagnostic (AXg), and Recognising Textual Entailment (RTE). Bias can be harmful, and it is known to be common in the data that ML models learn from. To mitigate bias in data, it is crucial to be able to estimate it objectively. We use bipol, a novel multi-axes bias metric with explainability, to quantify and explain how much bias exists in these datasets. Multilingual, multi-axes bias evaluation is not very common. Hence, we also contribute a new, large, labelled Swedish bias-detection dataset of about 2 million samples, translated from the English version. In addition, we contribute new multi-axes lexica for bias detection in Swedish. We train a SotA model on the new dataset for bias detection. We make the code, model, and new dataset publicly available.
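For illustration only, the snippet below is a minimal sketch of a lexicon-based, multi-axes bias score in the spirit of the metric described above; the toy axis lexica, the function name `axis_bias`, and the simple imbalance ratio are assumptions made for this example and are not the paper's exact bipol formulation.

```python
# Minimal sketch (an assumption, not the paper's exact bipol formulation):
# count axis-lexicon terms over a set of samples and report, per axis, how
# unbalanced those counts are across the axis's sensitive groups.
from collections import Counter
import re

# Hypothetical toy lexica; the English and Swedish lexica in the paper are far larger.
AXES = {
    "gender": {"female": {"she", "her", "woman"}, "male": {"he", "him", "man"}},
    "race":   {"group_a": {"african"}, "group_b": {"european"}},
}

def axis_bias(samples, axis_lexicon):
    """Return (max - min) / total group-term counts over the samples (0 = balanced)."""
    counts = Counter()
    for text in samples:
        tokens = re.findall(r"[a-z']+", text.lower())
        for group, terms in axis_lexicon.items():
            counts[group] += sum(tok in terms for tok in tokens)
    total = sum(counts.values())
    if total == 0:
        return 0.0
    return (max(counts.values()) - min(counts.values())) / total

if __name__ == "__main__":
    data = ["She is a doctor and he is a nurse.", "The woman spoke to her manager."]
    for axis, lexicon in AXES.items():
        print(axis, round(axis_bias(data, lexicon), 3))
```

On the example data, the gender axis yields a nonzero score because female-lexicon terms outnumber male-lexicon terms, while the race axis yields 0.0; this is only meant to convey the lexicon-counting intuition behind multi-axes evaluation, not the full metric.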